From the lab



Experiment

Shigeru Estamoto

Learning machine learning while getting a second opinion on how complex a software development task is? We set out to apply basic AI to a daily work task — in the name of exploring.

Friday 24 January


Introduction

I’ve never dabbled in machine learning or AI, but it’s been on my radar for quite some time. Going into this we had some varying ideas of what we wanted to achieve during the week. Whether it was generating musical patterns using MIDI as a learning dataset, or classifying a user story’s complexity based on its attributes, we all had a general idea of what we wanted to get out of it: a basic understanding of what machine learning is and how to apply it.

Early experiments

Machine learning is hard. To build some basic knowledge of how to work with it, we set out to create a few small applications. These ranged from basic things like recognising patterns in numbers to sillier things, like teaching a model to tell the sound of people talking from the noise an electric drill makes. For science.
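The post doesn’t show these warm-up apps, but to give a flavour of the “recognising patterns in numbers” kind of exercise, here is a minimal sketch in plain Python: fitting a line to data by gradient descent. The data and learning rate are made up for illustration; the team’s actual exercises aren’t described in detail.

```python
# Toy "recognise a pattern in numbers" exercise: learn y = w*x + b
# by gradient descent, where the hidden pattern is y = 2x + 1.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]  # the pattern the model should discover

w, b = 0.0, 0.0
lr = 0.05  # learning rate (illustrative value)

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # prints: 2.0 1.0
```

Frameworks like TensorFlow automate exactly this loop — computing gradients and nudging parameters — just at a vastly larger scale.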

Data Collection & Classification

We use Pivotal Tracker a lot in our development process. It’s a tool, like many others, for keeping track of your backlog of tasks within a project. Many of these tasks are given a rough estimate of complexity on the Fibonacci scale (0, 1, 2, 3, 5, 8, …). We thought maybe we could teach a machine to estimate these tasks for us, based on their titles.

We exported all the historical data we had, massaged it a bit, and generated a dataset of titles and their estimation points. We then used Python with TensorFlow to train a model to recognise a pattern between the text and the number. There is a lot of trial and error in applied machine learning: many parameters can be tweaked, each yielding different results.

A successful estimation of a couple of story titles
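The post doesn’t include the export script, but the “massaged it a bit” step might look something like this sketch. The column names `name` and `estimate` are assumptions about what a Pivotal Tracker CSV export contains, not confirmed by the post:

```python
import csv

FIBONACCI = {0, 1, 2, 3, 5, 8}

def load_dataset(path):
    """Read a Tracker-style CSV export and keep (title, points) pairs.

    Stories without an estimate (icebox items, chores, etc.) are dropped,
    as are any estimates outside the Fibonacci scale.
    """
    pairs = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            title = (row.get("name") or "").strip()
            points = (row.get("estimate") or "").strip()
            if title and points.isdigit() and int(points) in FIBONACCI:
                pairs.append((title, int(points)))
    return pairs
```

The resulting pairs can then be fed to whatever text-classification model you train on top.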

Generation

In parallel we also worked on generating story titles. Using the same process with a different model, we tried to train it to generate tasks that sounded like the ones we usually create for our projects in Pivotal Tracker. What if we could create a backlog and estimate its complexity without lifting a finger? So much free time!
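The post doesn’t show the generation model itself. To illustrate the flavour of naive text generation — and why early results tend to be gibberish — here is a toy character-level Markov chain, which is a much simpler technique than the neural model the team actually trained; the training titles are invented examples:

```python
import random
from collections import defaultdict

def train_markov(titles, order=2):
    """Character-level Markov model: map each `order`-gram to the
    characters observed to follow it across the training titles."""
    model = defaultdict(list)
    for title in titles:
        padded = "^" * order + title + "$"  # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=60, rng=random):
    """Walk the chain from the start state until an end marker or max_len."""
    state = "^" * order
    out = []
    while len(out) < max_len:
        nxt = rng.choice(model[state])
        if nxt == "$":
            break
        out.append(nxt)
        state = state[1:] + nxt
    return "".join(out)

titles = [
    "Add login button to header",
    "Add logout button to header",
    "Fix login redirect bug",
]
model = train_markov(titles)
print(generate(model))
```

With only a two-character memory, the output stitches fragments of real titles together into almost-plausible nonsense — roughly the failure mode our early generated stories showed.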

Let’s just say we had mixed results:

Not very efficient…

A new hope

We decided to try a different approach. I had previously read about OpenAI's text generation model GPT-2, and decided to give it a shot. GPT-2 uses an extreme amount of memory during training, so in the end, to be able to train it within a reasonable time frame, we used Google Colab, where you can train these kinds of models on public GPUs. The initial tests were rather disappointing:

Better, but still not what we want
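The post doesn’t say which tooling was used around GPT-2. One common route on Colab at the time was Max Woolf's gpt-2-simple wrapper, so purely as a hedged sketch of what such a fine-tuning run could look like (the file name and hyperparameter values are illustrative, not taken from the post):

```python
# Hypothetical fine-tuning run via gpt-2-simple (pip install gpt-2-simple).
# Dataset file name and hyperparameters are illustrative assumptions.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # the smallest released GPT-2 model

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="story_titles.txt",  # assumed format: one title per line
              model_name="124M",
              steps=300,
              print_every=50,
              sample_every=100)

# Sample a handful of generated titles from the fine-tuned model
gpt2.generate(sess, length=40, temperature=0.8, nsamples=5)
```

On Colab the free GPU makes the fine-tuning step feasible in hours rather than days, which matches why we moved there.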

But after some more training we got something more interesting. Now it actually looks like it's trying, at least. I suppose that's something.

Getting somewhere!

Some outliers remain, for sure. The inner workings of machine learning remain a mystery. But after another 300 epochs (rounds) of training and some further tweaks to the generation script, we finally got something we were quite excited about:

Artificial Intelligence!

Conclusion

Machine learning is still hard. The singularity is not here yet. The underlying math of AI and machine learning is done by a lot of incredibly smart people (not me), but thanks to them, very useful platforms have been made available so that the rest of us can apply it in creative ways.