Sunday, 10 July 2016

On spikes and learning

Photo by Sebastiaan ter Burg
So, having a good roadmap with themes, it is now important to get the work delivered. Unless the developers have done something similar before, you need some way of discovering how to chunk up the work. What you call this doesn't really matter, but I have used the agile term "spike". According to a comment in the Agile dictionary, this is a rock climbing term: a spike is driven into the rock face to help support the climbers. Although it does not get us closer to the top, it allows us to climb faster and more safely. Likewise, a development spike doesn't produce the feature faster, but it provides a foundation to move forwards.

In a project kick-off meeting, I remarked on how successful the spike had been. A developer there joked that I should write a blog post about it, so here it is... "challenge accepted". I have been reflecting on what I feel made the spike successful.

This particular spike for "feature X" was smooth and successful for two main reasons. The first is that we knew the need we were addressing and the problems associated with it. The second is that we had clarity of vision and common understanding.

To break down why we knew the need and problems, we had:
  • Proactive user feedback and research to discover what might be missing.
  • Years of support and change requests for requirements driven by this need.
  • Years of supporting the current feature in an operations mode.
As a base, we knew what customers thought they wanted, what they had asked for, and the operational pitfalls we had to avoid. This fed into the clarity of vision. Starting with some basic lo-fi screens, we had a couple of meetings to explore our solution. This high-level design built a common understanding of the required outcome, which in turn fed a set of properly designed wire-frames. Now we had the journeys and data mapped. The next step was to document, as a team, the things we needed to learn.

The spike itself ran as expected and produced outputs for each of the scope questions. Although working software wasn't a requirement, some proof-of-concept code explored these questions. The learning progressed with input from product, the project team, and operations. This is something we all had a stake in getting right for long-term success.

That was for the current product, replacing a feature that already existed. What about newer features? Pretty much the same. Yet for "feature Y" we needed an extra spike to do some of the learning about the problem itself. To be successful, spikes need to be clear and focused.

The "feature Y" spike was not as smooth. We encountered problems with our understanding of a third-party API, and we also found a bug in it; perhaps that alone was a minor success! There wasn't as much clarity about what the eventual outcome would be, although this became the number one thing we learned from the spike. So for new or unfamiliar areas, a spike is a useful way of investigating the user need and problems. This makes sense to me: for new things we need to explore and learn more. Once you have this framed, it is possible to do another focused spike on the solution, as you would for a feature extending the current product. Depending on the nature of the problem, this may not be needed. You may be lucky and find that you already know enough about the solution.

I have found that in a lot of ways the least interesting property of spikes is the code. The language or tools used shouldn't matter. The key thing is "what did you learn about the problem?" or, if you have an answer to that already, "what did you learn about solving the problem?".
