Monday, 25 July 2016

MEETUP: "Product Management in Unusual Places" at ProductTank Brighton

Last week was an interesting edition of the ProductTank Brighton meetup hosted by Pure360. The topic was "Product Management in Unusual Places". Reading blogs and books, it is easy to think that Product Management falls into two distinct groups: "B2C" and "B2B". Yet there are some more interesting cases lurking out there.

Faye Wakefield from Comic Relief started with a talk on "When everything MUST be alright on the night, how do you test, collect feedback and iterate?". Her situation is unusual as the focus of the whole year is on supporting seven hours of television, during which most of the donations come in. The key goal is to collect donations, so the culture is risk averse when it comes to experimentation. It is an extreme environment for risk appetite.


What was similar here was a small user base interacting over a long period of time. This is not suitable for A/B tests, or other experiments, to discover how to improve the service. One way to overcome this is to use benchmarking and research from other similar organisations. Faye mentioned that she is lucky to have Children in Need, which has a similar model.

Glen Corbett was up next with "What does the PM for Rolls-Royce Wraith actually do?". In his case, he has a small user base so each unit matters. The users also have strong opinions and expectations of the product. A key skill for him seems to be stakeholder management and saying "No". The strong brand around Rolls-Royce helps here. His background for how he got into Product Management was familiar to me, as was the imposter syndrome he suffers from. One of the things I love about ProductTank is getting to hear the actual concerns and problems other people have, rather than the perception that everyone has it sorted. Then feeling that what you are doing isn't that bad!


I guess that small user bases are common for a large slice of B2B products. Again, it doesn't leave much room for large statistical analysis of experiments. The other factor is that it doesn't allow feature development costs to be spread across any volume of units, so there isn't much room for waste. Good user research and validation, then prioritisation, are important!

Janna Bastow finished the talks with "PMing for the unique B2PM community". Having recently signed up with ProdPad it was interesting to hear more about the onboarding process, particularly the thinking behind the activity-prompted email follow-ups. A key takeaway was that "you are not your customer". Even when you are a Product Manager making a product for Product Managers to Product Manage. (I also learnt that Janna says the word "Product" a lot during her day ;). This is a key assumption to avoid for any development team. I think the standard "As a ... I want ... so that ..." story format allows too many assumptions/projections here, so I prefer the "jobs to be done" format to surface them. Often the user's need differs from the bill payer's or even the system supplier's. Think about password policies or billing systems.

Now I'm looking forward to the Summer Social to have more of a chance to chat with everyone!

Sunday, 17 July 2016

On documentation and audiences

In this post, I'd like to make a short plea for better product documentation.

One of the entry points to the Cronofy API documentation has an explicit link "for Product Managers". This link takes you to their use cases page. Over the past few years I have looked at plenty of API documentation, from PDFs to the current trend of a GitHub repository and wiki. This link was striking in how unusual it was. But if your product is an API, then why should it be?

In the technology service industry should we go further? Should there also be a "for testers" link? Like the Cronofy Product Managers link, it could reuse existing information but highlight and target it for a specific audience. For example, take the developer sections about rate limits and validation, then add tips about integration testing against your API.
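To make that concrete, here is a minimal sketch of the kind of integration-testing tip such a "for testers" section could carry: probing how an API behaves once its rate limit is exceeded. Everything here is hypothetical, not from any real API's documentation: the five-requests-per-window limit, the `/v1/widgets` path, and the fake client that stands in for a real HTTP library so the example is self-contained.

```python
RATE_LIMIT = 5  # hypothetical: allowed requests per window


class FakeApiClient:
    """Stands in for a real HTTP client so the sketch runs on its own."""

    def __init__(self, limit):
        self.limit = limit
        self.calls = 0

    def get(self, path):
        self.calls += 1
        # A real API would typically return HTTP 429 ("Too Many Requests")
        # once the limit is exceeded; we simulate that here.
        return 429 if self.calls > self.limit else 200


def probe_rate_limit(client, attempts):
    """Call the API repeatedly and record the status codes returned."""
    return [client.get("/v1/widgets") for _ in range(attempts)]


statuses = probe_rate_limit(FakeApiClient(RATE_LIMIT), 7)
print(statuses)  # first five succeed, the remaining two are rejected
```

A documented tip like this tells a tester not just what the limit is, but what observable behaviour to assert on when they exceed it.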

Stripe has good developer documentation. Yet you have to scroll down a couple of pages of information before reaching the words "Not a developer?". One major plus point is that by then it has explained what the API does. This is often seen as a bonus; for example, the Launch Any blog lists it as number 11 in 10 Questions Your API Documentation Must Answer.

The examples of good API documentation over at Documentor are clear, but they lack a certain humanity. I believe that documentation deserves the same kind of tailoring as marketing literature: that we write with empathy for the audience. Good marketing copy does this. It builds a bridge. It relates the product to the reader and their concerns. If we are already using those skills to persuade and sell, shouldn't we use them to inform?






Sunday, 10 July 2016

On spikes and learning

Photo by Sebastiaan ter Burg
So, having a good roadmap with themes, it is now important to get the work delivered somehow. Unless the developers have done something similar enough before, you need some way of discovering how to chunk up the work. What you call this doesn't really matter, but I have used the agile term "spike". According to a comment in the Agile dictionary, this is a rock climbing term: a spike is driven into the rock face to help support the climbers. Although it does not get us closer to the top, it allows us to go faster and have a safer climb. Likewise, a development spike doesn't produce the feature faster, but it provides a foundation to move forwards.

In a project kick-off meeting, I remarked about how successful the spike had been. A developer there joked that I should write a blog about it, so here it is... "challenge accepted". I have been reflecting on what I feel made the spike successful.

This particular spike for "feature X" was smooth and successful for two main reasons. The first is that we knew the need we were addressing and the problems associated with it. The second is that we had clarity of vision and common understanding.

To break down why we knew the need and problems, we had
  • Proactive user feedback and research to discover what might be missing.
  • Years of support and change requests for requirements driven by this need.
  • Coupled with years of support for the current feature in an operations mode.
As a base, we knew what customers thought they wanted, what they had asked for, and the operational things that we had to avoid. This fed into the clarity of vision. Starting with some basic lo-fi screens, we had a couple of meetings to explore our solution. This high-level design built a common understanding of the required outcome, which fed into a set of properly designed wire-frames. Now we had the journeys and data mapped. The next step was to document our understanding, within the team, of the things we needed to learn.

The spike itself ran as expected and produced outputs for each of the scope questions. Although working software wasn't a requirement, some proof of concept code explored these questions. The learning progressed with input from product, the project team, and operations. This is something that we all had a stake in getting right for long term success.

That was for the current product, replacing a feature that already existed. What about newer features? Pretty much the same. Yet for "feature Y" we needed an extra spike to do some of the learning about the problem. To be successful, spikes need to be clear and focused.

The "feature Y" spike was not as smooth. We encountered problems with our understanding of a third party API, and we also found a bug in it. Perhaps that alone was a minor success! There wasn't as much clarity about what the eventual outcome would be, although this became the number one thing that we learned from the spike. So for new or unfamiliar areas a spike is a useful way of investigating the user need and problems. This makes sense to me: for new things we need to explore and learn more. Once you have this framed, it is possible to do another focused spike on the solution, like you would for a feature extending the current product. Depending on the nature of the problem this may not be needed. You may be lucky and find that you already know enough about the solution.

I have found that in a lot of ways the least interesting property of spikes is the code. The language or tools used shouldn't matter. The key thing is "what did you learn about the problem?" or if you have an answer to that already "what did you learn about solving the problem?".

Sunday, 3 July 2016

On roadmaps and themes

The wrong kind of roadmap...
This post was inspired by a chance conversation with a developer from another Brighton-based software product firm. This occurred during The Lean Event. The conversation started during an audience participation section of Jared Spool's talk. I told him about trying to organise around themes, and in exchange he told me about the lack of connection without that. This pleased me as it meant I was on the right track, but also reminded me that not everyone has it sorted (even if you think they do from the outside). Unknown to me at the time, I was sat at the table with Roman Pichler! (more on him later)

In the past three years there have been three big influences on the way that I look at roadmaps, and software development in general. They are (in chronological order):



  1. Gojko Adzic introduced me to Impact Mapping at one of the first Product Owner Survival Camps. First we learned about the importance of goals, then about being able to measure the impact of the changes you make to meet them.
  2. On a more academic note, I then took the Open University course Managing Technological Innovation. During this course I read the paper "Technology roadmapping—A planning framework for evolution and revolution" by Phaal et al. (2004). In this paper they discuss various roadmapping approaches, and then describe how they work across different industries and organisation sizes. This was useful for looking at aligning technology and business disciplines.
  3. Finally, on a project to refresh our product's user interface, I needed to prioritise and produce a roadmap for hundreds of features. This is when I found the template created by Roman Pichler. It was useful for communicating what problem we were solving and why.

Once you have all these elements then it is easier to create a narrative. First telling a story about the problems people have. Then how your product will evolve over time to solve those problems. 

Of course that is great, but you also need to gain trust from your stakeholders: the management, the team, and the customers. You do that by delivering. After that you need to deliver again. And again. To start with, don't worry about having too little in your releases, as long as it's what you said you were going to focus on and it works. Even if it is basic functionality. Remember that you can improve it with user feedback, and it helps avoid analysis paralysis.

The themes are useful here for providing some kind of commitment to tackling a problem, without having to be certain on the exact features or time scales. They allow some flexibility in the face of uncertainty about what you need to do, and about what you might discover as you start user research before delivering.

I have found that this momentum and flow is the most important thing to get right. Without it you can have the most perfect ideas and crafted user stories backed up with data, but it won't matter. Roadmaps only mean something if the items on them get delivered.

Think. Build. Revise. Repeat. 

Further Reading

Sources

Phaal, R., Farrukh, C.J. and Probert, D.R., 2004. Technology roadmapping—a planning framework for evolution and revolution. Technological forecasting and social change, 71(1), pp.5-26.


