The 4 questions every Product Manager should keep asking themselves (part 2 of 2)
Keeping track of the seemingly infinite number of product management frameworks can feel daunting, and it can sometimes be helpful to dive back into why you’re using them in the first place.
In this situation I often find myself going back to basics to ensure I’m not missing anything, and the following questions help me to do this:
Understanding the problem space:
What problem are we trying to solve? (And why?)
What assumptions are we making? Which are the riskiest?
Prioritising and making decisions:
What is MOST important to us when prioritising / making decisions?
How do we know whether we’re making good or bad decisions?
In my previous article I expanded on questions 1 & 2 (understanding the problem space) in more detail, sharing the tools and tips which I have found useful in answering (or trying to answer) these questions. If you haven’t read that yet I recommend starting there.
In this article I’ll dive into questions 3 & 4, which focus on prioritising & making decisions.
3. What is MOST important to us when prioritising / making decisions?
“If everything is important, then nothing is” - Patrick Lencioni
Whether it is a big strategic challenge or a day-to-day product development question, there are any number of factors we could consider when making a decision about our product. If we don’t take time to consider what is most important, we risk a situation where we deem everything to be important and therefore are unable to prioritise effectively.
Obvious factors to consider when making a decision include what is important to our users, what is important to our business, and so on. But depending on our situation or our strategy, some of these factors will matter more than others, so it is often more useful to think about a factor’s importance relative to another factor.
As an example:
“Speed to market is more important for us than engineering for scalability because we are in a very competitive market and our strategy is to capture market share early.”
This doesn’t mean that engineering for scalability isn’t important - just that, in a situation where we might need to make a trade-off between these two factors, we should prioritise (or at least favour) speed to market.
The “because” is particularly important: the reasoning often goes unspoken, and making it explicit helps the team understand why we are not prioritising something (in this case scalability) which they legitimately view as important!
Even/over statements are an alternative way of describing relative importance. They are used to help frame a prioritisation call between two good things (e.g. scalability and speed to market in the above example). They are also employed in the Agile Manifesto, for example “Responding to change over following a plan”. This doesn’t mean planning is bad, just that responsiveness to change is more important. You can create these statements with your team by brainstorming all the factors you might use to make prioritisation decisions and then fitting them into blank ‘even/over’ statements.
Once you have agreed your relative priorities, it’s important to refer to these when making prioritisation calls or other decisions (design, architecture etc.) by explaining things like:
“We have focused on delivering Feature X in our next release because it was vital to bring it to market immediately. This means we won’t be delivering Feature Y yet; it is important, but a lower priority.”
4. How do we know whether we’re making good or bad decisions?
We make product decisions based on the best information we have at the time. However, these are all assumptions until the feature is actually out in the world being used. Therefore, we must create and track success metrics to understand whether we have achieved our outcome.
Ideally, we’d always run rigorous statistical experiments with control groups to test features in the wild. If you have the resources to do that, then great - do it!
If, however, you’re not in that situation (for example you’re building an internal tool with a much smaller user base, or you haven’t yet persuaded your organisation to invest in the right tooling), don’t give up! Something is better than nothing, and less rigorous methods can still help you assess your decision-making. A point from John Cutler hit home for me recently: I think a lot of us spend a lot of time trying to come up with the ‘perfect’ metric, rather than just getting going and measuring something.
Below I’ve outlined some tips from my experience building internal tools with small user groups; they could be applied to any situation where running statistically significant controlled experiments isn’t possible.
Start with a hypothesis
The key, as with all experiments, is still to start with a hypothesis, and to define what success looks like before you release. This reduces the risk of people (your stakeholders, your team, even you) speculating wildly after the fact about why a metric has moved. Try using this structure to frame your hypothesis:
We believe that [releasing X feature] will result in [impact to metric or outcome] because [assumption]. We will know we have been successful if [the metric] increases by [Y]% or more.
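To make the success criterion concrete, the final sentence of the template can be expressed as a simple check. Here’s a minimal sketch in Python; the metric, the baseline and observed values, and the 10% threshold are all hypothetical:

```python
# A minimal sketch of turning the hypothesis template into a testable check.
# All names and numbers here are illustrative, not from a real product.

def hypothesis_met(baseline: float, observed: float, min_uplift_pct: float) -> bool:
    """Return True if the observed metric beat the baseline by at least
    min_uplift_pct percent (the 'Y%' in the hypothesis template)."""
    uplift_pct = (observed - baseline) / baseline * 100
    return uplift_pct >= min_uplift_pct

# "We will know we have been successful if weekly report downloads
# increase by 10% or more."
baseline_downloads = 240.0  # average weekly downloads before release (illustrative)
observed_downloads = 271.0  # average weekly downloads after release (illustrative)

print(hypothesis_met(baseline_downloads, observed_downloads, min_uplift_pct=10.0))
# True - an uplift of roughly 12.9%
```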
Establish a baseline
If you’re using analytics data, make sure you have a good sense of the behaviour of your metric before you release the new feature. What is the average value? How much does it vary under normal circumstances? Is there a particular cycle (daily, weekly, monthly)? All of these will help you to minimise the risk that other factors are impacting your metric post-release.
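For a daily analytics metric, a few lines of pandas can answer those questions. A minimal sketch, assuming your analytics tool can export a CSV with a date column and a single metric column (the file and column names are hypothetical):

```python
# A minimal sketch of establishing a baseline before a release.
# Assumes a CSV export with 'date' and 'metric' columns (hypothetical names).
import pandas as pd

df = pd.read_csv("daily_metric.csv", parse_dates=["date"])

# What is the average value, and how much does it vary under normal circumstances?
print("mean:", df["metric"].mean())
print("standard deviation:", df["metric"].std())

# Is there a weekly cycle? Compare the average value for each day of the week.
weekly_pattern = df.groupby(df["date"].dt.day_name())["metric"].mean()
print(weekly_pattern)
```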
Make sure you’re tracking a leading indicator
This ensures you can see any metric impact immediately (or very soon after release), and reduces the risk that any movement in the metric was caused by other factors. See this article by Tim Herbig for a good overview of leading & lagging indicators.
Choose your timing wisely
Where possible, avoid going live with the test at a time when the metric you are tracking is likely to be impacted by other factors (e.g. seasonal variations such as Black Friday or Christmas).
Think outside the box
If you can’t get your data from analytics, or if you want to back it up with other evidence, think about alternative ways to validate your hypothesis. This could be a pre- and post-release survey for your users. I’ve conducted shadowing sessions with users before and after releasing a new feature to literally time how long a certain action took them.
Reflect on your results, and potential biases
Based on the results you saw, and what you know of other factors which might have influenced them, reflect on your hypothesis and whether or not the data you’ve collected backs up your decision-making.
Summary
I hope that, next time you’re feeling overwhelmed by the number of tools and frameworks out there, these four questions and the tools and tips I’ve shared from my own experience help you to make sense of things.
I’d love to hear from you about your experiences of making and measuring your product decisions in challenging environments, and the tools and frameworks you use to help!