The 4 questions every Product Manager should keep asking themselves (part 1 of 2)
There are a million and one frameworks out there for Product Managers (PMs), and while many of them are incredibly useful, with so many to choose from it can be easy to lose sight of what we’re actually here to do.
When this happens I often find myself going back to basics to make sure I’m not missing anything, and the following questions help me do that:
Understanding the problem space:
What problem are we trying to solve? (And why?)
What assumptions are we making? Which are the riskiest?
Prioritising and making decisions:
What is most important to us when prioritising / making decisions?
How do we know whether we’re making good or bad decisions?
All these questions might sound simple, but trying to answer them fully often uncovers the complexity and challenges we encounter every day as PMs. I’m not suggesting that as PMs we should always have the absolute answer to all of these questions, but we should definitely be asking them and understanding how close (or far) we are from an answer. I often find this points me to where I need to look next.
Across two posts I’ll expand on my experience of answering (or trying to answer) these questions and share some of the activities and frameworks which I have found useful in each situation.
In this first article I’ll cover understanding the problem space (questions 1 and 2).
1. What problem are we trying to solve? (And why?)
This may seem like a very obvious question and, depending on the maturity of your product, it might actually be. However, I have experienced many situations where people thought they understood this and were aligned, but digging a little deeper into why we were solving the problem started to uncover gaps, questions and assumptions which hadn’t been considered.
Alternatively you might already know that you’re misaligned with some stakeholders on this question. Often a CEO or another senior stakeholder will have a ‘pet project’ or feature that they’re desperate to build (“all our competitors have X!”, or one particularly vocal user has demanded ‘X’), but you know, or at least sense, that it isn’t the most valuable thing to build.
Or your team might have been given a list of features to build, without the context of what problem they’re solving. If you had that context, maybe you would suggest building something different, or approaching it in a different way.
Are you all aligned?
A very simple test of whether you have good alignment on this question is to write down your answer in a single sentence or a couple of bullet points, and see if your team and stakeholders agree. A more robust way of doing this is as a group activity where everyone writes an answer separately and then shares it back. This way you don’t bias each other and you can start to identify where you’re misaligned, or even spot things you might have missed! I find the prompts below [figure 1] very useful:
You’re all aligned - great! What next?
If you’re all aligned - that’s great - but it’s worth a very quick check as to why you’re aligned, as there are two potential reasons:
You have a very strong leader who has instilled this vision amongst the team - in this case it might be worth thinking about what evidence you have that your problem statements and impacts are true (if in doubt, move on to question 2 - see below!)
You have conducted sufficient user research to build an evidence base that supports your problem statement - in which case you can get on with solving the problem and move to question 2!
We don’t agree! What do we do?
If you’re still misaligned, or don’t feel you have sufficient evidence to support your problem statement, this might be because:
You don’t understand the customer and their needs well enough yet. If this is the case I highly recommend some further discovery and research - speaking to users and mapping their experience & pain points. See Teresa Torres’ ‘Continuous Discovery Habits’ book for the best practical advice I’ve seen on this.
Or it might be that you don’t have enough evidence for the impact that solving this will have on your business - if so move on to question 2!
2. What assumptions are we making? Which are the riskiest?
Often we, or our stakeholders, can get hung up on an idea and forget to question something fundamental about it. We trust our gut feeling, but don’t always take the time to really examine it and assess whether we have the right evidence to back it up.
In the example from question 1 of the vocal stakeholder demanding ‘X’ feature, have we taken the time to understand if this is a problem a lot of our users face, or are we being swayed by the noisy few?
So how do we identify our biggest and riskiest assumptions?
In product management we generally separate assumptions into three categories:
Desirability (or value) - does this solve a real problem for our users? Is the market big enough?
Feasibility - can we build this? Do we have the right resources, technology, skills etc.?
Viability - is this something we can make work for our business? For example, can we do it profitably? Can we support it?
Step 1: Identify your assumptions (what assumptions are we making?)
You can do this simply by brainstorming assumptions using the desirability, feasibility and viability prompt questions above to ensure you’re covering all angles. It is critical to do this as a cross-functional team to ensure you capture assumptions across these three areas, and also because people from different backgrounds and disciplines will have different perspectives on what assumptions are being made - they might call you out on something you’d taken for granted!
If you have a business model canvas or similar, you can also use that as a starter and add your assumptions on stickies around that. Alternatively if you ran the exercise from question 1 (figure 1) you could use that as a starting point.
Step 2: Prioritise your assumptions (which are the riskiest?)
It’s likely that you’ll have uncovered a lot of assumptions - so the next step is to prioritise them by importance, and also by how much evidence you have for or against them [see figure 2]. I always think of this as “how bad would it be if we were wrong about this?”, and “how likely is it that we’re wrong about this?”. Be sure that the answer to this second question is evidence-based - not just your gut feeling. Going forward you should focus your efforts on the assumptions in the top right quadrant of the grid, where it’s bad if you’re wrong, and there’s a good chance you might be wrong!
We can also prioritise between desirability, feasibility and viability assumptions. Generally it’s a good idea to start with desirability: if users don’t want something, we probably shouldn’t be building it, so there’s not much point worrying about whether we can build it or make it viable for our business.
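To make this prioritisation a little more concrete, here’s a minimal sketch in Python. It’s purely illustrative - the Assumption structure, the 1-5 scoring scale and the threshold values are my own for the sake of the example, not part of any formal framework. It simply records each assumption with an importance score (“how bad would it be if we were wrong?”) and an evidence score (“how confident are we that we’re right?”), then pulls out the ones sitting in the ‘important but unproven’ quadrant:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    category: str    # "desirability", "feasibility" or "viability"
    importance: int  # 1-5: how bad would it be if we were wrong about this?
    evidence: int    # 1-5: how much evidence do we have that we're right?

def riskiest(assumptions, importance_threshold=4, evidence_threshold=2):
    """Return the 'top right quadrant': important assumptions with weak evidence,
    ordered with the most important / least evidenced first."""
    risky = [a for a in assumptions
             if a.importance >= importance_threshold and a.evidence <= evidence_threshold]
    return sorted(risky, key=lambda a: (-a.importance, a.evidence))

# Made-up assumptions, purely for illustration
backlog = [
    Assumption("Users struggle to reconcile invoices manually", "desirability", 5, 1),
    Assumption("We can integrate with the bank's API in one sprint", "feasibility", 3, 2),
    Assumption("Customers will pay a per-seat premium for this", "viability", 5, 2),
]

for a in riskiest(backlog):
    print(f"[{a.category}] {a.statement} (importance={a.importance}, evidence={a.evidence})")
```

In practice a spreadsheet or a wall of stickies does exactly the same job - the point is simply that “riskiest” is a function of both how important an assumption is and how much evidence you hold for it, rather than gut feeling alone.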
Step 3: Validate your assumptions (how can we test the riskiest assumptions?)
Essentially you want to gather sufficient evidence to move your riskiest assumptions from the top right quadrant in figure 2 (we know this is an important assumption, but we’re not confident enough that it’s true) to the top left quadrant (we have good enough evidence that the assumption holds, or good enough evidence that we’ve disproved it). The important word here is “enough” - you don’t have to prove or disprove something 100%, just reach the point where it is no longer your riskiest assumption. Once you’ve achieved that, move on to your next riskiest assumption.
Gathering evidence can take a number of forms - for example a technical spike or proof of concept, investigating analytics or other data, further user research or testing. For some value-based assumptions you may want to consider running experiments to really test user intent - as we know that what users tell us may differ from how they actually behave! Strategyzer’s ‘Testing Business Ideas’ book is a great reference point for value-testing experiments.
Summary
The above questions and frameworks have consistently helped me to frame the problem space I’m working in, and to highlight and test the riskiest assumptions within it. In the next article, I dive into questions 3 and 4 - which focus on prioritisation and decision making.
I’d love to hear from you about different tools and frameworks you use for answering these questions, or alternative questions you ask yourself as a PM.