The RAVE Model: prioritising for return and strategy
This updated version of RICE will help you make decisions now, without compromising your long-term approach.
Prioritising ideas is one of the hardest and most important tasks in product management. However you work, your aim is to enable a focus on the desirable, viable and feasible, while scanning multiple horizons. Those working in more traditional settings have to do this while swimming in HiPPO-infested waters, traversing legacy systems, and navigating annual budgets and target cycles: it’s tough out there.
Product people love frameworks, so a number of well-known approaches have been developed over the years to support prioritisation. While these vary in complexity, all broadly aim to:
Support the objective assessment of competing ideas through rankings or groupings
Communicate these outputs in a digestible way
Create alignment within the product team and with stakeholders.
During my career, I’ve tried and applied a range of these frameworks. They’ve helped me shift the thinking of stakeholders, radiate a sense of order and confidence, and check my own biases. Like most people, I’ve found that they work best as lenses for evaluation, rather than as laws to follow without question. Yet, in every case, I’ve found that these tools struggle to balance ‘quick-wins’ with longer-term bet-making in service of strategy and a product vision.
So, because existing prioritisation models struggle to drive strategic alignment, I’ve created my own to help.
On Impact v Effort and RICE
Before I explain my approach, let’s look at a couple of popular frameworks for context.
Impact v Effort
The Impact v Effort four-box grid is a great way to initiate sensible discussions. If you can align around shared objectives, then blind-alley pet-projects will fall by the wayside very quickly. Wonderful. But it’s also quite easy to run the model without actually clarifying these objectives, which hampers its ability to drive performance.
Equally, though Impact v Effort is great at weeding out Low Impact activities, it’s also designed to promote Low Effort work. True, we should go for quicker activities when all things are equal… but they so very rarely are. If we look towards the incremental too often, we will naturally have less time for more ambitious work.
For this reason, I may utilise Impact v Effort for collaborative, early-stage sense-checking, but no more.
RICE
Though less elegant to visualise, the RICE Model is more substantive than Impact v Effort:
Reach x Impact x Confidence / Effort = RICE Score
I’ve leant on RICE a lot. But when working in a complex, revenue-focused area, I began to see its flaws.
For example, I understand Reach to mean the number of users, or potential users, who will experience a feature, modification or other new thing, with Impact being the degree of behavioural change triggered by it. But the formula over-emphasises audience volume relative to the amount of change per head. It’s therefore geared towards engagement-style outputs and is less suitable when other outcomes are desirable, or when some segments are of greater value than others. Given that some users are more equal than others, this isn’t a great way to drive real performance.
Let’s run two quick tests to demonstrate:
- Magazine subscriptions
Here, we’re optimising the funnel performance for a subscription magazine with free and paid tiers. Free users outweigh paying customers 10:1. Our lagging metric is Customer Lifetime Value. Without referencing our actual business metrics, we could wind up endlessly tweaking the Free experience over the Paid space, as follows:
Trivial change for large ‘Free’ segment: (R = 100,000) x (I = 2) x (C = 1) / (E = 10) = RICE Score of 20,000
vs
Significant change for small ‘Paid’ segment: (R = 10,000) x (I = 8) x (C = 1) / (E = 10) = RICE Score of 8,000
- Service demand
This time, we’re trying to shift service demand from phone to digital for the broker partners of an established insurance company. The SaaS platform is built on a creaking legacy system and the aim is to switch to a microservices architecture. Should we look for low-hanging improvements, or take the harder path towards more substantive improvements for the user?
In-year incremental improvement: (R = 20,000) x (I = 2) x (C = 1) / (E = 10) = RICE Score of 4,000
vs
Multi-year architectural improvement: (R = 20,000) x (I = 8) x (C = 0.8) / (E = 40) = RICE Score of 3,200
Going for a longer-term, bigger bet with slightly higher risk but a much greater upside would not pass here. Not only does the RICE Model ignore the fact that a microservices architecture would enable faster improvements going forward, it also underweights the Impact on users.
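To make the comparison concrete, here’s a minimal sketch of the RICE calculation in Python, reproducing the four scores above (the function name and layout are mine, purely for illustration):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score: Reach x Impact x Confidence / Effort."""
    return reach * impact * confidence / effort

# Magazine subscriptions
print(rice_score(100_000, 2, 1.0, 10))  # Trivial change, 'Free' segment     -> 20000.0
print(rice_score(10_000, 8, 1.0, 10))   # Significant change, 'Paid' segment ->  8000.0

# Service demand
print(rice_score(20_000, 2, 1.0, 10))   # In-year incremental improvement    ->  4000.0
print(rice_score(20_000, 8, 0.8, 40))   # Multi-year architectural change    ->  3200.0
```

In both cases, the reach-heavy or lower-effort option comes out on top.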
Introducing the RAVE Model
As product managers, we have to make, guide and advise on hard choices. We need to balance short-term imperatives with the long-term vision. Strategy can connect these horizons, and our prioritisation methods should reflect this.
My response to this problem is a modified version of RICE. I call it the RAVE Model.
Return x Alignment to strategy x Validation / Effort = RAVE Score
Why Return?
We’ve seen how Reach is over-indexed within RICE, which can lead to poor decision making. So, I’ve combined it with Impact in the single compound metric of Return. The denominations of Return can vary, depending on the ‘game’ you’re playing. For example:
Return =
Transactional: ‘Estimated Incremental Revenue + Estimated Incremental Savings’
Productivity: ‘User hours saved + Call centre hours saved’
Engagement: ‘Estimated Incremental Hours on Site’
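As a rough sketch (the helper names below are illustrative, not a prescribed API), each ‘game’ simply defines Return in its own units:

```python
# Illustrative only: each 'game' expresses Return in its own denomination.
def transactional_return(incremental_revenue: float, incremental_savings: float) -> float:
    return incremental_revenue + incremental_savings

def productivity_return(user_hours_saved: float, call_centre_hours_saved: float) -> float:
    return user_hours_saved + call_centre_hours_saved

def engagement_return(incremental_hours_on_site: float) -> float:
    return incremental_hours_on_site
```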
Alignment to strategy
By explicitly adding strategy, we can review where ideas drive towards or distract from the product vision and/or the next target state.
Rather than utilise a basic 0-1 scale, I purposefully over-weight those items which truly accelerate us towards the product vision. Because items that are not aligned to our strategy take away focus and energy, they receive a score that’s less than 1.
3 = massive alignment
2 = high alignment
1 = medium alignment
0.5 = low alignment
0.25 = minimal alignment
Tip: Avoid committing to any item with an Alignment score below 1. Such decisions are sometimes required for regulatory purposes or to build momentum, but they should not be taken lightly.
Validation
The 0-1 approach used in RICE holds true here, but I’ve shifted from Confidence to Validation.
Where Confidence implies emotion and a lack of rigour, Validation is firm and demands objectivity. As such, the bar is higher and ‘100%’ is less likely to be hit, which drives enquiry and experiment design.
Tip: Don’t take any item with a Validation score below 0.7 beyond the discovery phase - especially if the Effort level is high. Instead, do the work: create smaller, experimental items specifically designed to prove or disprove the idea, or look for an incremental deliverable.
Effort
In the RICE Model, Effort often denotes the expected person-hours required for each idea. However, it’s unlikely that you can be that specific, so I suggest switching up to days or months. You should also use the Fibonacci Scale (1, 2, 3, 5, 8, 13 etc.) when estimating here, for three reasons:
It tacitly recognises that estimation is speculative
It focuses on relative size
It’s widely used in agile planning: using familiar techniques supports team buy-in and increases speed.
Putting it all together
So, in summary:
Return x Alignment to Strategy x Validation / Effort = RAVE Score
Return (denominations of your choice)
Alignment to strategy (0.25-3)
Validation (0-1)
Effort (Fibonacci: 1, 2, 3, 5, 8, 13, 21, 34, 55)
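As a minimal sketch, assuming the scales above (the input checks are my own additions, not part of the model), the whole thing fits in one small function:

```python
# Allowed Alignment to Strategy weightings, per the scale above.
ALIGNMENT_WEIGHTS = {"massive": 3, "high": 2, "medium": 1, "low": 0.5, "minimal": 0.25}

def rave_score(return_value: float, alignment: float, validation: float, effort: float) -> float:
    """RAVE score: Return x Alignment to Strategy x Validation / Effort.

    return_value: in whatever denomination suits your 'game' (revenue, hours saved, etc.)
    alignment:    one of 0.25, 0.5, 1, 2 or 3
    validation:   0-1, where 1 means fully validated by evidence
    effort:       relative size estimate (Fibonacci values recommended)
    """
    if alignment not in ALIGNMENT_WEIGHTS.values():
        raise ValueError("Alignment must be 0.25, 0.5, 1, 2 or 3")
    if not 0 <= validation <= 1:
        raise ValueError("Validation is scored between 0 and 1")
    if effort <= 0:
        raise ValueError("Effort must be a positive estimate")
    return return_value * alignment * validation / effort
```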
Now, let’s return to our example ideas.
- Magazine subscriptions
I’m going to assume that each paid subscriber is worth £20 per year and that they stay with us for an average of 3 years. We think that the changes to the Free experience will convert an additional 500 users to the Paid tier, and that the changes for the Paid tier will extend the average lifespan to 4 years. Neither route will bring significant cost savings, and I have assumed for simplicity that both options are equally aligned to strategy. To keep the arithmetic simple, Return is expressed in incremental subscriber-years; the £20 annual value scales both options equally, so it doesn’t change the ranking.
‘Free’: (R = 500 x 3 years) x (A = 1) x (V = 1) / (E = 10) = RAVE Score of 150
vs
‘Paid’: (R = 10,000 x 1 year) x (A = 1) x (V = 1) / (E = 10) = RAVE Score of 1,000
With the RAVE Model in place, we can see that focusing on the smaller but higher-value Paid segment will drive the bottom line.
- Service demand
For the insurance company, we’ll combine our inputs for Reach and Impact into the Return element, and introduce the Alignment metric.
Incremental improvement: (R = 20,000 x 2) x (A = 0.5) x (V = 1) / (E = 10) = RAVE Score of 2,000
vs
Architectural change: (R = 20,000 x 8) x (A = 2) x (V = 0.8) / (E = 40) = RAVE Score of 6,400
Now, the strategic bet with the larger potential upside is the winner. True, the time/ cost to reach this point is greater, but the long-term gains will be significant and compound. Validation levels can be tightened en route.
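Running the worked examples through the same sketch (using the rave_score function from earlier; Return for the magazine case is in incremental subscriber-years, and for the insurer it is Reach x Impact, as above) reproduces the scores:

```python
# Magazine subscriptions
print(rave_score(500 * 3, 1, 1.0, 10))       # 'Free'  ->  150.0
print(rave_score(10_000 * 1, 1, 1.0, 10))    # 'Paid'  -> 1000.0

# Service demand
print(rave_score(20_000 * 2, 0.5, 1.0, 10))  # Incremental improvement -> 2000.0
print(rave_score(20_000 * 8, 2, 0.8, 40))    # Architectural change    -> 6400.0
```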
Get started
The RAVE Model requires more consideration of the metrics that truly matter than RICE or Impact v Effort do. It also has the following benefits:
Focusing on Return enables flexibility while driving towards key product metrics
Introducing Alignment to Strategy keeps the product vision in sight and enables discussion about near vs longer-term horizons
Switching from Confidence to Validation promotes discovery and evidence-based decision making
Adopting the Fibonacci scale for Effort speeds up scoring and tacitly recognises that all early stage estimates are speculative.
Using the RAVE Model within a traditional corporate organisation enabled me to make small but significant changes to how competing ideas were perceived, sequenced and funded. I believe its impact would be even greater in leaner environments, where strategy is less likely to be subsumed by departmental silos and annual budget cycles.
If you’re currently using RICE, or another prioritisation framework, why not run your ideas through the RAVE Model and see how things look?