DVF: The (Obvious) Answer To My Faith Crisis In Prioritisation & Other Traditional Frameworks


Last year, I attended a product meetup hosted by the wonderful Adrienne Tan and her incredible team at Brainmates. There, I shared that I felt like I was going through a "product midlife crisis" of sorts; that some of the traditional frameworks we'd propped up for so many years simply weren't responding to today's challenges as effectively as they used to.

Afterwards, a number of other product managers reached out, in direct conversation or via LinkedIn messages, to express similar feelings. Collectively, this group had become exhausted by the idea that there was "one right way" to get to the epiphany, and we tended to switch off to any rhetoric suggesting there was.

We didn't consider ourselves the Freedom Fighters of product management. We just weren't blindly leaning into every book and framework coming out of Silicon Valley the way we might've once upon a time.

Fast-forward a little, and here's where I'm at on the whole subject -

  • The right framework to use in discovery is the one that gets you to the epiphany quickly and without bias. It may be one that already exists, or a derivative, or something totally new authored by your team;
  • The right way of working in development is the one that works for your business and people, and that could look like anything; and
  • The right things to prioritise are the ones that are validated against desirability, viability and feasibility with positive returns.

Which brings me to the purpose of today's article: sharing a framework I created (I'd call it more of a derivative framework than something created from scratch), which we use to prioritise our work at the NRL, and which has gathered traction since I shared it recently in a forum.

It's based on a principle we tend to cover off early in "product management school". I call it The DVF Matrix. In the spirit of the above, I would recommend it only where it works for your people and the pace of your operating business.

(But I think it can scale broadly for both)


Whenever someone new joins my team, we always cover off DVF in our orientation. DVF is something that I have absolutely not soured on. If anything, my observation across the product community is that DVF is something we've possibly drifted too far away from in the rush to find the newest frameworks and tooling that aid our work. In my own teams (dating back to my two previous roles), I'd become worried that we didn't leverage DVF principles early or often enough.

The only way to change that, I believed, was to bake it into parts of our everyday ways of working, hopefully enough so that my team considered DVF to be "home base" whenever they were presented with feature ideas or indeed came up with one themselves.

We needed to reset our product culture around DVF, and one of the obvious process candidates to help re-establish that was a recurring conversation that happened every other day: prioritisation.


The DVF Matrix is a tool that measures the merit of a feature based on pre-determined desirability, viability and feasibility bands. These bands are formed in collaboration with other key stakeholders in the business (Technology, Design, Marketing etc), making them co-owners of the prioritisation from the outset. Combined, these bands form an algorithmic view of what we call a DVF Score, which is the sum of the desirability and viability minus the cost of the effort. Or as my high school teacher would've written it on the blackboard:

(D+V) - F = DVF Score

Whilst not an exact science, this matrix provides a great starting point for prioritisation discussions and compels product managers to know their data.

The bands should be tailored to the needs and size of the business. Each top pillar (D, V and F) is broken out into two sub-metrics in which the bands take effect. The purpose of the sub-metrics is to answer the question: "how do we actually measure desirability, viability and feasibility?"

Below are the sub-metrics used in the matrix.

Top Pillar: Desirability (Do customers want it?)

Sub Metrics:

  • Customer Value: How much of an impact will this make on the customer experience?
  • Reach Value: How many users will actually see benefit from this feature? (in collaboration with Analytics)

Customer value is one of two sub-metrics in the matrix that is subject to the gut feel of the team. I want to be clear about something at this point: the intent of the matrix is not to erase experience and instinct from the prioritisation process.

I'm a data-led product person, but my general view on data's place in the process is this: quantitative data will tell me what people are doing, qualitative data will tell me why they are doing it, and experience and instinct will help me understand the starting points of the discovery that flows out of both. The matrix isn't designed to fully remove instinct, but to balance it against hard and fast data. When the two are aligned, we are off!

Reach value refers to the number of users who will actually see benefit from the feature to be built. Ever been in a meeting in which an idea was pitched that you knew would be seen by only a tiny minority? This sub-metric will help curb that. Occasionally there will be times when we need to build for a minority, but that should be the exception rather than the rule.
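To make the idea of bands concrete, below is a minimal sketch of how a Reach Value band could be expressed in code, assuming Analytics can give you a projected reach figure. The cut-offs and the 1-5 scale are invented for illustration; they are not our actual bands.

# Illustrative only: mapping a raw measure (projected reach as a % of the
# user base) to a 1-5 band score. These cut-offs are invented for the example.
REACH_BANDS = [
    (5, 1),    # reaches under 5% of users -> score 1
    (20, 2),   # 5-20% of users            -> score 2
    (40, 3),   # 20-40% of users           -> score 3
    (70, 4),   # 40-70% of users           -> score 4
    (100, 5),  # 70%+ of users             -> score 5
]

def band_score(projected_reach_pct, bands=REACH_BANDS):
    # Walk the bands from the bottom up and return the first one we fit in.
    for upper_bound, score in bands:
        if projected_reach_pct <= upper_bound:
            return score
    return bands[-1][1]

print(band_score(12))  # -> 2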

Top Pillar: Viability (Will the business support it?)

Sub Metrics:

  • Commercial Value: How much revenue will this feature be accountable for? (in collaboration with Commercial/Partnerships)
  • Strategic Value: Are there non-monetary business objectives that this feature will contribute to? (in collaboration with Marketing)

Commercial value is relatively straightforward. Here we are referring to the amount of revenue (sponsorship, advertising, contra) that is on the line for the feature. Bands here should be adjusted to the size and goals of the business.

But not all value is monetary. "Increasing our email capture" or "stabilising and scaling our tech stack" are both good examples of things that aren't directly related to the customer or revenue, but they indirectly influence them. That's where strategic value comes into play. It is the second metric in which the team should invite its experience and instincts into the conversation.

Top Pillar: Feasibility (Can we build it?)

Sub Metrics:

  • Design Effort: How long will it take us to design this feature? (owned by UX and UI)
  • Development Effort: How long will it take us to build this feature? (owned by Technology)

Design effort and development effort scores are high-level estimates against the high-level feature pitch we present to those teams. My personal view and mode of operation here: if you are taking fully fleshed-out stories to your teams to estimate against, instead of an Opportunity Canvas (which we use in lieu of the traditional Lean Product Canvas), then you may have engaged your stakeholders too late.

I also have a strict rule when it comes to these two sub-metrics: Product are not invited to score the effort sub-metrics. This portion of the matrix is a no-fly zone for product!

Few things are more grating for a designer and developer to hear than how long a product person thinks it should take to design and build a feature. If you've ever scrolled through the comments section of a product post or tweet, you'll quickly learn that as a group we are amongst the most insufferable know-alls in the creative discipline.

The effort portion of the matrix is owned by Design and Technology. We need to trust our stakeholders' own assessment of the work to be done, and we should have a culture of backing in their instincts. That's not to say we can't ask questions about anything that isn't clear to us in their scoring, but if we are operating as a cross-functional unit the way we should be, we won't need to.

So really, that algorithmic view looks more like this when broken out:

(Customer Value + Reach Value + Commercial Value + Strategic Value) - (Design Effort + Development Effort) = DVF Score
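For those who prefer to read that as code, here's a minimal sketch of the calculation. The 1-5 band scale and the example scores are assumptions for illustration only:

def dvf_score(customer, reach, commercial, strategic, design, development):
    # Desirability = Customer Value + Reach Value
    # Viability    = Commercial Value + Strategic Value
    # Feasibility  = Design Effort + Development Effort (the cost of the bet)
    desirability = customer + reach
    viability = commercial + strategic
    feasibility = design + development
    return (desirability + viability) - feasibility

# A hypothetical feature scored on 1-5 bands:
print(dvf_score(customer=4, reach=3, commercial=2, strategic=4,
                design=2, development=3))  # (7 + 6) - 5 = 8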
If you were to think about feature building in terms of bets (shoutouts to Annie Duke), the DVF Matrix is ultimately designed to help teams understand the size of the wins (value) against the cost of our bet (effort) using metrics that are best-placed to measure that risk/reward.

A couple of nuances with the matrix:

  1. You may well work in a revenue-hungry business. In that instance, it would be perfectly acceptable to tweak the algorithm so that different sub-metrics have different weightings. At the NRL, for example, we weight Commercial Value at 1.5x the Reach Value because it's really important to us (there's a short sketch of this after the list below). One of the positives of the tool is that you can shift the weighting around according to the needs and goals of your business. Harness its flexibility!
  2. At the NRL, we use the DVF Matrix as more of a conversational starting point than as the Rosetta Stone of prioritisation. Our product managers each put in their own scores (in collaboration with those stakeholders) and we refer to the matrix as a group in an offline conversation prior to our fortnightly sprint planning session. The DVF Matrix is best viewed and used as a compass rather than a set run sheet of delivery. There have been times when we have moved the order around based on deadlines and resources.
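On the first nuance, here is what that weighting could look like against the same sketch from earlier. Only the 1.5x on Commercial Value comes from our own setup; the rest of the weights (and the scores) are assumptions for the example.

# Illustrative weighting: Commercial Value at 1.5x, everything else at 1x.
WEIGHTS = {"customer": 1.0, "reach": 1.0, "commercial": 1.5,
           "strategic": 1.0, "design": 1.0, "development": 1.0}

def weighted_dvf_score(scores, weights=WEIGHTS):
    # Value side: the four desirability and viability sub-metrics.
    value = sum(scores[k] * weights[k]
                for k in ("customer", "reach", "commercial", "strategic"))
    # Effort side: the two feasibility sub-metrics.
    effort = sum(scores[k] * weights[k] for k in ("design", "development"))
    return value - effort

print(weighted_dvf_score({"customer": 4, "reach": 3, "commercial": 2,
                          "strategic": 4, "design": 2, "development": 3}))
# (4 + 3 + 2*1.5 + 4) - (2 + 3) = 14 - 5 = 9.0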


Looping back to my first comment: I don't encourage everyone to use this tool exactly as I've spelled it out (although if it works for you, great!). The right prioritisation tool is the one that gets you to the epiphany in the fastest, clearest and most stakeholder-aligned way. That could be the DVF Matrix or something completely different.

As the DVF Matrix was for me, it's entirely possible that the right way for your team is buried in your own head.
