Practicing outcome-focused development
Outcome-focused development is a product prioritisation and development practice that focuses on desired and achieved outcomes rather than outputs. Output-focused development prioritises feature development, story points delivered and other metrics, with little consideration for what those metrics deliver in value for users or impact for the organisation.
Over the last few years the industry has recognised that outcomes, rather than outputs, should be the focus; however, many organisations are still struggling to make the shift.
Over my career as a Product Manager I've experienced the challenges of adopting outcome-focused practices. Recently, however, I've been experimenting with implementing the concept using a multi-layered approach, which has been fairly successful. I wanted to share these learnings in case they help other Product Managers on their own career journeys.
Outcome-focused prioritisation and roadmapping
Impact mapping, or reverse impact mapping, helps us think about the value of an idea or suggestion in terms of what kind of impact it could have and how that might lead to a desirable outcome. Items on a product roadmap should be linked to impacts and outcomes; otherwise they should not be prioritised.
However, roadmaps are often presented as a list of feature deliverables, and the desired outcome may live in the head of a PM somewhere instead of being explicitly stated. This isn't ideal, as the desired outcome can be lost in communication at either the team or the leadership level.
If it is lost at the team level, team members may not factor in criteria critical to the success of the outcome, and the deliverable may end up less impactful. When all team members understand the purpose and success criteria of a deliverable, there is a much better chance of the solution achieving the desired impact.
When leadership is unaware of the link between a deliverable and a desired outcome, it leaves space for guesswork and assumptions and can erode trust. Leaders who don't see the prioritisation logic behind a roadmap are unable to understand how an idea they have just come up with compares to existing priorities. Making the connection between deliverables and outcomes explicit reassures leadership that work is being prioritised according to business and customer goals; in my experience, they become more trusting of the team's work and more understanding when their idea doesn't immediately jump to the top of the roadmap.
When I created the first roadmap for my team's Service NSW initiative, I had the flexibility to create it in any format I thought appropriate. Having recently read the above article on reverse impact mapping, I decided to build the roadmap around the fields 'Impact goal', 'Customer persona', 'Customer outcome', 'Business outcome' and 'Deliverable'. This is similar to the article's recommendations, and to the impact mapping templates available in Miro, but calls out customer and business outcomes as two related but separate concepts.
An example of how this came to life is shown below.
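The wording in this sketch is illustrative, reconstructed from the product directory example described later in this article (the impact goal in particular is my paraphrase rather than the exact text we used):

Impact goal: Digital Service practitioners can quickly find who owns a product and who to contact about it
Customer persona: Digital Service practitioner
Customer outcome: Can identify the team responsible for a product and the right person on that team to reach out to
Business outcome: Less time and frustration lost chasing product ownership information
Deliverable: A product directory listing the products built within the Digital Services group and the teams that own them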
Each deliverable was linked to its context, so anyone looking at the roadmap could clearly see and understand what outcomes it should lead to. I also ordered and colour-coded items on the roadmap into Now, Next and Later bands, so it was easy to see what was in progress or coming soon versus longer-term thinking.
Hypothesis-driven development
At Service NSW we're lucky enough to have a Product Management coach, Sue Bolton (ex-Netflix), who coaches our practice groups on a variety of PM and Agile skills. Shortly after joining the organisation I attended one of her workshop sessions on hypothesis-driven development. It was a great practical session aimed at making our people comfortable writing hypothesis statements, and it was the trigger I needed to take my outcome focus to the next level and frame roadmap items as hypothesis statements.
A hypothesis approach frames the deliverable as an experiment. This changes the mindset of everyone on the team: it becomes safe to propose things to try, pick the one we believe has the best chance of delivering the outcome, implement it, and monitor the results to learn whether it succeeded or failed. Acknowledging failure as a possibility shifted our team's thinking towards solutions that would allow us to validate or invalidate the deliverable as quickly as possible, and it created a culture where failing fast was safe and framed as a learning opportunity to develop a better hypothesis next time.
The statement format we used is as follows:
We believe that [user persona / behavioural archetype] have a problem [problem description, job-to-be-done]. We can help them by [solution description]. We'll know that we're right if we see [research outcome, product success metric, etc.]. We will test with [testing method] for [period of time].
Using the roadmap item example above, this looked like:
We believe that Digital Service practitioners have a problem finding which team is responsible for which product and knowing who on the team to reach out to about the product. This causes delays and frustrations on multiple levels. We can help them by creating a product directory which lists all the products built within the Digital Services group, and establishes an ownership relationship between the products and the teams. We'll know that we're right if we see positive feedback during testing and at launch, as well as users returning to the solution multiple times throughout the month. We will test with visit and unique visitor metrics to calculate returning user rates for three months.
To embed this hypothesis-driven approach into our team practices, I configured our Jira issue template to populate the above templated statement into each new Jira ticket that was created. This meant we didn't have to rely on memory or manually write out all the details of the statement in each ticket; we just had to fill in the gaps to complete the statement. It reminded us to use the hypothesis statement when framing a piece of work, and helped us capture the desired outcome for each of our deliverables in a consistent format.
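As a rough illustration of the fill-in-the-gaps idea, here is a minimal sketch in Python. It is not our actual Jira configuration (we set the template up directly in Jira); the function and field names are my own, shown only to make the template structure concrete:

```python
# Sketch: completing the hypothesis statement template from its parts.
# Illustrative only; in practice the blank template was pre-populated
# into every new Jira ticket and we filled in the gaps by hand.

HYPOTHESIS_TEMPLATE = (
    "We believe that {persona} have a problem {problem}. "
    "We can help them by {solution}. "
    "We'll know that we're right if we see {signal}. "
    "We will test with {method} for {duration}."
)

def hypothesis_statement(persona, problem, solution, signal, method, duration):
    """Return a completed hypothesis statement from its parts."""
    return HYPOTHESIS_TEMPLATE.format(
        persona=persona,
        problem=problem,
        solution=solution,
        signal=signal,
        method=method,
        duration=duration,
    )

if __name__ == "__main__":
    print(hypothesis_statement(
        persona="Digital Service practitioners",
        problem="finding which team is responsible for which product",
        solution="creating a product directory",
        signal="positive feedback and users returning multiple times a month",
        method="visit and unique visitor metrics",
        duration="three months",
    ))
```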
Outcome tracking
One of the challenges with outcome-focused development is that once work has been implemented and released it falls off the radar, and it's easy to forget the 'we will test with… for…' part of the hypothesis statement. So I wanted to find a way to embed outcome tracking consistently into our scrum practices.
I considered a few approaches but decided to trial using a new status for our Jira tickets. When work was released the ticket moved from ‘In progress’ to the new ‘Impact tracking’ status where it remained on the board to be monitored. This meant the ticket could be counted as ‘done’ but also continue to be visible after the current sprint had closed and new sprints were opened. Once the monitoring had been carried out and the result of the experiment had been recorded, we could then move the ticket to ‘Closed’.
Because our original hypothesis statement clearly stated how we'd test and how we'd know if we were right, we had very clear criteria to measure against. These metrics usually centred on usage data we already had available; occasionally, however, we needed to track an additional data point, or create a new formula or chart from our existing data, to identify what we needed.
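To show the kind of simple formula this involved, here is a sketch of how a returning-user rate could be derived from raw visit events. The data and variable names are made up for illustration; our real numbers came from our existing analytics tooling:

```python
# Sketch: calculating a monthly returning-user rate from raw visit events.
# All data below is illustrative, not real usage figures.
from collections import defaultdict
from datetime import date

# Each visit event: (user_id, visit_date)
visits = [
    ("u1", date(2023, 5, 2)),
    ("u1", date(2023, 5, 9)),
    ("u2", date(2023, 5, 3)),
    ("u3", date(2023, 5, 15)),
    ("u3", date(2023, 5, 20)),
    ("u3", date(2023, 5, 28)),
]

# Count visits per user for the month.
visits_per_user = defaultdict(int)
for user_id, _visit_date in visits:
    visits_per_user[user_id] += 1

unique_visitors = len(visits_per_user)  # users who visited at least once
returning_users = sum(1 for n in visits_per_user.values() if n > 1)  # visited more than once

returning_rate = returning_users / unique_visitors if unique_visitors else 0.0
print(f"{returning_users}/{unique_visitors} returning users ({returning_rate:.0%})")
# -> 2/3 returning users (67%)
```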
Once we had captured the outcome of the hypothesis, we could determine the success or failure of the work and decide whether we were ready to move on to our next goal, insert additional deliverables into future sprints to further deliver on the outcome we were aiming for, or pivot to a different solution.
To help me quickly produce a summary of hypothesis outcomes, I created some templated slides that summarise the hypothesis, relevant evidence of success or failure, and notes regarding results or next steps. This template allows me to painlessly create a visual report which can be shared with our sponsors and stakeholders to show the outcomes of our team's work. Regularly sharing these means we can show progress towards our overall goals and build trust in both our day-to-day practices and the results of our work.
Outcomes vs Impacts
Outcomes and impacts are often used interchangeably, and until recently I hadn't put much thought into the difference. After the experimentation above, I've begun to make a distinction. I now use 'Outcome' to describe the product-level result of our work, e.g. after releasing feature x we saw y users interacting with the feature and returning z times within a week or month, which indicates whether or not users were getting value from the feature.
I use 'Impact' to describe the business value of the work, e.g. in our user interviews we saw that, before the feature was released, it was taking users x amount of time to find the right team contact, which caused frustration and discontent. Based on this qualitative insight and our quantitative usage metrics, we calculated that we have saved the organisation y hours with this feature and reduced user frustration by z.
Using our team's hours as the cost, and this business impact as the result, we've been able to calculate a simple return on investment (ROI) to show the profitability of our work (more on that in a later article).
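For illustration, the shape of that calculation is roughly as follows. Every number here is made up to show the arithmetic, not our actual figures:

```python
# Sketch: a simple ROI calculation using team hours as the cost and
# estimated hours saved across the organisation as the benefit.
# All numbers below are illustrative, not real figures.

team_hours_spent = 300          # effort to build and run the feature
hourly_rate = 100.0             # blended cost per hour, applied to both sides

lookups_per_month = 400         # how often people need to find a product owner
minutes_saved_per_lookup = 20   # time saved per lookup, from user interviews
months_measured = 3

cost = team_hours_spent * hourly_rate
hours_saved = lookups_per_month * months_measured * minutes_saved_per_lookup / 60
benefit = hours_saved * hourly_rate

roi = (benefit - cost) / cost
print(f"Hours saved: {hours_saved:.0f}, ROI: {roi:.0%}")
# -> Hours saved: 400, ROI: 33%
```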
Outcomes of our outcome-focused development experimentation
We took an experimental approach to implementing the concepts of outcome-focused development and found that the multi-layered approach worked well.
The operational reinforcement of these practices also helped us apply the concepts successfully and consistently, without much effort spent remembering to do something special or different.
With repeated practice, we developed the skills until they became second nature.
This article contains my own views and does not represent Service NSW. It is part of a series focusing on the creation of an internal platform that promotes awareness and usage of repeatable patterns and re-usable components built by the Digital Service division of Service NSW and offered internally and across agencies to support the creation of digitised government services for the people and businesses of NSW.