Measuring outcomes with data
Image: Data illustrations by Storyset

As Product Managers working in the tech community, it's hard not to look around at all the amazing organisations doing great things, compare them with your own internal practices, and feel like yours aren't mature enough, particularly when it comes to data-informed decision making. This came up recently during a mentoring session, where the mentee felt they were disadvantaged in their job search because of their organisation's lack of maturity in working with data.

Spoiler alert: pretty much every PM I've ever come across feels a little self-conscious about their data practices and thinks they should be doing more to reach a higher level of maturity.

Data-informed decision making comes in lots of forms and can include any data that is used to make any decision. I've seen it done using first-party data (collected by your own company) and third-party data (collected by someone else), qualitative data and quantitative data. Some examples of these kinds of data sources include:

  • industry, market or user segment research,
  • customer surveys,
  • tracking customer interest in feature ideas,
  • reporting on in-product behaviour tracking,
  • sales and revenue statistics,
  • and so much more.

Data can be used to inform decision making for initiatives at all stages of the product lifecycle, including: validating and prioritising opportunities to pursue, prioritising solutions based on expected reach, outcomes and impacts, measuring the success of work, and identifying friction points where opportunities exist to improve an experience.

Sometimes we feel that we can't implement data practices due to organisational limitations, approvals, budgets and so on. These challenges are real, but there are often low-cost, low-effort things we do have the power to implement at a team level which can get us started.

Like anything, reaching maturity takes time. You get there with practice and repetition, building on both successes and failures, and learning as you go. The most important thing is to start taking these steps, even if they are small ones. So look at what's within your control and what's within your ability to influence, build allies if needed, and have a go.

Once started we can learn, adjust and build on our progress with more advanced data uses. As we successfully implement change at a team level, we can share these learnings with others, inspire them to take similar actions, and slowly drive change across the entire organisation.

Today I thought I’d share some of my own experiences in using usage metrics to track product outcomes, measure success and make decisions. I particularly wanted to highlight how these practices evolved over the first year of a new product's life.

Measuring what matters

In Practicing outcome focused development I shared an example of using a hypothesis statement to define a desired outcome, including defining methodology to test if an outcome had been realised.

This approach helps identify the metrics that matter for each piece of work being done. When defining how we measure something, it's important to consider the behaviour change a feature is trying to drive, and where it fits into the wider objective.

In my recent work at Service NSW my team initially launched an internal product which helped our teams identify the owner of a problem space and connect with them quickly. This supported an organisational change away from communication being routed through senior leaders, and facilitated more team-to-team communication. This early work addressed just some of the friction points in the internal journey of getting gov-tech products created to benefit the people of NSW.

At such an early stage of our work, and with an internal audience, we decided to choose a low/no-code solution to create our first product experience, test the waters and confirm the work was worth further investment. This tech choice let us build and release the first version quickly, but it came with the trade-off of limited usage metrics.

While we would of course have been interested in seeing how teams were interacting with various parts of the product, our main focus was on confirming that users were getting value from this first release, and that it helped them in their day-to-day work. After assessing the metrics we did have available, we identified that we could calculate returning users, and decided to use this as the main measure of success.

Users do not come back to products which they are not getting value from, so even though the data couldn’t tell us exactly what value our users were getting, or from what features, the returning users data point did tell us they were getting value. We accepted this limitation and, since our user base was internal, decided that we’d have enough access to them to use other qualitative methods to identify the rest of the insights we’d need.

Getting your hands on data

As mentioned above, in-built data was limited because of the tech we'd chosen. We were also unable to plug in additional 'proper' data tools, as our licensing and organisational structure would have required significant approval processes to enable those kinds of plug-ins. And we weren't able to access the data we did have via an automated feed without engineering effort.

What we did have was an in-built page with unique visitors and visits at the site level as well as for pages. We were able to see popular pages, time on site, and usage by device type. We were able to see this as live data at the point in time the page was being viewed, and we were able to access a download of 90 days worth of data.

As the PM for the team, it was my responsibility to ensure the success of our work was measured, and without an engineer or data specialist at the time it was also my responsibility to define a process for doing so. I'm a big believer in pragmatism in all things, so I worked with what we were able to access at first. This 'at first' state involved a calendar reminder to capture metrics on a weekly and monthly basis. Each Monday morning I logged into the internal product, downloaded the data, and added it to a tracking spreadsheet. I set up simple formulas to compare total visits against unique visitors and see the returning user rates on both a week-on-week and month-on-month basis.
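For anyone who would rather do the same comparison in code than in a spreadsheet, here is a minimal sketch of the calculation in Python with pandas. The column names, figures and the exact ratio used are illustrative assumptions for this article, not our actual export or formulas.

```python
import pandas as pd

# Hypothetical weekly export: one row per weekly download from the
# site's built-in analytics page (columns are assumed for illustration).
weekly = pd.DataFrame(
    {
        "week_start": pd.to_datetime(["2023-02-06", "2023-02-13", "2023-02-20"]),
        "total_visits": [420, 380, 455],
        "unique_visitors": [300, 250, 280],
    }
)

# Visits above the number of unique visitors imply repeat sessions,
# so one simple proxy for returning engagement is the share of repeat visits.
weekly["repeat_visits"] = weekly["total_visits"] - weekly["unique_visitors"]
weekly["returning_rate"] = weekly["repeat_visits"] / weekly["total_visits"]

# Week-on-week change in the returning rate, mirroring the spreadsheet formulas.
weekly["wow_change"] = weekly["returning_rate"].diff()

print(weekly[["week_start", "returning_rate", "wow_change"]])
```

The same idea extends to a month-on-month view by resampling the rows to calendar months before calculating the ratio.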

Was this manual approach ideal? No, of course not. But it was OK in the short term and did the job of allowing us to measure success. It helped us learn what our reporting requirements were, and I knew that once we were a bit further into the work and had additional staff with additional skills join the team, we'd be able to find a way to automate the process.

Turning metrics into useful insights

Data is a collection of points that individually don't tell us much. A finding might show trends across multiple data points about what is happening, but not why. An insight adds the context and provides the why. It might even tell you what to do next, e.g. you've achieved your goal, so set a higher one or move on to something else; you're on the right track but haven't yet met your goal, so do more; or that last change had a negative impact, so pivot. Data without insights is not much use in a product world.

"Data refers to unanalyzed user observations, findings capture patterns among data points, and insights are the actionable opportunities based on research and business goals." Neilson Norman Group https://www.nngroup.com/articles/data-findings-insights-differences/

As we tracked the returning user rate of our internal product, we were able to see past the initial burst of traffic the product received when we first released it, take the response to our internal promotional comms, and see how that translated into real, ongoing engagement.

Yes, we got a surprising amount of traffic given our divisional size, but perhaps people were just curious to see what the product was all about, never to return again? Yes, we received some lovely comments, but since our users were internal, perhaps they were just being kind and supportive? Would these positive early signs translate into real value for our users and the organisation? Had we actually delivered on our hypothesis and met the needs of our internal users? Could we deliver on additional hypotheses and increase the impact we were having, leading to real organisational change?

By tracking returning users we were able to benchmark what normal looked like for us, week-on-week and month-on-month, for the first year post launch. We were able to verify that a significant portion of our division was returning regularly to the site, and with qualitative methods we were able to identify what tasks the product was helping them with, and how this was saving them time and reducing frustration, risk and duplication. These insights became extremely valuable as we began calculating return on investment (ROI) for our work - more to come on that in my next article.

As we released additional features we were able to see that our users returned even more frequently to our product, and as we promoted the product to other teams who we collaborated closely with, we were able to break down silos and bring transparency to our work.

One hypothesis we developed was around users' navigation behaviour. Based on user interviews we had identified that some users would start at A to find B, and other users would start with B to find A. We thought the split would be roughly 50/50, but it ended up being closer to 30/70. This helped us see which product areas were of more interest, invest in the right areas, and identify where there was room for improvement in areas like information architecture. We still needed to validate the insights we thought we were seeing in the data, but combining qualitative and quantitative methods created stronger insights.
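If you have the raw counts, a quick sanity check of a split like this takes only a few lines of Python. This is a hedged sketch rather than what we actually ran; the counts below are invented for illustration, since all I've shared here is the rough 30/70 ratio.

```python
from scipy.stats import binomtest

# Hypothetical session counts for the two navigation starting points.
started_at_a = 90   # sessions starting at A to find B (invented figure)
started_at_b = 210  # sessions starting at B to find A (invented figure)

total = started_at_a + started_at_b

# Test the observed share against the hypothesised 50/50 split.
result = binomtest(started_at_a, n=total, p=0.5)

print(f"Observed share starting at A: {started_at_a / total:.0%}")
print(f"p-value against a 50/50 split: {result.pvalue:.4f}")
```

A very small p-value would suggest the 30/70 pattern isn't just noise, which is the kind of signal that still needs qualitative follow-up before acting on it.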

Evolution to reach maturity

The constraints of having only a few data points available, and needing to access them manually on a regular basis, forced me to look at the data closely, regularly and creatively. Without those constraints it might have been easy to get lost in a lot of competing data points, or get distracted by vanity metrics, rather than focusing on the most important metric for us: returning users, weekly and monthly.

As we introduced new pages we started looking at page level metrics, tracking them for individual performance and categorising them into clusters of pages so that we could compare the performance of similar page types.
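As a sketch of what that clustering comparison might look like in pandas, again with invented page names, cluster labels and numbers rather than our real taxonomy or data:

```python
import pandas as pd

# Hypothetical page-level export: one row per page for a given week.
pages = pd.DataFrame(
    {
        "page": ["pattern-a", "pattern-b", "component-x", "component-y"],
        "cluster": ["patterns", "patterns", "components", "components"],
        "unique_visitors": [120, 80, 200, 150],
        "total_visits": [180, 110, 320, 240],
    }
)

# Roll pages up into clusters so similar page types can be compared side by side.
by_cluster = pages.groupby("cluster")[["unique_visitors", "total_visits"]].sum()
by_cluster["returning_rate"] = (
    (by_cluster["total_visits"] - by_cluster["unique_visitors"])
    / by_cluster["total_visits"]
)

print(by_cluster)
```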

I developed a one-page template for capturing and sharing usage metrics on a weekly basis with our stakeholder group, and used this as an opportunity to share the outcomes of new features introduced. This helped me stay connected with them, work transparently, show impact, and build trust as we released approximately 20 times in the first year.

Nearly 12 months later, we are finally investing in automated reporting. We are going into this work confident that we're tracking the right metrics and that we have a process for using the data that works. We are confident it's worth our time to invest in automation, and that we're again approaching the work pragmatically, using tools which make sense for our effort-versus-impact equation, reducing wastage and allowing for future rework if our needs evolve.

Learnings

A little bit of data used well is better than a lot without a clear purpose. Like product development itself, it's beneficial to apply lean, agile thinking: start with the problem and identify the smallest possible solution to get started.

The aim of using data is to help us make decisions, so we needed to focus on what would help us measure our work, then learn, iterate and take steps towards our ideal state. Data should help us see whether something is good or bad, identify and/or diagnose issues, decide whether action should be taken, and determine whether that action had an impact.

Things to consider include ease of access to the data, how frequently it needs to be updated or refreshed, and the size and spread of the audience it will be shared with.


This article contains my own views and does not represent Service NSW. It is part of a series focusing on the creation of an internal platform promoting awareness and usage of repeatable patterns and re-usable components built by the Digital Service division of Service NSW and offered internally and across agency to support the creation of digitised government services for the people and businesses of NSW.
