Are you measuring right? 4 reasons you’re probably not.
Salvatore Bocchetti
CPO & Partner @ Reasonable Product | Boosting Product Pricing, Monetization & Product Leadership
The majority of the measures we see in #productmanagement are flawed, and the better they look, the worse they (usually) are. Here is why, and 4 easy checks to fix them.
Whether setting up OKRs, monitoring your “product health” or just running experiments, there are countless reasons why, as a product professional, you should measure what you do and set goals. And you should do this right!
Being able to rely on metrics (meaning: having meaningful and well-defined metrics) is your best ally to ensure your product is successful; but most importantly, it is a key element to understand when success is not materializing, and a mandatory tool to make (fast) fact-based decisions.
This article is NOT a guide to the reasons why measuring your product performance, experiments, or “key results” is important. Hopefully, if you made it here, it’s already clear why metrics are the cornerstone of a successful product organization that relies on facts rather than opinions. However, having a set of “whatever” metrics in place is by no means a guarantee per se, and in the next paragraphs, we will look at some common pitfalls in dealing with measures and metrics.
A little spoiler: you should spend the majority of your time defining the proper measures, as much time as needed to collect solid data, and little time debating what the numbers you’ve seen mean. Note that I’m deliberately not giving exact numbers or ratios of time spent defining/collecting/analyzing. The reality is more complex than that: the exact values that work for your organization will evolve over time, but the overall idea is that solid anticipation of the “what” and “why” you’re measuring will set a healthy basis for a consistent decision-making process down the line. If you’re far from this setup, chances are that your metrics are flawed and that you’re building the rest of your strategy on sand. In the rest of this article, you will learn why.
Here are the 4 most common reasons why success metrics don’t work, and how to fix them:
1) The issue with relative changes:
This is probably the biggest blind spot I see over and over in organizations that are just starting to become “data-driven”: defining relative (change) metrics instead of looking at absolute numbers.
While in theory looking at relative improvements seems like a great idea (wow, we’re growing our revenue by 30% this year!), there are several reasons why you should look at absolute numbers instead. Of course, taking relative + absolute numbers together is a great way to look at “the whole story”. The main point I want to make here is that relatively expressed changes, taken alone, are at the root of many problems with your products.
First of all, what do I mean by “relative change metrics”? Note the word change: strictly speaking, a relative metric is a number dependent on other numbers. In this sense, a conversion rate is a relative measure (e.g. 5% of the users entering the funnel make it to the end and finalize the purchase); there is nothing wrong with it, and it is not our concern here. We’re instead looking at those relative numbers used to express a goal (or an expected result) as a change (increase, decrease…) in a metric: as in “Increase our traffic by 10%”, or “Reduce our churn rate by 20%”.
Let’s be clear, there is nothing wrong with increasing our traffic by 10% or reducing our churn rate by 20%. The problem is relying on a relative expression of your performance alone: without the absolute baseline, a percentage tells you nothing about the scale of the change, and a spectacular-looking lift can hide a tiny absolute impact.
All in all, while relative indications can tell a part of the story, I always strongly advise looking at absolute numbers (first), and ideally phrasing your metrics as a mix of both absolute and relative measures: e.g. “Increase our organic search traffic by 10%, from 100K to 110K unique users/month”. You can (and probably should) look at relative numbers, but you definitely cannot call it a day until you’re solid on the absolute numbers: baselines, additions, and targets.
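To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how the same relative lift can describe very different absolute realities:

```python
# A minimal sketch (hypothetical numbers): the same +30% relative lift
# means very different things depending on the absolute baseline.

def describe_change(name, baseline, target):
    absolute = target - baseline
    relative = absolute / baseline
    print(f"{name}: {baseline:,} -> {target:,} "
          f"(+{absolute:,} absolute, {relative:+.0%} relative)")

# Both "grew by 30%", but the absolute business impact is wildly different.
describe_change("Niche landing page", baseline=1_000, target=1_300)
describe_change("Main search funnel", baseline=100_000, target=130_000)
```

Read in isolation, both lines would report the same “+30%”; only the absolute numbers reveal which one actually moved the business.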
2) The issue with the number of metrics:
Let’s make it easy: less is more. There are tons of reasons why your business is complex, why you can’t summarize all that you’re doing in one number, and why you would need a full dashboard of numbers to understand your product. Yet, there are many more reasons why you should keep the number of metrics you (really) look at very limited: a short list forces prioritization, keeps every discussion anchored to the same few numbers, and leaves less room for cherry-picking whichever figure looks best.
Unfortunately, “simplicity” and reducing the number of metrics can often lead to another undesired effect: creating Frankenstein metrics and other fake metrics.
3) The issue with “getting the next best metric” or Frankenstein metrics (aka “fake metrics”)
Let’s say we made it to a limited number of metrics, ideally all measurable and expressed as absolute numbers. Wow, the worst is behind us! Still, you want to be attentive to a few categories of what I call “fake” metrics: metrics that give you a comforting sense of control, right up until the moment they don’t.
Frankenstein metrics: We’ve already seen how having too many metrics is not really helping. So, what do teams usually do in these cases? Well, a concept we all know too well in computer science is “compressing” information. In other words, if we still have the feeling that all this information is important, yet we want to reduce the number of metrics, many teams just cram the outputs of different metrics into one complex indicator. Is this helping? Well, NO!! For instance, one of my product teams once needed to look into a complex optimization problem, including some trade-offs. It went something like: “let’s improve X. But because we don’t know how to measure it, we will measure Y, W, and Z instead and say that our metric is (Y+W)*Z”.
I call these “Frankenstein metrics” because they’re nothing more than a complex, meaningless mix of heterogeneous measures.
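A minimal sketch (in Python, with made-up values for Y, W, and Z) of why such a composite cannot drive a decision: very different underlying realities can produce exactly the same score.

```python
# A minimal sketch (hypothetical values): a composite like (Y + W) * Z
# hides what actually happened, because opposite stories can yield
# the same number.

def frankenstein(y, w, z):
    return (y + w) * z

# Scenario A: engagement (y) collapsed, but a pricing tweak (w) masked it.
score_a = frankenstein(y=10, w=40, z=2)   # -> 100

# Scenario B: healthy engagement, weaker pricing, worse reach (z).
score_b = frankenstein(y=40, w=60, z=1)   # -> 100

print(score_a == score_b)  # True: same "metric", opposite stories.
```

When the composite moves (or doesn’t), you still have to tear it apart into Y, W, and Z to understand anything, which is exactly the work the metric was supposed to save you.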
Disconnected proxies: In other cases, we’re interested in behaviors of our product that are just too difficult to single out and measure. Here, I’ve seen many teams come up with “proxy metrics” that are indeed simple but… so far from the original problem and assumptions that they eventually carry no information about the impact we’re trying to make. I call these “disconnected proxies” because the metric we’re using is essentially disconnected from the original problem.
Diluted proxies: similarly, some metrics are difficult to measure directly, so we take the closest “higher metric” we can find. In other words, we measure something so broad that we cannot single out the real drivers and contributors of any change. We’re all familiar with reasoning that goes like this: “our goal is increasing revenue. So, let’s build this feature to do X, which shall contribute positively to revenue”. It probably will, and it’s probably the right thing to do. Nevertheless, will we really be able to directly correlate feature X with revenue? Or will its positive contribution be “diluted” in seasonal fluctuations? Are we going “too high” in the food chain by measuring revenue directly?
I call these “diluted proxies” because measuring the metric at too broad a scope simply dilutes the contribution that is actually under our control.
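Here is a minimal simulation sketch (in Python, with invented numbers) of the dilution effect: a feature that genuinely adds about 1% to revenue becomes invisible inside ordinary seasonal swings.

```python
# A minimal sketch (simulated, hypothetical numbers): a real +1% lift
# from a feature disappears inside normal seasonal noise when we
# measure "revenue" instead of a metric closer to the feature.

import random

random.seed(42)
baseline = 1_000_000          # monthly revenue before the feature
feature_lift = 0.01           # the feature truly adds 1%
seasonality = 0.08            # months routinely swing +/- 8%

for month in range(1, 7):
    noise = random.uniform(-seasonality, seasonality)
    revenue = baseline * (1 + feature_lift + noise)
    print(f"Month {month}: {revenue:,.0f} "
          f"({(revenue / baseline - 1):+.1%} vs baseline)")
# The +1% signal never stands out: the proxy sits too high in the food chain.
```

The cure is measuring one level closer to the feature (e.g. conversion within the flow the feature touches), where its contribution is not drowned out.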
4) Ownership & Rituals
Last but not least, now that you have a solid set of metrics defined, remember that this is not a “fire and forget” exercise. Every metric needs a clear owner, and a recurring ritual where the team reviews the number and decides what to do about it.
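As a minimal sketch (in Python; all names, values, and fields are hypothetical), this is one way to make ownership and rituals explicit in a metric’s definition, tying together the absolute baseline and target from earlier:

```python
# A minimal sketch (hypothetical fields and values): every metric carries
# an owner, an absolute baseline and target, and a review ritual.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    owner: str              # a single accountable person, not "the team"
    baseline: int           # absolute starting point
    target: int             # absolute target, not just a % change
    review_cadence: str     # the ritual where the number is discussed

organic_traffic = Metric(
    name="Organic search traffic (unique users/month)",
    owner="salva",
    baseline=100_000,
    target=110_000,
    review_cadence="weekly product review",
)
print(organic_traffic)
```

If a metric cannot be filled in like this, whether as code or on a one-line card, it is probably not ready to be used.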
Conclusions
By now it should be clear what I meant at the beginning by “spend more time defining the right metrics than reading them”. Simplicity doesn’t mean “simple to put in place”, but simple to use and understand. If you did a proper job in the definition and you didn’t come up with “fake metrics”, understanding what the numbers are indicating is straightforward. Conversely, if you spent little time in preparation, got metrics that seem ok but in reality are impossible to obtain, or are using metrics that are too far from what you’re actually moving, you will spend a lot of time debating what the metric is telling you. And you will be spending this time instead of deciding what comes next and iterating faster.
So, when the time comes to set up your metrics, remember a few key points:
- Look at absolute numbers first, and phrase goals as a mix of absolute and relative measures.
- Keep the number of metrics you (really) look at very limited.
- Avoid fake metrics: Frankenstein composites, disconnected proxies, and diluted proxies.
- Give every metric an owner and a ritual; this is not a fire-and-forget exercise.
I think somebody once said: “Metrics are like a joke. If you have to explain them, they’re not funny”. Or maybe they didn’t say metrics, but it doesn’t matter :-)
Before you go: I like to practice what I preach, and with this article, I am starting to ask my readers for a very precious contribution: their honest feedback! Could you spend 2 minutes of your time answering a couple of very simple questions here? https://forms.gle/WXQx7XCiHFjtsS3u7. Thanks!
Did you like this article? Find other stories, reports, and analyses about #productmanagement at https://salvabocchetti.com/articles
A big thanks to Patrick Hauert and Mattia Albergante for challenging many of my thoughts, for their attentive review of my drafts, and for the countless suggestions for this article.
About myself:
My name is Salva, and I help tech companies discover, shape, and sell better Products. I have been doing Product since before we called it that, with Digital and Cybersecurity as my preferred playgrounds.
My superpower is moving between ambiguity (as in creativity, innovation, opportunity, and ‘thinking out of the box’) and structure (as in ‘getting things done’ and achieving real impact).
I am firmly convinced that you can help others only if you have lived the same challenges: I have been lucky enough to practice product leadership in companies of different sizes and with different levels of product maturity. Doing product right is hard: I felt the pain myself and developed my own methods to get to efficient product teams that produce meaningful work.