Meta-analysis: Do I have to run my test yet again?
Rommil Santiago
Senior Director, Product Experimentation at Constant Contact | Founder of Experiment Nation
Originally sent to Experiment Nation's newsletter subscribers on April 9, 2022.
---
Recently, there’s been a bit of buzz within the experimentation community around the concept of meta-analysis. For those unfamiliar, in a nutshell, it’s about pooling similar experiments and analyzing them together to pull out insights. In fact, meta-analysis is often considered the most trustworthy type of analysis out there. That said, it’s still prone to pitfalls: Simpson’s Paradox (where a segment of an audience produces different results than the audience as a whole), selection bias in which experiments you pool together (e.g. convenience sampling, where you only pool studies that are easy to find), questionable statistical approaches (e.g. did the researcher p-hack?), and pooling studies that don’t actually answer the same hypothesis.
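To make the pooling idea concrete, here’s a minimal sketch of one common approach – fixed-effect (inverse-variance) meta-analysis – applied to two hypothetical country tests. The country names, lifts, and standard errors are made up for illustration, and a real analysis would also check heterogeneity (e.g. with a random-effects model) before trusting a single pooled number.

```python
import math

# Hypothetical per-country results: estimated lift and its standard error.
# These numbers are invented for illustration.
results = {
    "Italy":   {"lift": 0.042, "se": 0.015},
    "Germany": {"lift": 0.031, "se": 0.012},
}

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# 1 / variance, so more precise studies count for more.
weights = {c: 1 / r["se"] ** 2 for c, r in results.items()}
total_w = sum(weights.values())

pooled_lift = sum(weights[c] * results[c]["lift"] for c in results) / total_w
pooled_se = math.sqrt(1 / total_w)

print(f"Pooled lift: {pooled_lift:.4f} ± {1.96 * pooled_se:.4f} (95% CI)")
```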
I won’t go into the nitty-gritty of performing a meta-analysis, but I did want to address the question: if I ran a test in Italy, and again in Germany, do I have to run it yet again in Canada? There are generally two schools of thought:

1. Yes – every audience is different, so you should validate the change in each new market before rolling it out.
2. No – if the change has already won in comparable markets, you can generalize the result and skip the re-test.
We could go back and forth on which approach is right. Personally, I prefer to approach such problems from a practical perspective. There is a cost to running (or not running) a test. Either you spend the money to set up, launch, and analyze the test (and potentially incur an opportunity cost), or you don’t run the test and see what happens. Admittedly, there is a third option – launch the change and not look at the impact at all – but that is fairly irresponsible. This highlights the importance of reducing the cost of learning: if you can learn very cheaply, you have no reason not to run the test.
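One way to frame that trade-off is a back-of-the-envelope expected-value comparison. Everything in the sketch below – the test cost, the prior probability that the change is harmful, and the loss if it ships anyway – is an assumed number for illustration, not a figure from any real program.

```python
# A back-of-the-envelope framing of the "cost of learning".
# All inputs below are assumptions for illustration only.

test_cost = 5_000          # cost to set up, launch, and analyze the test ($)
p_harmful = 0.20           # prior probability the change actually hurts
loss_if_harmful = 50_000   # expected loss if a harmful change ships untested ($)

# Option A: run the test (pay the cost, catch a harmful change before rollout).
cost_run = test_cost

# Option B: ship without testing and absorb the downside risk.
cost_skip = p_harmful * loss_if_harmful

print(f"Expected cost of testing:  ${cost_run:,.0f}")
print(f"Expected cost of skipping: ${cost_skip:,.0f}")
print("Run the test" if cost_run < cost_skip else "Skip the test")
```

The cheaper learning gets (a smaller `test_cost`), the harder it becomes to justify skipping the test.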
It’s also important to have a way to measure whether the different audiences (or countries, in this case) are actually similar. We do this at Loblaw Digital: when we run experiments that involve our physical stores, we look at a number of metrics to confirm that the stores are indeed comparable. Loblaw Digital benefits from having this data on its stores and the areas they serve, but that isn’t always the case – you often don’t have access to these metrics for different audiences. So that’s another thing to consider.
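As a rough illustration of what such a similarity check could look like, the sketch below compares two hypothetical audiences on a couple of metrics using standardized mean differences. The metric names, values, and the 0.25 rule-of-thumb threshold are all assumptions for illustration – they are not Loblaw Digital’s actual store-matching criteria.

```python
import math
import statistics

# Hypothetical weekly metrics for two audiences (e.g. two store regions).
# The metric names and values are invented for illustration.
audience_a = {"avg_basket": [52, 48, 55, 50], "visits_per_wk": [3.1, 2.9, 3.3, 3.0]}
audience_b = {"avg_basket": [52, 47, 56, 50], "visits_per_wk": [2.4, 2.2, 2.6, 2.3]}

def standardized_diff(xs, ys):
    """Standardized mean difference: |mean gap| / pooled standard deviation.
    A common rule of thumb in the matching literature flags values
    above roughly 0.25 as meaningfully different."""
    pooled_sd = math.sqrt((statistics.stdev(xs) ** 2 + statistics.stdev(ys) ** 2) / 2)
    return abs(statistics.mean(xs) - statistics.mean(ys)) / pooled_sd

for metric in audience_a:
    d = standardized_diff(audience_a[metric], audience_b[metric])
    flag = "similar" if d < 0.25 else "DIFFERENT"
    print(f"{metric:>14}: SMD = {d:.2f} -> {flag}")
```

If any key metric comes back flagged as different, generalizing a win from one audience to the other gets riskier, and a re-test starts to look worthwhile.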
In the end, it comes down to how much exposure to risk you can tolerate.
Good luck and see you in 2 weeks!
---
If you liked this article, consider signing up for Experiment Nation's bi-weekly newsletter, where you will receive advice and thoughts about Experimentation, Interviews with Experimenters from around the world, access to video sessions, Memes, and more! Sign up here: https://buff.ly/31qO0AQ
---
Comments:

Chief Editor of GoodUI - Conversion Focused UI Designer (2 years ago):
Exploit vs. explore trade-off. The two aren't mutually exclusive – both experiment types can be balanced. And then there is the case for institutional memory, and remembering already-inferior experiment ideas. Active and ongoing meta-analysis is a more accurate form of remembering and weighing; without it, teams run the risk of repeating the same old mistakes (arguably inefficient). But yes, I agree that teams should also bake in space for highly exploratory experiments (a cultural/process thing). I have been thinking visually about this a little here: https://www.dhirubhai.net/posts/jlinowski_experimentation-prioritization-design-activity-6945793114762616832-FrxW?utm_source=linkedin_share&utm_medium=android_app

2 decades of digital transformation with data, digital product analytics & experimentation (2 years ago):
I've been doing meta-analysis of the programmes I have run for about 15 years, and it's highly valuable. We do it as part of quarterly programme retros and use it to realign on test themes (are we doing enough of X and Y? have we capitalised on these other wins elsewhere? etc.).