Why aren't your experiments working?
Last week we covered how to design better experiments.
This week, let's tackle something equally important: executing them without letting politics, bias, or wishful thinking corrupt your results.
Get buy-in before you begin
The quality of your experiment matters less than your stakeholders' belief in the results.
As humans, we're hard-wired to reject evidence that challenges our existing beliefs. For corporate innovators, this means your stakeholders will always find ways to dismiss test results they don't like - unless you've involved them from the start.
Here's how to get better at bringing them on-side:
A retail client of ours learned this the hard way. Their experiment showed clear evidence a new concept wouldn't work. But because they hadn't agreed with their sponsor on what "wouldn't work" meant before starting, the results kicked off months of debate instead of decisive action.
When we ran the next test, we started differently: before collecting any data, we agreed with the sponsor exactly what threshold would kill the concept.
When the data came in below the threshold (again!), the project was killed in one meeting. No debates, no politics.
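To make this concrete, here's a minimal sketch of what a pre-registered decision rule can look like. The threshold, the observed numbers, and the Wilson-interval check are all illustrative assumptions, not the client's actual figures:

```python
import math

def wilson_upper_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Upper bound of the Wilson score interval for a conversion rate."""
    if trials == 0:
        return 1.0
    p = successes / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (centre + margin) / (1 + z**2 / trials)

# Agreed with the sponsor BEFORE any data came in (hypothetical threshold):
KILL_THRESHOLD = 0.04  # kill the concept if conversion is credibly below 4%

signups, visitors = 61, 2500  # observed test data (illustrative numbers)
if wilson_upper_bound(signups, visitors) < KILL_THRESHOLD:
    print("Even the optimistic estimate misses the threshold: kill it.")
else:
    print("Threshold not conclusively missed: review before deciding.")
```

The statistics here are secondary. What matters is that the kill condition is written down and agreed before anyone sees the data, so there's nothing left to debate.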
Build the minimum viable test
The biggest mistake we see teams make when executing experiments is building more than they need to answer their question. It's not just wasteful - it actively damages your results.
Extra complexity adds variables, variables add noise, and noise makes it harder to spot real signals. Yet teams consistently over-build their tests.
Here's how to strip things right back and build the minimum viable test:
A fintech client of ours wanted to test if users would trust AI for investment advice. Their initial plan was to build a full robo-advisor platform at a cost of £400,000 over 6 months.
We stripped it right back:
It took two weeks, and before building anything, we learned that users wouldn't trust AI with their money.
Key questions for your test setup:
Make it real
You need to make your tests real enough to get genuine responses but controlled enough to get clean data.
Most teams get this backwards. They either make their test so "experimental" that users don't behave naturally or so polished that they can't isolate what's working.
Here's how you need to think about this:
We helped a healthcare company test a new patient monitoring service. But instead of building the tech, we:
Three weeks later, we knew exactly how patients would use the service - before writing a line of code.
Deployment checklist:
Watch and learn
The most valuable insights come from watching experiments unfold in real time. But you need to know what to watch for and how to adjust without invalidating your results.
The principle is simple: monitor enough to spot problems and opportunities, but not so much that you're tempted to interfere unnecessarily and disturb the test.
Here's what you need to do to monitor your experiments more effectively:
A media client was testing a new subscription model with us. During monitoring, we spotted something odd: massive variance in conversion rates at different times.
Here's what we saw:
We adjusted the test to explore these patterns without compromising the core experiment.
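If you're monitoring a similar test, a few lines of analysis are enough to surface this kind of variance. Here's a minimal sketch assuming a simple event log with a timestamp and a 0/1 conversion flag; the file and column names are illustrative, not the client's actual schema:

```python
import pandas as pd

# Load the raw event log (hypothetical file and columns).
events = pd.read_csv("subscription_events.csv", parse_dates=["timestamp"])
events["hour"] = events["timestamp"].dt.hour

# Conversion rate and sample size per hour of day: big swings in "rate"
# on a healthy "visitors" count are worth investigating.
by_hour = (
    events.groupby("hour")["converted"]
    .agg(visitors="count", rate="mean")
)
print(by_hour.sort_values("rate", ascending=False))
```

Small samples can masquerade as "massive variance", which is why the visitor count sits next to the rate.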
Here's what you need to look out for:
Capture what matters
The key thing to remember about data collection is that you can't go back and get what you didn't capture. But collect too much and you'll drown in noise. The goal isn't to collect everything; it's to collect the specific data that could prove you wrong.
Data collection framework:
An e-commerce client we worked with last year was testing a new checkout flow for a subset of users. But instead of just tracking conversion rates, we captured:
When conversions were lower than expected, we had everything needed to understand why.
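As a sketch of what this kind of capture can look like, here's a minimal step-level event logger for a checkout funnel. The step names and fields are illustrative assumptions, not the client's actual instrumentation:

```python
import json
import time
import uuid

def log_checkout_event(step: str, session_id: str, **details) -> None:
    """Append one structured event per checkout step, so drop-off points
    can be reconstructed later - not just the final conversion number."""
    event = {
        "event_id": str(uuid.uuid4()),
        "session_id": session_id,
        "step": step,  # e.g. "cart", "address", "payment", "confirm"
        "ts": time.time(),
        **details,
    }
    with open("checkout_events.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")

# One line per step means you can see exactly where a session stalled.
session = str(uuid.uuid4())
log_checkout_event("cart", session, items=3)
log_checkout_event("payment", session, method="card", error="card_declined")
```

When conversions dip, a log like this lets you replay each abandoned session step by step instead of guessing.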
This is what you need to do to avoid the same mistakes:
What this means for you
Good execution is about intentionality. Every choice in your experiment setup, deployment, and monitoring should tie back to answering your core question.
Before your next experiment:
Next week: How to analyse experiment results and turn them into decisions that stick.
Want help pressure-testing your experiment execution? Grab 15 minutes with me here.