How to Design Experiments That Actually Work
Last week we looked at why most corporate experiments fail for cultural reasons.
This week, let's get into something more concrete: how to design an experiment that gives you answers you can actually trust.
Step 1: Start With 'What Could Kill This Idea?'
The best innovation teams don't start a test by trying to prove they're right. They start by trying to prove they're wrong.
Every new idea rests on a set of core assumptions. Some are obvious (users want this), some are hidden (users will switch from their current solution), and some are deadly (users will pay more than they currently do).
Most new ideas don't fail because the team executed poorly, but because they built on faulty assumptions they never properly tested.
Here's how to do assumption mapping right:
A bank we worked with learned this the hard way. They'd spent £2M developing a beautiful app with amazing UX. Their assumption map had 47 items focused on user experience, technical feasibility, and go-to-market.
But they missed the killer assumption: Their entire business model required merchants to pay 3.5% transaction fees in a market where Square and Stripe charge 1.5%.
No amount of UX would fix that fundamental business model problem. They should have tested pricing acceptance before writing a single line of code.
Here's how you can pressure test your own assumptions:
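No special tooling needed - a spreadsheet works, or even a few lines of code. Here's a minimal sketch of an assumption map as structured data, using the three example assumptions from earlier in this section (the evidence scores and the fourth item are illustrative, not from a real engagement):

```python
# An assumption map as structured data. "evidence" is how much proof
# we already have (0-5); "could_kill" is an honest yes/no on whether
# the idea dies if this assumption turns out to be false.
assumptions = [
    {"claim": "Users want this",                               "evidence": 3, "could_kill": True},
    {"claim": "Users will switch from their current solution", "evidence": 1, "could_kill": True},
    {"claim": "Users will pay more than they currently do",    "evidence": 0, "could_kill": True},
    {"claim": "Our onboarding flow feels polished",            "evidence": 2, "could_kill": False},
]

# The deadly ones: idea-killers we have little or no evidence for.
deadly = [a for a in assumptions if a["could_kill"] and a["evidence"] <= 1]
for a in sorted(deadly, key=lambda a: a["evidence"]):
    print(f"evidence={a['evidence']}  {a['claim']}")
```

If your map has 47 UX items and nothing in the deadly list about pricing or willingness to switch, that's the bank's mistake all over again.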
Step 2: Turn Assumptions Into Testable Questions
If your hypothesis can't be wrong, you can't learn anything. But we see tonnes of hypotheses from corporate innovation teams that are written specifically to avoid being proven wrong.
Cast your mind back to high school science class (shudders). A good hypothesis needs four key components:
Here's how you write hypotheses that actually work:
A pharma client showed us their hypothesis at the start of a project last year: "Users will find our new portal engaging and valuable." Sounds reasonable - but it's impossible to prove wrong.
We rewrote it: "If we launch this portal, at least 60% of doctors will log in weekly and complete a minimum of 3 patient updates per session within the first 30 days."
When only 12% of doctors logged in weekly, we knew we had to pivot.
The key is to make your hypothesis uncomfortable. It should be specific enough that you're a bit nervous about testing it.
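One habit that makes this stick: write the hypothesis down as an explicit pass/fail check before any data comes in. A minimal sketch using the portal numbers above (the function and the enrolment figures are ours, for illustration):

```python
# Hypothesis: >= 60% of doctors log in weekly AND they average >= 3
# patient updates per session, within the first 30 days.
WEEKLY_LOGIN_TARGET = 0.60
UPDATES_PER_SESSION_TARGET = 3.0

def evaluate_hypothesis(doctors_enrolled, weekly_active, total_updates, total_sessions):
    """Return (passed, login_rate, updates_per_session) for the portal hypothesis."""
    login_rate = weekly_active / doctors_enrolled
    updates_per_session = total_updates / total_sessions if total_sessions else 0.0
    passed = (login_rate >= WEEKLY_LOGIN_TARGET
              and updates_per_session >= UPDATES_PER_SESSION_TARGET)
    return passed, login_rate, updates_per_session

# Illustrative figures: 250 doctors enrolled, 30 active weekly (the 12% above).
passed, rate, upd = evaluate_hypothesis(250, 30, 540, 180)
print(f"Pass: {passed} | weekly logins: {rate:.0%} | updates/session: {upd:.1f}")
```

The code isn't the point - the point is that the thresholds exist in writing before the results do, so nobody can quietly move the goalposts afterwards.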
Step 3: Focus on What Matters Most
You should hope your early experiments fail. Because if you're going to fail, you want to fail before you've invested heavily.
The fundamental framework to remember here is impact vs uncertainty:
Here's how you prioritise your riskiest assumptions:
We recently worked with a team that spent six months testing different UI layouts for their new investing platform. Meanwhile, they hadn't validated whether their ideal customer profile would actually trust their brand with their savings.
When we finally tested trust signals with real users, only 2% said they'd consider moving money to an unknown platform. That's six months of UI work wasted because they tested the wrong risk first.
Try this instead:
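Score every assumption for impact (how badly failure hurts) and uncertainty (how little evidence you have), then test from the top. A minimal sketch of that scoring (the assumptions and scores below are illustrative, not from the client engagement):

```python
# Impact and uncertainty on a 1-5 scale; highest product gets tested first.
assumptions = [
    {"claim": "Customers will trust an unknown brand with their savings",
     "impact": 5, "uncertainty": 5},
    {"claim": "Customers will pay a monthly subscription fee",
     "impact": 5, "uncertainty": 4},
    {"claim": "Users prefer a dashboard-first layout",
     "impact": 2, "uncertainty": 3},
]

for a in sorted(assumptions, key=lambda a: a["impact"] * a["uncertainty"], reverse=True):
    print(f"{a['impact'] * a['uncertainty']:>2}  {a['claim']}")
```

Run this over the investing-platform example and the UI layout lands at the bottom of the list - which is exactly why six months on it was premature.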
Step 4: Choose the Right Test for the Right Question
The validity of your results depends entirely on matching your method to your question. It's scientific method 101, but it gets ignored constantly in corporate innovation.
The theory breaks testing into three levels:
How to select the right test:
A major retailer came to us excited about survey results showing 85% purchase intent for their new subscription service. We ran a fake door test instead - adding a "Sign Up Now" button to their website that led to a waitlist.
Less than 0.1% of people who saw the real price actually clicked. That's the difference between what people say and what they do.
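Fake door numbers are small, so it's worth checking how much the result could move with more traffic. A minimal sketch computing a 95% Wilson confidence interval for the observed click rate (the visitor and click counts are illustrative; the retailer's raw figures aren't shared here):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson confidence interval for an observed conversion rate."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative: 10,000 visitors saw the priced sign-up button, 8 clicked.
lo, hi = wilson_interval(8, 10_000)
print(f"Observed: {8 / 10_000:.2%}, 95% CI: {lo:.2%} to {hi:.2%}")
```

Even the top of that interval sits hundreds of times below the survey's 85% - the gap between stated and revealed preference isn't a sampling fluke.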
You've got to match your method to your question:
Step 5: Define Success Before You Start
Teams without pre-defined success criteria almost always find a way to interpret their results as success. It's not deliberate deception - it's human nature.
But success criteria exist to force decisions, not to justify them. They should be:
Here's how to set proper success criteria:
A fintech client recently showed us their experiment results. Users loved their prototype! Engagement was high! But they still couldn't decide whether to carry on with the project, because they'd never defined what success looked like. When we dug deeper, their customer acquisition cost needed to come in under £40 for their model to work. It was running at £180.
Try this framework:
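Write each criterion as a hard threshold tied to a decision, agreed before launch. A minimal sketch (the engagement numbers are illustrative; the £40 limit and £180 result are from the fintech example above):

```python
# Each rule: (metric, threshold, comparator, decision if it fails).
# Agreed before the experiment starts - thresholds, not vibes.
criteria = [
    ("customer_acquisition_cost_gbp", 40, "<=", "pivot: model breaks above £40 CAC"),
    ("weekly_active_rate", 0.25, ">=", "kill: not enough engagement"),
]

# What the experiment actually measured.
results = {"customer_acquisition_cost_gbp": 180, "weekly_active_rate": 0.41}

for metric, threshold, op, if_failed in criteria:
    value = results[metric]
    passed = value <= threshold if op == "<=" else value >= threshold
    verdict = "PASS" if passed else f"FAIL -> {if_failed}"
    print(f"{metric}: {value} (need {op} {threshold})  {verdict}")
```

High engagement passes; £180 CAC fails - and the decision attached to that failure was written down before anyone fell in love with the prototype.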
What This Means For You
Most corporate experiments fail in design, before they ever launch. But good design isn't complicated - it's just uncomfortable. It forces you to be specific about what could kill your idea and honest about what success really requires.
Take your current biggest project. Write down:
Next week: How to actually run these experiments without politics, bias, or wishful thinking getting in the way.
Want help pressure-testing your experiment design? Reply to this email or grab 15 minutes with us here.