How to Design Experiments That Actually Work

Last week we looked at the cultural reasons most corporate experiments fail.

This week, let's get into something more concrete: how to design an experiment that gives you answers you can actually trust.


Step 1: Start With 'What Could Kill This Idea?'

When one wrong assumption brings down your entire project...

The best innovation teams don't design a test to prove they're right. They design it to prove themselves wrong.

Every new idea rests on a set of core assumptions. Some are obvious (users want this), some are hidden (users will switch from their current solution), and some are deadly (users will pay more than they currently do).

Most new ideas don't fail because the team executed poorly, but because they built on faulty assumptions they never properly tested.

Here's how to do assumption mapping right:

  1. List every "must be true" for your idea to work
  2. Break each assumption into testable pieces
  3. Rank them by how critical AND uncertain they are (see the sketch after this list)
  4. Look especially hard for hidden assumptions about user behaviour and economics
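
The ranking in step 3 can be as little as a few lines of code. Here's a minimal sketch in Python - the assumptions and the 1-5 scoring scale are our invention for illustration, not a standard:

  # Minimal sketch: rank assumptions by how critical AND uncertain they are.
  # The assumptions and 1-5 scores below are invented for illustration.
  assumptions = [
      # (assumption, criticality 1-5, uncertainty 1-5)
      ("Users want this at all", 5, 2),
      ("Users will switch from their current solution", 4, 4),
      ("Users will pay more than they currently do", 5, 5),
      ("We can build it on our existing stack", 3, 2),
  ]

  # Multiplying the scores pushes "critical AND uncertain" items to the top.
  for name, crit, unc in sorted(assumptions, key=lambda a: a[1] * a[2], reverse=True):
      print(f"{crit * unc:>2}  {name}")

The point isn't the code - it's that scoring forces you to commit, in writing, to which assumptions are genuinely both critical and uncertain.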

A bank we worked with learned this the hard way. They'd spent £2M developing a beautiful app with amazing UX. Their assumption map had 47 items focused on user experience, technical feasibility, and go-to-market.

But they missed the killer assumption: Their entire business model required merchants to pay 3.5% transaction fees in a market where Square and Stripe charge 1.5%.

No amount of UX would fix that fundamental business model problem. They should have tested pricing acceptance before writing a single line of code.

Here's how you can pressure test your own assumptions:

  • What needs to be true about your users that isn't proven?
  • What market conditions must exist for you to succeed?
  • What economic assumptions underpin your model?
  • What behaviours must change for this to work?


Step 2: Turn Assumptions Into Testable Questions

When someone asks you to write a hypothesis that can actually be proven wrong...

If your hypothesis can't be wrong, you can't learn anything. But we see tonnes of hypotheses from corporate innovation teams that are written specifically to avoid being proven wrong.

Cast your mind back to high school science class (shudders); a good hypothesis needs four key components:

  1. A specific prediction
  2. A measurable outcome
  3. A defined timeframe
  4. A clear threshold for success/failure

Here's how you write hypotheses that actually work:

  • Start with your riskiest assumption
  • Define exactly what would prove it wrong
  • Make it specific enough to measure
  • Add a clear timeline and success threshold
  • Test it with "Could this be proven false?"

A pharma client showed us their hypothesis at the start of a project last year: "Users will find our new portal engaging and valuable." Reasonable - but it's impossible to prove wrong.

We rewrote it: "If we launch this portal, at least 60% of doctors will log in weekly and complete a minimum of 3 patient updates per session within the first 30 days."

When only 12% of doctors logged in weekly, we knew we had to pivot.
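
This is also why we like writing the thresholds down as data before the experiment runs. A minimal sketch, using a dictionary format of our own invention; the thresholds mirror the portal hypothesis above, and the observed figures are illustrative:

  # Pre-register the falsifiable thresholds, then let the numbers force the call.
  hypothesis = {
      "weekly_login_rate": 0.60,  # at least 60% of doctors log in weekly
      "updates_per_session": 3,   # and complete >= 3 patient updates per session
      "window_days": 30,
  }

  # Illustrative results, echoing the 12% figure above.
  observed = {"weekly_login_rate": 0.12, "updates_per_session": 1.4}

  passed = (
      observed["weekly_login_rate"] >= hypothesis["weekly_login_rate"]
      and observed["updates_per_session"] >= hypothesis["updates_per_session"]
  )
  print("Hypothesis supported" if passed else "Hypothesis falsified - pivot")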

The key is to make your hypothesis uncomfortable. It should be specific enough that you're a bit nervous about testing it.

  • Instead of "users will like it" → "X% will pay Y amount within Z days"
  • Instead of "it will improve efficiency" → "Teams will reduce process time by X hours"
  • Instead of "people will use it" → "X% will switch from their current solution within Y weeks"


Step 3: Focus on What Matters Most

When you realise you've been testing the wrong risks...

You should hope your early experiments fail. Because if you're going to fail, you want to fail before you've invested heavily.

The fundamental framework to remember here is impact vs uncertainty (there's a short sketch after this list):

  • High impact + High uncertainty = Test first
  • High impact + Low uncertainty = Validate assumptions
  • Low impact + High uncertainty = Monitor
  • Low impact + Low uncertainty = Ignore
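
The quadrant logic is simple enough to write down. A minimal sketch, assuming a 1-5 scale and a cutoff at 3 - both illustrative choices of ours, not part of any standard:

  # Minimal sketch of the impact/uncertainty quadrants above.
  def next_action(impact: int, uncertainty: int, cutoff: int = 3) -> str:
      high_impact = impact >= cutoff
      high_uncertainty = uncertainty >= cutoff
      if high_impact and high_uncertainty:
          return "Test first"
      if high_impact:
          return "Validate assumptions"
      if high_uncertainty:
          return "Monitor"
      return "Ignore"

  print(next_action(impact=5, uncertainty=4))  # -> Test first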

Here's how you prioritise your riskiest assumptions:

  1. Map all of them on impact/uncertainty axes
  2. Look for dependencies between risks
  3. Find risks that invalidate others
  4. Start with the fastest/cheapest tests that could kill the project

We worked with a team recently that spent six months testing different UI layouts for their new investing platform. Meanwhile, they hadn't validated whether their ideal customer profile would actually trust their brand with their savings.

When we finally tested trust signals with real users, only 2% said they'd consider moving money to an unknown platform. That's six months of UI work wasted because they tested the wrong risk first.

Try this instead:

  • List your top 5 risks
  • For each, ask "If this fails, do the others matter?"
  • Start with the ones where failure makes other tests pointless
  • Look especially hard at market and business model risks


Step 4: Choose the Right Test for the Right Question

Your experiment design vs the question you're trying to answer...

The validity of your results depends entirely on matching your method to your question. It's scientific method 101, but it gets ignored constantly in corporate innovation.

A useful way to think about it breaks testing into three levels:

  • Signal tests (Are we onto something?)
  • Behaviour tests (Will people actually do it?)
  • Commitment tests (Will they pay/commit?)

How to select the right test:

  1. Identify what type of evidence you need
  2. Choose the lowest-effort test that could give that evidence
  3. Design for real behaviour, not stated intentions
  4. Build in clear success/failure criteria

A major retailer came to us excited about survey results showing 85% purchase intent for their new subscription service. We ran a fake door test instead - adding a "Sign Up Now" button to their website that led to a waitlist.

Less than 0.1% of people who saw the real price actually clicked. That's the difference between what people say and what they do.
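
If you run a fake door test yourself, the analysis is just a conversion rate plus an honest error bar. A minimal sketch - the traffic numbers are invented, and the Wilson interval is a standard statistical method we've added, not something from the retailer example:

  import math

  def wilson_interval(successes: int, trials: int, z: float = 1.96):
      # 95% confidence interval for a proportion (Wilson score method).
      p = successes / trials
      denom = 1 + z**2 / trials
      centre = (p + z**2 / (2 * trials)) / denom
      margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
      return centre - margin, centre + margin

  views, clicks = 50_000, 42  # invented numbers
  low, high = wilson_interval(clicks, views)
  print(f"Conversion: {clicks / views:.2%} (95% CI {low:.2%} to {high:.2%})")

The interval matters with small click counts: a handful of clicks either way can move a headline rate a lot, so set your success threshold against the lower bound, not the point estimate.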

You've got to match your method to your question:

  • Testing pricing? → Create a real purchase flow that stops at payment
  • Testing features? → Build a waiting list that requires meaningful commitment
  • Testing behaviour change? → Run a manual service behind a digital front-end
  • Testing market size? → Use ad campaigns to measure real interest


Step 5: Define Success Before You Start

Every innovation meeting without pre-defined success criteria...

Teams without pre-defined success criteria almost always find a way to interpret their results as success. It's not deliberate deception - it's human nature.

But success criteria exist to force decisions, not justify them. They should be:

  • Defined before testing starts
  • Tied to real business requirements
  • Specific enough to force a clear decision
  • Hard enough to matter

Here's how to set proper success criteria:

  1. Start with your business constraints (customer acquisition cost, margins, etc.)
  2. Work backwards to required metrics
  3. Set thresholds that force decisions
  4. Write them down and share them before testing

A fintech client recently showed us their experiment results. Users loved their prototype! Engagement was high! But they still couldn't decide whether to carry on with the project because they hadn't defined what success looked like. When we took a deeper look, their customer acquisition cost would need to be under £40 to make their model work. Their current cost was £180.
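
That back-calculation is usually just arithmetic. A minimal sketch of a CAC ceiling - every input and the simple LTV formula here are illustrative assumptions, not the client's actual model:

  # Derive the maximum viable customer acquisition cost from unit economics.
  # All inputs and the simple LTV formula are illustrative assumptions.
  monthly_revenue_per_user = 10.0  # GBP
  gross_margin = 0.40              # 40% of revenue is margin
  expected_lifetime_months = 10

  ltv = monthly_revenue_per_user * gross_margin * expected_lifetime_months
  max_cac = ltv  # require LTV >= CAC just to break even

  print(f"LTV £{ltv:.0f} -> maximum viable CAC £{max_cac:.0f}")
  # With these invented inputs the ceiling is £40 - so a £180 actual CAC
  # fails before you ever argue about engagement metrics.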

Try this framework:

  • What metrics must you hit for the business to work?
  • What user behaviours must you see?
  • What would make this an obvious yes/no?
  • What result would force you to kill the project?

What This Means For You

Most corporate experiments fail in design, before they ever launch. But good design isn't complicated - it's just uncomfortable. It forces you to be specific about what could kill your idea and honest about what success really requires.

Take your current biggest project. Write down:

  • The one assumption that would kill everything else if wrong
  • A hypothesis so specific it scares you
  • Success criteria in hard numbers

Next week: How to actually run these experiments without politics, bias, or wishful thinking getting in the way.

Want help pressure-testing your experiment design? Reply to this email or grab 15 minutes with us here.

