Stop your experiment results from being ignored

Over the last two weeks, we've covered how to design and execute better experiments. Now comes the part that most teams get wrong: turning messy results into clear decisions that stick.


Turn raw data into clear patterns

Every innovation team looking at their experiment data...

Here's why most experiment analysis fails: teams jump straight to conclusions before making sense of their raw data. It's like trying to read a book by randomly opening pages. But your first job isn't to find answers - it's to identify what data you can actually trust.

How to process experiment data properly:

  1. Clean and structure raw data
  2. Remove contaminated data points
  3. Check for collection issues
  4. Normalise across sources

A major bank came to us confused about their latest experiment. Their initial analysis showed their new investing service was a huge success. But when we looked closer at the raw data:

  • 25% of users were actually internal team members
  • Test data from staging had been mixed in with production data
  • Some users appeared multiple times (in some instances, up to 18!)

After we processed it properly, the conversion rate dropped from 9% to 0.8%. The team avoided making a catastrophic investment decision based on bad data.
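
If your raw data lives in a spreadsheet export or a pandas DataFrame, those cleaning steps can be written down as a repeatable script rather than done by hand. Here's a minimal sketch in Python; the column names, the internal email domain, and all the numbers are invented stand-ins for whatever markers identify contaminated records in your own data:

  import pandas as pd

  # Toy stand-in for a raw experiment export: one row per recorded visit.
  raw = pd.DataFrame({
      "user_id":     [1, 1, 2, 3, 4, 5, 5, 6],
      "email":       ["a@mail.com", "a@mail.com", "b@mail.com", "c@ourbank.com",
                      "d@mail.com", "e@mail.com", "e@mail.com", "f@mail.com"],
      "environment": ["prod", "prod", "prod", "prod", "staging", "prod", "prod", "prod"],
      "converted":   [1, 1, 0, 1, 1, 0, 0, 0],
  })

  INTERNAL_DOMAIN = "@ourbank.com"  # hypothetical marker for internal team members

  clean = raw[raw["environment"] == "prod"]                     # drop staging/test contamination
  clean = clean[~clean["email"].str.endswith(INTERNAL_DOMAIN)]  # remove internal users
  clean = clean.drop_duplicates(subset="user_id")               # one row per real user

  print("naive conversion:", raw["converted"].mean())    # inflated by duplicates and insiders
  print("clean conversion:", clean["converted"].mean())  # the number you can actually trust

The point isn't the specific numbers - it's that the cleaning rules are explicit and repeatable, so anyone can re-run them and get the same answer.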

Here are some critical questions to ask yourself when looking at test results:

  1. Is your data actually measuring what you think it is?
  2. Have you identified and removed contaminated data?
  3. Are your data sources consistent and comparable?
  4. What might be skewing your results?


Find real signals in the noise

Looking for patterns in your experiment data...

Most teams fail at insight generation because they're looking for what they want to see. But the goal here isn't to prove yourself right - it's to identify what the data is actually telling you. As humans, we're good at finding patterns, even when they don't exist. That's why we need a structured approach to insight generation.

Here's how you can generate more reliable insights:

  1. Go back to your hypothesis
  2. Look for evidence that could disprove it
  3. Identify unexpected patterns
  4. Challenge your assumptions

A retail client ran an experiment with us to test a new loyalty programme they'd designed. Their initial conclusion? Success! But our structured analysis revealed three fundamental flaws:

  • The high engagement came from existing loyal customers
  • New customer acquisition actually decreased
  • Cost per acquisition doubled

The real insight was that they'd built a better experience for customers they already had while making it harder to attract new ones.
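
One practical way to surface that kind of pattern is to break the headline metric down by segment before reading anything into it. A minimal sketch, again with invented data, where overall engagement looks healthy but the split between existing and new customers tells a different story:

  import pandas as pd

  results = pd.DataFrame({
      "segment": ["existing"] * 6 + ["new"] * 4,
      "engaged": [1, 1, 1, 1, 1, 0, 0, 1, 0, 0],
  })

  print("overall engagement:", results["engaged"].mean())              # looks fine in aggregate
  print(results.groupby("segment")["engaged"].agg(["count", "mean"]))  # existing customers carry it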

Here's what you need to do when analysing your experiment data:

  1. Look for what you didn't expect to see
  2. Pay close attention to segments and subgroups
  3. Question any "obvious" conclusions - twice
  4. Document contradictory evidence you find


Face the truth

When the data shows your hypothesis was completely wrong...

The moment of truth in experiment analysis comes when you have to actually decide if your hypothesis was right or wrong. It sounds obvious, but this is where most teams falter because they'll go to extraordinary lengths to avoid admitting their hypothesis was wrong. They'll redefine success, cherry-pick data, or claim the test wasn't "real" enough.

Here's how you can evaluate your hypothesis more honestly:

  1. Return to your original hypothesis
  2. Compare results to your pre-defined success criteria
  3. Document all evidence, supporting and contradicting
  4. Make a clear call: confirmed, rejected, or inconclusive
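
One way to keep yourself honest is to write the call down as a rule rather than a judgement. A minimal sketch, assuming the success criteria were agreed as metric thresholds before the experiment ran - the metric names and numbers here are invented:

  def evaluate_hypothesis(observed: dict, success_criteria: dict) -> str:
      """Compare observed metrics against pre-defined thresholds and make a clear call."""
      met = [observed[metric] >= threshold for metric, threshold in success_criteria.items()]
      if all(met):
          return "confirmed"
      if not any(met):
          return "rejected"
      return "inconclusive"  # some criteria met, some missed: say so explicitly

  # Thresholds set before the test; results recorded after.
  criteria = {"click_through_rate": 0.04, "paid_conversion_rate": 0.02}
  observed = {"click_through_rate": 0.021, "paid_conversion_rate": 0.006}

  print(evaluate_hypothesis(observed, criteria))  # -> "rejected"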

A tech company we worked with wanted to test a new premium feature set. When we looked at the data, it showed:

  • Click-through was below threshold
  • Conversion to paid customer was below target
  • But users told us they "loved" the concept

Their initial response was to focus on positive feedback, but we helped them face reality: their hypothesis about willingness to pay was wrong, regardless of how much users told us they "loved" it.

Here are some questions you need to ask yourself to avoid falling into the same trap:

  1. What did you specifically predict would happen?
  2. What actually happened?
  3. Where do the results differ from predictions?
  4. What would it take to change your conclusion?


Make the call

Every innovation team when the data doesn't show what they hoped...

The hardest part of analysing your experiments isn't understanding the data - it's having the courage to make the decision the data demands. Most teams get stuck here because they treat decisions as final. But the reality is that good decisions are hypotheses about what to do next, not permanent commitments.

To make better decisions you need to:

  1. Start with your pre-committed criteria
  2. Evaluate against business requirements
  3. Consider the implications of implementation
  4. Define clear next steps

A healthcare company we worked with tested a new patient booking system last year. The results were mixed:

  • Technical feasibility was confirmed
  • User satisfaction was high
  • But cost per booking was 3x target

Instead of debating, they used their pre-committed criteria: anything over 2x target cost was an automatic no-go. Decision made, resources reallocated.
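
A pre-committed criterion like that is simple enough to write down before the experiment starts, which makes it much harder to argue with afterwards. A sketch with invented numbers:

  TARGET_COST_PER_BOOKING = 10.0  # target agreed before the experiment
  MAX_COST_MULTIPLE = 2.0         # pre-committed threshold: over 2x target is an automatic no-go

  def make_call(observed_cost_per_booking: float) -> str:
      if observed_cost_per_booking > MAX_COST_MULTIPLE * TARGET_COST_PER_BOOKING:
          return "no-go"
      return "go"

  print(make_call(30.0))  # 3x the target -> "no-go", no debate needed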

Here are some important principles to keep in mind:

  1. Stick to pre-committed criteria
  2. Make decisions at the right level
  3. Document your reasoning
  4. Plan immediate next steps


Tell the story

Presenting experiment results to stakeholders...

The final step is often the most important: turning your analysis into a compelling story that drives action one way or another. Great analysis means nothing if you can't get stakeholders to understand and act on it. The key principle to remember here is that you need to structure your story around decisions, not data. Start with what needs to happen next, then provide the evidence that supports that decision.

Here's how we structure outcome reports at Future Foundry:

  1. Start with the decision
  2. Show key evidence that drove that decision
  3. Address likely objections
  4. Present clear next steps

A fintech team we worked with completely changed their reporting approach. Instead of overwhelming their sponsors with "here's all our data and analysis...", they led with: "We should kill this project because our three critical assumptions were wrong:"

  • Target users won't pay our minimum viable price
  • Customer acquisition costs are 4x our threshold
  • Technical integration is harder than expected

As a result, they got everyone aligned and made a quick decision to kill the project before sinking more time and money into it.

Here are some key questions to ask yourself when reporting back on your experiments:

  1. What decision needs to be made?
  2. What evidence supports it?
  3. What are the implications?
  4. What happens next?


What this means for you

Good analysis isn't about being right; it's about being clear on what you've learned and what it means for your next move.

Before your next analysis:

  1. Clean your data before drawing conclusions from it
  2. Look for the evidence that proves you wrong
  3. Stick to pre-committed evaluation criteria
  4. Structure findings around decisions rather than the data

Want help making sense of your experiment results? Grab 15 minutes with us here.
