Most tests don’t win. So how do you get the most from every single one?

“Avoided loss” is a term often used in experimentation that can get a bit of a bad rap. And honestly, it’s our own fault. Those of us who have been in the field for a while have at some point in our careers called out how we avoided losses in a way that has been naive at best.

I’m talking about the times when we’ve run a series of tests, confident that we’d see a nice little uplift somewhere, only to get nothing but red. That happens more often with optimisation programs where the only goal is improving conversion rate - those times we’re only trying to eke out that little extra bit of lift on our digital metrics.

I can think back to times early in my career where we just didn’t have the right research or the right data to know where to focus, leaving us desperately trying to find value in the wrong places.

But hey, we always had those magic words to fall back on - “we managed to avoid loss through this test.” In other words, we saved you from putting something out that would have actually had a negative impact!

Did we really, though?

If you’re running tests purely for potential conversion uplifts and those tests were never part of your site’s roadmap, then the change you tested was never going to be implemented anyway. Nothing has been avoided because it wasn’t going to happen in the first place. That makes it easy for stakeholders to see right through what can be a disingenuous veil and start to feel a sense of mistrust.

Unfortunately, that mistrust often then gets attributed to how testing gets run as a whole. All of a sudden it feels like we’re trying to find a silver lining to those negative results when there should be genuinely great takeaways, learnings, and actions to be had.

Avoiding loss can be the right callout for certain scenarios that we’ll talk about shortly, but it’s far from the only way to get value from running tests.

In fact, finding actions to take that benefit both you and your customers, no matter the result of your tests, is a hallmark of a great practice.

So how do you make sure you’re experimenting in a way that, even if your test “loses”, you still win?

Test on the features that you’ve already started planning

You can’t avoid loss if there was nothing new planned in the first place, right? Let’s first touch on where the idea of avoided loss is incredibly valuable.

Chances are your digital teams aren’t testing every single new feature they put out on your site. Most teams don’t, especially when, in many organisations, testing programs are seen through a pure conversion lens. CRO often gets put into this whole other box that is about optimising what’s already there, not about testing something that isn’t there yet.

The rub is that without running experiments, it’s very difficult to understand the impact of launching those new features or of redesigning existing ones.

We all like to think the changes we make are always positive. The truth is that we don’t really know which of our changes have a positive impact, which have a negative impact, and which have a neutral effect that isn’t really worth our time.

Run experiments on your new features and other big site changes before you launch them.

See if a small portion of your users engage with your chatbot before you put it in front of everyone. Try elements of your new product page designs before you redo all of your templates. Slowly roll out that new payment method to make sure it doesn’t have unintended consequences.
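
To make that concrete, here’s a minimal sketch of how a small, stable slice of users might be bucketed into a new feature before a full launch. It assumes you have a persistent user identifier; the feature name and percentage are purely illustrative, and most experimentation or feature-flag platforms will handle this assignment for you.

```python
# Minimal sketch: deterministic bucketing for a gradual rollout.
# Assumes a stable user identifier; the feature name and percentage
# below are illustrative, not tied to any particular platform.
import hashlib


def in_rollout(user_id: str, feature: str, rollout_percentage: float) -> bool:
    """Return True if this user falls inside the rollout percentage.

    Hashing the feature name together with the user id gives each user a
    stable bucket from 0-99, so they see the same experience on every visit.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percentage


# Example: expose a hypothetical new payment method to 10% of users first.
if in_rollout("user-123", "new_payment_method", 10):
    show_new_payment_method = True   # serve the new experience
else:
    show_new_payment_method = False  # keep the existing checkout
```

Holding everyone else back as a control group is what turns that slow rollout into an experiment you can actually read results from.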

If you were already planning it and now it doesn’t work, that’s a powerful bit of information from your “losing” test result. You can take that insight forward and either scrap the idea altogether or go back to the drawing board and tackle it in a new way. In that way, you really have found a way to avoid what could have been some serious pain.

Now, if we combine what we learn with some user testing to bolster those insights, we’re really cooking.

Pair your testing roadmap with research

Research is incredibly important to make sure that you're getting the most out of “losing” test results. While avoiding loss is great, compounding your learnings and ensuring you can also take action from those negative lift results is even better.

The thing is, without user research helping to guide the experiments that are run, it’s much harder to understand where things have gone pear-shaped.

Let’s say we work for a share trading platform and we’ve launched a shiny new application form. Maybe we’ve seen heavy drop-off at the point where we ask users to fill in their Tax File Number, and we want to test adding some reassuring messaging that helps new customers understand why it’s needed and how their information is kept secure.

If the test loses, it can be difficult to understand what went wrong without the right context.

Was reassurance not actually what users needed there? Was it the Tax File Number input tripping people up, or were issues earlier in the form causing them to drop off? Was the messaging right but the placement on the page wrong?
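
One way to start answering those questions with the data you already have is to compare, step by step, where each variation’s users actually stopped. Below is a minimal sketch assuming you can export per-user form events; the step names, variation labels, and record structure are all hypothetical stand-ins for whatever your analytics tool provides.

```python
# Minimal sketch: compare step-by-step form drop-off between variations.
# The steps, variation labels, and event structure are hypothetical.
from collections import Counter

FORM_STEPS = ["start", "personal_details", "tfn_input", "review", "submit"]

# Hypothetical export: one record per user, with the variation they saw
# and the furthest step they reached in the application form.
sessions = [
    {"variation": "control", "furthest_step": "tfn_input"},
    {"variation": "reassurance_message", "furthest_step": "submit"},
    # ... the rest of your exported sessions
]


def step_completion_rates(sessions, variation):
    """Share of a variation's users who reached each step of the form."""
    subset = [s for s in sessions if s["variation"] == variation]
    reached = Counter()
    for s in subset:
        # A user who reached step N also passed through every earlier step.
        last_index = FORM_STEPS.index(s["furthest_step"])
        for step in FORM_STEPS[: last_index + 1]:
            reached[step] += 1
    total = len(subset) or 1
    return {step: round(reached[step] / total, 3) for step in FORM_STEPS}


for variation in ("control", "reassurance_message"):
    print(variation, step_completion_rates(sessions, variation))
```

If the gap between variations opens up before users ever reach the TFN step, that’s a strong hint the reassurance messaging was never the real problem.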

Interviewing even a handful of your customers or running some usability testing helps to contextualise your results. Now, even in the case where you have a losing result, the context from your research helps you to narrow down which issue still needs to be fixed.

Quick aside: Take your analyses a step further and run your results alongside customer experience analytics tools like Contentsquare or Hotjar. Having access to heatmaps and session replays across the variations of your test provides another level of insight that helps to fill in some gaps.

Here’s the thing: you don’t need to wait until you have the test results to start thinking about what those outcomes might mean.

Understand your learning opportunities right up front

Planning for different results early on is one of the best ways to understand if you’ve got a solid basis for your test. It’s my favourite little exercise to understand if you’ve got a strong hypothesis.

Take the time to understand what the results of an experiment might be telling you:

  • If it outperforms the current site
  • If it’s worse than the current site
  • If it’s a neutral result

If you have learnings and actions that you can take in any of those scenarios, then you’ve got a great experiment on your hands. It means you aren’t coming up empty-handed no matter the outcome.

What you don’t want is to have a test result where the only learning is that it was or wasn’t better at getting people to hit a button or visit a page.

In our share trading platform example up above, that’s a classic case where a loss or a neutral result alone isn’t really telling us enough to do anything with. It’s not helpful to see a negative result without having at least a directional indication for why it’s happened.

Instead, let’s double down on the value of research and say we came into that test armed with user testing results showing that customers didn’t feel comfortable providing their Tax File Number just yet. Now, when we run the reassurance messaging test and it doesn’t win, we know we were on the right track but haven’t quite nailed the execution. We can iterate to provide the right type of reassurance, or we can save collecting their TFN for later in the onboarding process, post-application.
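
Written down as part of the up-front exercise, that plan might look something like the sketch below. The wording and structure are illustrative rather than a template from any tool; the point is simply that every outcome has an agreed action before the test goes live.

```python
# Minimal sketch: the learning-opportunities exercise captured as a plan,
# using the hypothetical TFN reassurance test. Wording is illustrative.
test_plan = {
    "hypothesis": (
        "Reassurance messaging at the TFN step will reduce drop-off, because "
        "user testing showed customers don't yet feel comfortable sharing it."
    ),
    "if_it_wins": "Roll out the messaging; explore reassurance at other sensitive fields.",
    "if_it_loses": "Right track, wrong execution: iterate on the messaging, or move TFN collection post-application.",
    "if_it_is_neutral": "Messaging isn't the lever here: revisit placement, or look earlier in the form.",
}

# If any outcome is missing a planned action, the hypothesis isn't ready yet.
unplanned = [key for key, action in test_plan.items()
             if key.startswith("if_") and not action.strip()]
assert not unplanned, f"No planned action for: {unplanned}"
```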

We’re no longer just “avoiding loss” - we’re gathering valuable information on addressing customer needs that we can actually do something with.

Running our learning opportunities exercise right up front when we’re planning our tests gives us the chance to address contextual gaps or shift to another test altogether, so that we consistently learn something actionable.

Summary

Ultimately, getting the most out of every test, whatever the result, is a product of good planning. Expanding how and why you test, incorporating research, and validating your hypotheses will all go a long way towards getting the most from every experiment.

All this is to say, terms like “avoided loss” shouldn’t raise red flags. Embrace those results and know how to get the most out of them. Done right, you’ll even see losing tests that lead to better long-term outcomes than many supposed winners.


Need any help getting the most from ALL of your tests? Reach out directly or visit us at drumlinedigital.com.au
