Why A/B testing falls short, part 3: You need to wait a looong time
This is part 3 of a 5-part series highlighting challenges you’re likely facing in your A/B testing strategy. Check out the first 2 parts of this series:
Part 1 You’re experimenting with your customers
Part 2 You don’t understand WHY!
Problem #3: You need to wait a looong time
A/B testing takes time. Though platforms such as Optimizely and Adobe Target have made it much easier and more efficient, getting meaningful results still requires work, and often an enormous amount of time.
First (as the name of the test hints), we need to create multiple versions: at the very minimum, an ‘A’ and a ‘B’. That means more thought, more design work, more coding, and more waiting on our colleagues. Second, with multiple versions out in the wild, we’re bound to have more work orchestrating the operation and collecting (and understanding) analytics data.
Most importantly, in order for the test to be meaningful, it must pass a measure of quality in achieving statistical significance. In other words, we need to be convinced that the outcome is not a result of chance.
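To make “not a result of chance” concrete, here is a minimal sketch of the kind of check a testing platform runs under the hood: a two-proportion z-test on conversion counts. The numbers are purely hypothetical, and real platforms add corrections this sketch omits.

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a / n_a: conversions and visitors for version A,
    conv_b / n_b: the same for version B.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical numbers: A converts 200/4000 (5.0%), B converts 245/4000 (6.1%)
p = two_proportion_p_value(200, 4000, 245, 4000)
print(round(p, 3))  # under the usual 0.05 threshold, we'd call this significant
```

Note that even this favorable-looking example needed 8,000 visitors before the difference cleared the bar; with smaller samples, the same observed lift would not.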
How do I reach stat sig?
I’ll spare us all the statistics crash course, because I think there are a couple of simple concepts that should be our key focus:
- A sufficient sample. There needs to be enough data (of both success and failure of our test goals) in order to consider the results to be meaningful.
- A representative sample. If we’re only looking at traffic during the Winter, or only during the holiday season, our test tells us little about what to expect in the Summer. Pretty simple common sense, right? We need to make sure we cover multiple sales cycles to avoid any bias.
One of the reasons a large and representative sample is required is that our data is very limited. Typically, we are only collecting success vs. failure for each person. Furthermore, since we do not know why people are converting (or not), we must collect an enormous sample and draw broad conclusions.
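The two concepts above translate into a concrete waiting time. This is a rough sketch of the standard sample-size formula for comparing two conversion rates; the baseline rate, target lift, and daily traffic figure are all hypothetical assumptions, not data from the article.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, p_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a lift
    from p_base to p_lift with a two-sided test."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = nd.inv_cdf(power)           # critical value for power
    variance = p_base * (1 - p_base) + p_lift * (1 - p_lift)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_lift - p_base) ** 2)

# Hypothetical: 3% baseline conversion, hoping to detect a lift to 3.5%
n = sample_size_per_arm(0.03, 0.035)

visitors_per_day = 500  # hypothetical traffic reaching the test page
days = math.ceil(2 * n / visitors_per_day)  # two arms share the traffic
print(n, days)
```

With these illustrative numbers, each variation needs roughly 20,000 visitors, which at 500 visitors a day means waiting well over two months for a single test.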
We need results NOW!
If you’re marketing for Amazon, Facebook or even Yahoo, you probably have enough traffic and plenty of resources to run thousands of micro A/B tests every week; time is not your enemy. If you’re like the rest of us, though, traffic is somewhat limited and endless iteration is not an option. And frankly, if you’re like me and want results NOW, A/B testing requires you to wait a long time. Probably too long.
Tomer Azenkot is the Chief Revenue Officer at WEVO.
Please follow @Tomer and @WEVO on LinkedIn to hear more ideas on how to improve digital experiences.
About WEVO:
WEVO is the first technology platform that optimizes digital experiences before you go live. Leveraging crowdsourced visitor insight and artificial intelligence, WEVO generates recommendations that have proven to significantly increase conversion. WEVO has successful experience working with Wells Fargo, Fidelity Investments, Blue Cross Blue Shield, Harvard University, Intuit and others.