3 Things I’ll do differently in my next A/B Test

Last week, I completed Peep’s course on How to Run Tests. It was full of eye-opening insights and I learned a lot.

Without further ado, here are the three things I’ll try to do differently in the future.


No more sequential tests

Let's first understand what a sequential test is. 

Have you ever tried changing only a heading or only an image to check if your conversion rate changes?

Yup, if you compare the conversion rates before and after the change, that is a sequential test. And as per Peep, a sequential test is not a real test for optimizing conversion rate.

The reason is that the conversion rate isn't a fixed number. It changes depending on many factors: an upcoming big event, a move by a competitor, or a public health disaster like COVID-19.

For example, if someone in India runs a sequential test on an eCommerce site during Flipkart's Big Billion Days or Amazon's festive-season sale, many buyers might flock to those marketplaces instead.

The comparison will be off. The test results won't be trustworthy. 

So no sequential testing. 
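
To see why this bites, here's a minimal Python sketch (all traffic numbers and rates are invented for illustration). The sequential comparison absorbs the sale-season effect and reports a lift the new page never earned; a concurrent split exposes both pages to the same conditions and shows no such phantom lift:

    import random

    random.seed(42)

    def conversions(rate, n):
        """Simulate n visitors converting independently at the given rate."""
        return sum(random.random() < rate for _ in range(n))

    N = 10_000  # visitors per period / per arm (hypothetical)

    # Sequential test: the original page runs in a normal week (2% baseline);
    # the new page runs the next week, which happens to be sale season where
    # shoppers convert at 2.6% no matter which page they see.
    old_page = conversions(0.020, N)   # week 1, original page
    new_page = conversions(0.026, N)   # week 2, new page + sale effect
    print(f"Sequential 'lift': {old_page / N:.2%} -> {new_page / N:.2%}")

    # Concurrent A/B test during the same sale week: the time effect hits
    # both arms equally, so no phantom lift appears.
    arm_a = conversions(0.026, N)
    arm_b = conversions(0.026, N)
    print(f"Concurrent split:  {arm_a / N:.2%} vs {arm_b / N:.2%}")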


Always calculate sample size first

One important aspect of any test is to confirm the chances that the results are due to pure chance or to factors you can't explain.

You confirm that by computing the p-value and checking that it falls below the significance threshold, typically p < 0.05.

So in a way, statistical significance tells us that the results are unlikely to be due to chance alone.

But this is where we could err. If we stop our test as soon as it hits a significance level of 95%, or p < 0.05, we might land in the soup.

So a decision to end a test based only on the p-value can be misleading. For example, the graph below shows how the p-values of different tests keep shifting over a 20-day period.

[Image: p-values of different tests fluctuating above and below the 0.05 line over a 20-day period]
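
You can reproduce that wobble with a toy A/A simulation in Python (all numbers are invented). Both arms share the exact same true conversion rate, so any day the p-value dips below 0.05 is a false alarm -- yet a daily peeker would happily stop right there:

    import math
    import random

    random.seed(7)

    def p_value(c1, n1, c2, n2):
        """Two-sided p-value for a pooled two-proportion z-test."""
        p_pool = (c1 + c2) / (n1 + n2)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        if se == 0:
            return 1.0
        z = abs(c1 / n1 - c2 / n2) / se
        return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

    DAYS, DAILY_VISITORS, TRUE_RATE = 20, 500, 0.03  # same rate in both arms
    c_a = n_a = c_b = n_b = 0
    for day in range(1, DAYS + 1):
        c_a += sum(random.random() < TRUE_RATE for _ in range(DAILY_VISITORS))
        c_b += sum(random.random() < TRUE_RATE for _ in range(DAILY_VISITORS))
        n_a += DAILY_VISITORS
        n_b += DAILY_VISITORS
        p = p_value(c_a, n_a, c_b, n_b)
        note = "  <-- a peeker would have stopped here!" if p < 0.05 else ""
        print(f"day {day:2d}: p = {p:.3f}{note}")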

The rule of thumb is to not look for statistical significance before you have reached your precalculated sample size. In other words, you calculate the sample size first, then you let your test run, and only when it hits the required sample size do you check for statistical significance.

Until then, don't check the statistical significance. And never stop your test early.
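
Most testing tools ship a sample-size calculator, but the standard closed-form formula for a two-sided, two-proportion z-test is only a few lines. Here's a sketch; the 3% baseline and the 10% relative lift it targets are assumed example inputs, not recommendations:

    import math
    from scipy.stats import norm

    def sample_size_per_group(p_base, rel_lift, alpha=0.05, power=0.80):
        """Visitors needed per variation to detect a relative lift over a
        baseline conversion rate, via the classic two-proportion formula."""
        p_new = p_base * (1 + rel_lift)
        z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
        z_beta = norm.ppf(power)            # 0.84 for 80% power
        variance = p_base * (1 - p_base) + p_new * (1 - p_new)
        return math.ceil((z_alpha + z_beta) ** 2 * variance
                         / (p_new - p_base) ** 2)

    # e.g. a 3% baseline and a hoped-for 10% relative lift (3.0% -> 3.3%)
    print(sample_size_per_group(0.03, 0.10), "visitors per variation")

Run the test until each variation has seen at least that many visitors; only then is the p-value worth reading.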


Testing isn’t a zero-sum game! 

Instead, it is a game of validated learning. 

You don't run a test just to see which variation outperforms the other. The goal of a test is to learn and to keep closing the gap in your understanding of your customers' behavior.

With each test, you could:

  • Improve your understanding of how to communicate better with your customers
  • Find what is working and what is not
  • Uncover conversion inhibitors and conversion contributors
  • Formulate better hypotheses
  • Increase the probability of getting better results
  • Always win, because the intent is to learn

And last but not least, going in with a learner's mindset can keep your mind from worrying about the results!

The goal of a test is not to get a lift, but rather to learn.

– Dr. Flint McGlaughlin, Managing Director and CEO, MECLABS

Since the idea is to learn continuously from each test, two important things to keep in mind about testing are:

What to test and when? 

First things first: testing should always come after conversion research.

After the conversion research, you find all the obvious problems and fix them -- pick the low-hanging mangoes first.

Next, you exhaust the other hypotheses you formulated based on your research.

Only then do you shift to creative testing, where you formulate hypotheses based on your own ideas and assumptions.

How many changes per test?

It depends on your website traffic. On a high-traffic website, one change per test can work.

On a low-traffic, low-conversion site, it may take months, and in some cases a year, to get enough traffic to reach statistical significance. So a good idea is to test 2-3 changes per treatment, as the sketch below illustrates.
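
The logic, assuming the same closed-form sample-size formula sketched earlier (the 2% baseline and the lift figures here are made-up examples): bundling changes lets you aim for a bigger lift, and a bigger detectable lift slashes the required sample size.

    import math
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        # Same closed-form two-proportion formula as the earlier sketch.
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil(z ** 2 * variance / (p2 - p1) ** 2)

    base = 0.02  # hypothetical 2% baseline conversion rate
    for lift in (0.05, 0.10, 0.30):
        n = n_per_arm(base, base * (1 + lift))
        print(f"{lift:.0%} relative lift -> ~{n:,} visitors per variation")

With these made-up numbers, chasing a subtle 5% lift demands far more traffic per variation than a bold treatment aiming at a 30% lift -- which is exactly why bundling changes makes sense on low-traffic sites.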
