How to Approach A/B Testing for Conversion Optimization (CRO)

A/B testing is the most prominent and essential element of the conversion rate optimization process. It is a scientific UX research method that compares two versions of a web page and uses statistical analysis to determine which version performed better. The science behind A/B testing, and the use of statistics to interpret the data it produces, make it so alluring that CRO is often mistaken for nothing more than A/B testing.

One prominent reason for this confusion is how A/B testing tools educate their target audience: they sell the whole CRO process white-labeled as A/B testing. There is nothing wrong with that from their business perspective, as their primary goal is to sell A/B testing tools. However, it obscures the balance of effort that each step of the conversion optimization process requires. That is why I have decided to share my own learnings, gained through observation and experience, on how to approach A/B testing.

First of all, let's clear up a misconception: A/B testing is not a process in itself. Rather, it is a scientific methodology for verifying your hypotheses. It is more realistic to begin by thinking about the CRO process than about A/B testing. When you realize that your business goal is conversions, not testing or experimentation, your focus shifts toward understanding the whole process rather than the tools, and you can make better decisions about when to start testing, what to test, and how to test.

When Should You Begin A/B Testing?

A basic benchmark for when you certainly need to start A/B testing is when you have optimized your website's conversion rate through the actionable insights of an iterative conversion research process to the point where that process no longer yields any significant improvement. Push hard before you reach this point of exhausting all your options with conversion research. This matters because, as you keep iterating through the research, you build an understanding of what your audience wants and what you are offering them.

When your proposition aligns with the needs of your audience, you begin to see conversions increase. This is the point where you put effort into growing your audience pool using insights extracted from heuristic evaluation, user testing, and tapping into the right data from web analytics. Your website, product, or service is your offer, and you gather a larger and larger audience interested in that offer through a cycle of improving and extending your user personas.

If your website does not have enough traffic, you do not need testing; you need to work on what you offer with the help of a proper conversion research process. Even when you think you have a sufficient amount of traffic and your research process has peaked, keep asking yourself whether your audience truly has the potential to convert. If you are not sure, do not waste your A/B tests on an audience that is not going to convert anyway; instead, iterate through the research process again, starting by redefining your user personas.

Remember: when you have a strong, scientifically driven conversion research process, you will have strong, evidence-based hypotheses to test.

Defining Your A/B Testing Strategy

A strong conversion research process gives you the confidence that you have figured out the obvious solutions to specific problems. The usability issues are resolved, the performance of your website or landing page is optimal, and the copywriting is solid. It becomes apparent that you are hitting the local maximum of a particular web page, design, or information architecture.

This is the point where you have clear hypotheses, for example that a small change to a headline or to the color of a certain call-to-action button can significantly improve the conversion rate. You now have your original version (A), also known as the control, and a modified version (B), also known as the variation, embodying the proposed hypothesis. Once you know exactly what you are going to test against what, A/B testing tools help you create and run your experiments.

Unfortunately, unless you have millions of visitors on your website, you cannot afford the luxury of testing just one small change, because detecting its small effect requires a huge sample size. Instead, you can group multiple hypotheses around high-impact changes and put those 10 or 20 big changes on the variation, testing it against the control. Your ability to attribute success or failure to any one change decreases, but you will be able to get clear, significant results.
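To see why a single small change demands so much traffic, here is a minimal sketch of the standard sample-size formula for a two-sided, two-proportion z-test. The function name and default significance/power values are illustrative, not from the article.

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a lift from rate p1 to rate p2
    with a two-sided two-proportion z-test at the given alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for significance
    z_beta = z.inv_cdf(power)            # critical value for statistical power
    p_bar = (p1 + p2) / 2                # pooled conversion rate
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A tiny lift (2.0% -> 2.2%) needs far more traffic than a big one (2.0% -> 3.0%):
print(sample_size_per_variant(0.02, 0.022))
print(sample_size_per_variant(0.02, 0.03))
```

Running the numbers makes the trade-off concrete: detecting a 0.2-point lift takes roughly twenty times the traffic of detecting a 1-point lift, which is exactly why bundling bold changes into one variation pays off on lower-traffic sites.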

When choosing between A/B and multivariate testing, remember that multivariate tests require more traffic and more time to reach statistical significance. Most of the highest-rated CRO agencies run about one multivariate test for every ten A/B tests, using A/B tests to determine the winning layout and multivariate tests to determine the winning interactions between elements on the variations.

How to Conduct A/B Testing

When you have an iterative conversion research process, you keep surfacing a whole bunch of issues. To get started, categorize your findings into the following five categories:

  • Just Do It (easy to fix issues and micro-conversion opportunities)
  • Instrument (technical issues that can be fixed without any testing)
  • Investigate (things that need further research to identify the exact problem)
  • Hypothesize (we know there is a problem but we don't clearly see a solution)
  • Test (items for which we have a strong hypothesis and we clearly know what to test)

Once your research-based findings are sorted into the categories above, the next step is to score the issues in each category with a 5-star scoring system:

  • 5-star (most critical issues which are visible and can have the highest impact)
  • 4-star (critical issues yet not so visible to the website visitors)
  • 3-star (major usability or conversion issue)
  • 2-star (lesser usability or conversion issue)
  • 1-star (minor usability or conversion issue)

Along with the scoring system, also consider the ease of implementation (time, complexity, and risk) as well as an opportunity score, which identifies the issues whose fixes can deliver the biggest lift in conversion rate.
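One simple way to combine these dimensions is a weighted priority score. This is a hypothetical sketch: the weights, the `priority` function, and the example backlog items are my own illustration of the idea, not a formula from the article or from CXL.

```python
def priority(severity, ease, opportunity):
    """Combine the 5-star severity score with ease-of-implementation and
    opportunity scores (each 1-5). Weights are illustrative: severity and
    opportunity count double because they drive conversion impact."""
    return severity * 2 + ease + opportunity * 2

# Hypothetical backlog of research findings from the "Test" category:
backlog = [
    {"issue": "Unclear CTA copy on pricing page", "severity": 5, "ease": 4, "opportunity": 5},
    {"issue": "Footer link color contrast",       "severity": 1, "ease": 5, "opportunity": 1},
    {"issue": "Checkout form asks redundant fields", "severity": 4, "ease": 2, "opportunity": 4},
]

# Work through the backlog from highest to lowest priority:
for item in sorted(backlog,
                   key=lambda i: priority(i["severity"], i["ease"], i["opportunity"]),
                   reverse=True):
    print(item["issue"], priority(item["severity"], item["ease"], item["opportunity"]))
```

However you weight the scores, the point is to make prioritization explicit and repeatable rather than a matter of whoever argues loudest in the planning meeting.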

Once you have prioritized what you want to put to an A/B test, consider the following three very important factors:

  1. large enough sample size
  2. long enough testing duration
  3. and only then, look for statistical significance

To work out these factors, use an A/B testing calculator. Ignoring them can easily lead to inaccurate results.
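Under the hood, most A/B testing calculators for conversion rates run a two-proportion z-test like the sketch below. The function name and the example visitor/conversion counts are illustrative assumptions, not figures from the article.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-sided two-proportion z-test: the p-value for the observed
    difference in conversion rate between control (A) and variation (B)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally:
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 10,000 visitors per variant; control converts 200, variation converts 260:
p = ab_test_p_value(10000, 200, 10000, 260)
print(f"p-value: {p:.4f}", "-> significant" if p < 0.05 else "-> not significant")
```

Note that a p-value only answers the third factor; it says nothing about whether the sample size and test duration were adequate, which is why peeking at significance early so often produces false winners.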

Integrate your A/B testing tool with Google Analytics, send all your experimentation data there, and apply advanced segmentation to that data to analyze the results in Google Analytics.

Conclusion

I have extracted and shared these insights with you with the help of two highly recommended courses on How to Run Tests and Testing Strategies by Peep Laja from CXL Institute. Share your thoughts and experience with others in the comments below about how you approach A/B testing and what you agree and disagree with in this article.
