How to Use A/B Testing to Validate or Invalidate Hypotheses for Product Growth
There is a saying that you can never tell what will work until you test! I recall split testing one of my Facebook ads: to be honest, I had a specific ad creative in mind that I felt would perform better, but to my surprise, the second creative outperformed the first and drove more conversions.
The point is that assumptions rarely hold, because our audiences interact with our copy and marketing material differently than we do. Testing is therefore required in order to optimize, hence the need for A/B testing.
A/B testing is the practice of testing one variant (A), say the present copy of a landing page, against a new variant (B) to see whether the change improves performance. It helps you make faster, more trustworthy decisions based on proven data.
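As a rough illustration (not tied to any particular testing tool), a 50/50 split is often implemented by deterministically bucketing each visitor, so the same person always sees the same variant on every visit. The function and experiment name below are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "landing-copy") -> str:
    """Deterministically bucket a user into variant A or B (50/50 split)."""
    # Hash the experiment name plus the user id so that different
    # experiments split users independently of one another.
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the assignment is a pure function of the user id, a returning visitor keeps seeing the same variant, which keeps the measured experience consistent.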
When to use A/B Testing
1. For deployment:
When you launch something on your website, whether a new feature, an update, or something else entirely, you want to determine whether the deployment has a positive or negative influence on the website's conversion rate. If the impact is favourable, it can go live; if not, it should be held back for obvious reasons.
2. Research:
A/B testing can also be used for research purposes. For example, you may have determined that the "Sign Up" button is not noticeable based on your study and wish to modify the button's colour or text. Before implementing the new edit, you might want to run an A/B test to see how your users react to it.
According to the ROAR (risk, optimization, automation, and re-think) model, a website must have at least 1,000 conversions before conducting an effective online A/B test; below that threshold there is too little data to determine what works and what doesn't for users, so false positives are likely. Purchases, leads, clicks, or whatever objective the company is aiming for all count as conversions.
Based on your current website data, the free AB Testguide calculator or the A/B test calculator created by CXL can help you work out the smallest sample size you'll need to run a successful A/B test.
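Those calculators implement a standard two-proportion power calculation. A minimal sketch using only Python's standard library (the baseline rate and minimum detectable effect below are made-up inputs, and the function name is my own):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            min_detectable_effect: float,
                            alpha: float = 0.05,
                            power: float = 0.80) -> int:
    """Minimum visitors per variant for a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a lift from a 5% to a 6% conversion rate needs roughly
# 8,000+ visitors in each variant:
n = sample_size_per_variant(0.05, 0.01)
```

Note how quickly the required sample grows as the effect you want to detect shrinks; this is why low-traffic sites struggle to run meaningful tests.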
To gain insight for A/B tests, conduct research.
The 6V model (value, versus, view, voice, verified, and validate) is a great framework for conducting good research.
- Value of the company:
You need a comprehensive knowledge of the company's mission and vision, a thorough understanding of the product and its KPI focus, and its short- and long-term objectives. These will help you understand the company's values and goals.
- Versus your competitors:
This is where you do competitive research. Who are your competitors? Conduct a SWOT analysis to see what their strengths, weaknesses, opportunities, and threats are in comparison to your brand. You may also sign up for an account on a competitor's site to see how their user onboarding process works.
- View of the data:
Analyze the information you currently have; Google Analytics is one useful tool. What is the users' first experience of the website like? Where do visitors start on the site? Where do they come from? Are there notable differences between segments or products? What does behaviour on the important pages look like? How do users progress, and where do they drop off? Analyzing these data makes efficient A/B testing much easier.
- Voice of the customers:
Analytics can tell you what people are doing, but it won't tell you why, so you'll have to talk to them. Sales and customer service teams are also critical sources, since they engage with users regularly and hold vital information about them. Pop-ups, online feedback surveys, and client interviews can all be valuable.
- Verified data:
Platforms such as Google Scholar, Deepdyve, and Semantic Scholar assist with data verification by utilizing academic literature from a variety of publication formats and disciplines.
- Validated data:
What is the business case for the change? What uplift do you expect? Is there a specific approach for interacting with users?
Hypothesis Setting
Prior to experimenting, a hypothesis must be developed. One reason is that it aligns everyone and provides guidance on what should be evaluated. Based on the research you have done, the hypothesis describes a problem, a proposed solution, and a predicted outcome.
For instance: If I APPLY THIS, then this BEHAVIOURAL CHANGE will happen among THIS GROUP because of THIS REASON. For example: if we reduce the number of fields in this form, then the conversion rate will increase among users because the form takes less time to complete.
Prioritize Your A/B Tests
When you've finished your research and realize there are a lot of variables to test, you'll need to prioritize. PIE and ICE are two well-known prioritization methods. They are nearly identical.
ICE stands for Impact, Confidence, and Effort: the estimated impact of the experiment, your confidence in achieving the expected result (which could be based on previous successes or on a reference point such as what competitors are doing), and the effort required to accomplish it. PIE stands for Potential, Importance, and Ease, and similarly examines the test's chances of success and the impact it will have on the company's bottom line once implemented.
Later on, the PIE was modified to PIPE:
- Potential: What is the chance of the hypothesis being true?
- Impact: Where does the hypothesis have the biggest effect?
- Power: What are the chances of finding a significant outcome?
- Ease: How easy is it to test and implement?
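In practice, scoring frameworks like ICE often reduce to simple arithmetic: rate each idea from 1 to 10 on each dimension and sort the backlog by the average. A quick sketch (the test ideas and ratings below are entirely illustrative):

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """Average the three 1-10 ratings; higher scores get tested first."""
    return (impact + confidence + ease) / 3

# Hypothetical backlog of test ideas with made-up ratings.
ideas = {
    "Shorter signup form": ice_score(impact=8, confidence=7, ease=9),
    "New hero image": ice_score(impact=5, confidence=4, ease=8),
    "Checkout redesign": ice_score(impact=9, confidence=6, ease=3),
}

# Sort highest-scoring ideas to the top of the backlog.
backlog = sorted(ideas.items(), key=lambda kv: kv[1], reverse=True)
```

The scores are subjective, so the value of the exercise is less in the numbers themselves than in forcing the team to compare ideas on the same three dimensions.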
Configure your A/B Test in your tool
Google Optimize is a common tool for A/B testing. Create an experience, choose A/B test as the type, define the default and challenger variants, set up a custom event as the objective, and run the test.
During the testing phase, use Google Optimize to display the update to half of your visitors. The test should run until a statistically meaningful sample of visitors has been reached (often around four weeks). When there is enough information, Google Optimize will tell you whether your adjustment had a substantial impact on conversions. If it made a major positive difference, consider applying it. Keep track of your findings to help you plan future experiments.
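Tools like Google Optimize do this evaluation for you, but the underlying check is essentially a two-proportion z-test on the conversion counts. A standard-library sketch (the visitor and conversion numbers below are invented for illustration):

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Control: 100 of 2,000 visitors convert (5.0%).
# Challenger: 130 of 2,000 visitors convert (6.5%).
z, p = two_proportion_z_test(100, 2000, 130, 2000)
significant = p < 0.05
```

With these made-up numbers the lift clears the conventional 5% significance threshold; with smaller samples the same percentage lift would not, which is why the sample-size step earlier matters.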
Always be testing to get the most out of your data; otherwise, your traffic is wasted.
Cheers to effective A/B testing!