Do's and Don'ts of A/B Testing Design, Development and QA.
Ndifrekeabasi Essien. DipFA. MSc Data Science (In view)
Design, develop, and quality-assure your A/B test.
Hey there, in my previous article I took a sneak peek at A/B testing: what it is about, the value of A/B testing, when to use it, and the 6V model of A/B testing.
In this article, I will talk about some do's and don'ts when designing your experiments, developing your experiments, and conducting quality assurance for your A/B test experiments. Do have a good read!
Designing
i. One challenger, not two or more. The first is to use one challenger, not two or more. When you design an A/B test, design just one challenger: 'A' is the control (the default) and 'B' is the challenger, not two, three, four, or five variations. Adding more variations has serious implications. Your traffic gets split further, so your minimal detectable effect takes a hit and you will only be able to detect impacts of around 20% or 30%. You also raise your chance of a false positive: with four challengers each tested at a 5% significance level, the chance of at least one false positive is closer to 19% (the sketch after this list puts numbers on both effects). There are lots of strings attached if you go for more than one variation; having two or three variations in the same experiment messes things up.
ii. Don't feel limited; think outside the box, but also think about implementation costs. Do not feel limited: think outside the box and come up with really good design proposals that test your hypothesis, i.e. that there is a problem and that you have a possible solution, which you will design and then measure. Ask yourself: if this experiment becomes a winner, how much will it cost the company to implement? It may not be easy to estimate, but make the calculation and decide whether this is an experiment you should run. The change you make should also be visible to your users; if they can't see it, it won't have an impact.
iii. It should be scannable, easy to process, and usable. Thirdly, your design should be scannable, easy to process, and usable. You can rely on the many usability guidelines out there, and most big websites also publish best-practice usability guidelines that you can draw on.
iv. More than one change can be fine. Whether to make more than one change comes down to research versus optimization. If you are zeroing in on a specific hypothesis and want to prove it is true, it makes sense to limit your design to that one specific change, so that any difference can be attributed to that hypothesis and not to the ambiguity of having changed three elements at once ("it could be this, or it could be that"). If you are there purely for optimization, making more than one change is fine, especially when the team believes the combination will make the biggest impact.
v. Mirror the design with the hypothesis. As a designer, you make a change to have a specific impact on a specific group of users for a specific reason; that is the whole problem statement of your hypothesis. If you create something that is no longer related to the hypothesis, if your solution does not solve the problem the operations and field teams think exists, then you are solving something else entirely.
vi. Consider the minimal detectable effect. The final do and don't is to consider the minimal detectable effect (MDE) of your experiment. Based on the traffic and conversion numbers at the specific location where you will run the A/B test, you can calculate how big the impact must be for you to have a high enough chance of detecting it, if the impact you created is real. This all comes down to power calculations (the sketch below walks through the numbers). If you aim for bigger steps, you take a bigger risk; if you take really small steps and have the data to support them, you take a smaller risk.
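To put numbers on do's i and vi, here is a minimal TypeScript sketch of both calculations. The 5% baseline conversion rate, the 10% relative lift, and the z-values for a two-sided α of 0.05 and 80% power are illustrative assumptions, not figures from any specific experiment:

```typescript
// Minimal sketch: sample size per variant and multiple-comparison risk.
// Assumed numbers: 5% baseline conversion, 10% relative MDE,
// two-sided alpha = 0.05 (z = 1.96), power = 0.80 (z = 0.84).
const Z_ALPHA = 1.96;
const Z_POWER = 0.84;

// Visitors needed per variant for a two-proportion z-test.
function sampleSizePerVariant(baseline: number, relativeLift: number): number {
  const p1 = baseline;
  const p2 = baseline * (1 + relativeLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((Z_ALPHA + Z_POWER) ** 2 * variance) / (p2 - p1) ** 2);
}

// Chance of at least one false positive when testing k challengers,
// each at significance level alpha.
function familyWiseErrorRate(k: number, alpha = 0.05): number {
  return 1 - (1 - alpha) ** k;
}

console.log(sampleSizePerVariant(0.05, 0.1)); // ~31,200 visitors per variant
console.log(familyWiseErrorRate(1)); // 0.05 with one challenger
console.log(familyWiseErrorRate(4)); // ~0.19 with four challengers
```

With these assumed numbers you would need roughly 31,200 visitors per variant to reliably detect a 10% lift, and running four challengers instead of one nearly quadruples your false-positive risk, which is exactly why one challenger is the safer default.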
Develop
For developing your experiment, there are also some do's and don'ts.
i. Do not use the "what you see is what you get" code editor in your A/B-testing solution. Do not use the WYSIWYG editor, especially if you are making bigger changes; you can run into all sorts of problems. You want someone who writes proper code, because that code will be injected into the webpage (a hand-written sketch follows this list).
ii. It is an experiment; if it works, it works. Even if the experiment slows the page down a bit or does not collect all the data you want, you can still run it with the information you do collect, because if the variant wins despite those handicaps, the properly built solution you eventually deploy will probably be at least as much of a lift.
iii. If you cannot make it work within time, propose design changes. As a developer, if you cannot build the design within the available time, go back and propose changes to the design that can be built in time.
iv. Consider injecting client-side code into the default variant as well, unless you are feature toggling or testing server-side. Running both arms through the same injection path helps keep the control and the challenger comparable.
v. Add measurements and extra analytics events to your code. Add measurements and extra analytics events so you can track the A/B experiment and measure it in your analytics tool; the sketch below shows both the injection and the tracking event.
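To illustrate do's i, iv, and v together, here is a minimal client-side sketch written by hand rather than in a WYSIWYG editor. The button id "signup-cta", the experiment name "cta_copy_test", and the track() helper are all hypothetical stand-ins for whatever your page and analytics tool actually provide:

```typescript
// Minimal sketch of hand-written variant code with an exposure event.
// The button id, experiment name, and track() helper are hypothetical;
// swap in whatever your page and analytics tool actually use.
function track(event: string, payload: Record<string, string>): void {
  // Stand-in for your analytics tool's event call.
  console.log("analytics event:", event, payload);
}

function applyVariant(variant: "control" | "challenger"): void {
  if (variant === "challenger") {
    // The actual change: rewrite the call-to-action copy.
    const cta = document.querySelector<HTMLButtonElement>("#signup-cta");
    if (cta) cta.textContent = "Start your free trial";
  }
  // Inject the measurement in BOTH arms (do iv), so control and
  // challenger are tracked through the same code path (do v).
  track("experiment_exposure", {
    experiment: "cta_copy_test",
    variant,
  });
}

applyVariant("challenger");
```

Note that the exposure event fires in both arms; only the visual change is gated on the challenger.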
You can read up about server-side vs client-side A/B testing tools here.
Quality assurance for your A/B testing
Under quality assurance, there are two steps. Step one is to quality-assure the code itself. Step two is to quality-assure the experiment once it is implemented in the tool running it. On step one: should you quality-assure your code on every device and browser combination? It depends; it's a business decision that comes down to risk, trust, and your maturity in running experiments.
That's it for this article. I will be posting more articles in the coming weeks with more knowledge and insights about growth marketing.
If you want to learn more about growth marketing or any other marketing course, feel free to visit the CXL Institute website. They have a wide range of marketing courses taught by top 1% professionals in different fields of marketing who impart first-class knowledge. You can also apply for their mini-degree scholarship programs, just like I did.
Catch you later!