A/B testing: A step-by-step guide in Python
From experimental design to hypothesis testing
In this article we’ll go over the process of analysing an A/B experiment, from formulating a hypothesis and testing it to interpreting the results. For our data, we’ll use a dataset from Kaggle which contains the results of an A/B test on what seem to be 2 different designs of a website page (old_page vs. new_page). Thanks to Akash.
Here’s what we’ll do:
1. Design our experiment
2. Collect and prepare the data
3. Visualise the results
4. Test the hypothesis
5. Draw conclusions
To make it a bit more realistic, here’s a potential scenario for our study:
Let’s imagine you work on the product team at a medium-sized online e-commerce business. The UX designer worked really hard on a new version of the product page, with the hope that it will lead to a higher conversion rate. The product manager (PM) told you that the current conversion rate is about 13% on average throughout the year, and that the team would be happy with an increase of 2%, meaning that the new design will be considered a success if it raises the conversion rate to 15%.
Before rolling out the change, the team would be more comfortable testing it on a small number of users to see how it performs, so you suggest running an A/B test on a subset of your user base.
1. Designing our experiment
Formulating a hypothesis
First things first, we want to make sure we formulate a hypothesis at the start of our project. This will make sure our interpretation of the results is correct as well as rigorous.
Given we don’t know whether the new design will perform better than, worse than, or the same as our current design, we’ll choose a two-tailed test:
H₀: p = p₀
Hₐ: p ≠ p₀
where p and p₀ stand for the conversion rate of the new and old design, respectively. We’ll also set a confidence level of 95%:
α = 0.05
The α value is a threshold we set, by which we say “if the probability of observing a result as extreme or more (p-value) is lower than α, then we reject the Null hypothesis”. Since our α=0.05 (indicating 5% probability), our confidence (1 — α) is 95%.
Don’t worry if you are not familiar with the above: all this really means is that whatever conversion rate we observe for our new design in our test, we want to be 95% confident it is statistically different from the conversion rate of our old design before we decide to reject the Null hypothesis H₀.
Choosing the variables
For our test we’ll need two groups:
A control group - They'll be shown the old design
A treatment (or experimental) group - They'll be shown the new design
This will be our Independent Variable. The reason we have two groups, even though we know the baseline conversion rate, is that we want to control for other variables that could have an effect on our results, such as seasonality. By having a control group we can directly compare its results to those of the treatment group: the only systematic difference between the groups is the design of the product page, so we can attribute any difference in results to the designs.
For our Dependent Variable (i.e. what we are trying to measure), we are interested in capturing the conversion rate. A way we can encode this is to record each user session with a binary variable:
0 - The user did not buy the product during this user session
1 - The user bought the product during this user session
This way, we can easily calculate the mean for each group to get the conversion rate of each design.
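As a quick illustration (the numbers below are made up for this example, not taken from the dataset), the mean of such a 0/1 variable is exactly the conversion rate:

import numpy as np

# Five hypothetical sessions, two of which ended in a purchase
sessions = np.array([0, 1, 0, 0, 1])
conversion_rate = sessions.mean()
print(conversion_rate)  # 0.4, i.e. a 40% conversion rate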
Choosing a sample size
It is important to note that since we won’t test the whole user base (our population), the conversion rates that we’ll get will inevitably be only estimates of the true rates.
The number of people (or user sessions) we decide to capture in each group will have an effect on the precision of our estimated conversion rates: the larger the sample size, the more precise our estimates (i.e. the smaller our confidence intervals) and the higher the chance of detecting a difference between the two groups, if one is present.
On the other hand, the larger our sample gets, the more expensive (and impractical) our study becomes.
So how many people should we have in each group?
The sample size we need is estimated through something called Power analysis, and it depends on a few factors:
Power of the test (1 — β) — This represents the probability of finding a statistical difference between the groups in our test when a difference is actually present. This is usually set at 0.8 by convention (here’s more info on statistical power, if you are curious)
Alpha value (α) — The critical value we set earlier to 0.05
Effect size — How big of a difference we expect there to be between the conversion rates
Since our team would be happy with a difference of 2%, we can use 13% and 15% to calculate the effect size we expect.
Luckily, Python takes care of all these calculations for us:
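A minimal sketch of that power analysis, assuming we use statsmodels’ power module (the exact code may differ, but the inputs are the ones listed above):

import statsmodels.stats.api as sms
from math import ceil

# Effect size for comparing two proportions: 13% baseline vs. 15% target
effect_size = sms.proportion_effectsize(0.13, 0.15)

# Sample size per group for alpha=0.05 and power=0.8
required_n = sms.NormalIndPower().solve_power(
    effect_size,
    power=0.8,
    alpha=0.05,
    ratio=1
)
required_n = ceil(required_n)
print(required_n)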
We’d need at least 4720 observations for each group.
Setting the power parameter to 0.8 means, in practice, that if a difference of the size we estimated (13% vs. 15%) actually exists between our designs, we have about an 80% chance of detecting it as statistically significant in our test with the sample size we calculated.
2. Collecting and preparing the data
Great stuff! So now that we have our required sample size, we need to collect the data. Usually at this point you would work with your team to set up the experiment, likely with the help of the Engineering team, and make sure that you collect enough data based on the sample size needed.
However, since we’ll use a dataset that we found online, in order to simulate this situation we’ll:
1. Download the dataset from Kaggle
2. Read the data into a pandas DataFrame
3. Check and clean the data as needed
4. Randomly sample n=4720 rows from the DataFrame for each group
*Note: Normally we would not need to perform step 4; this is just for the sake of the exercise
Since I already downloaded the dataset, I’ll go straight to number 2.
import pandas as pd

df = pd.read_csv('ab_data.csv')
df.head()
There are 294478 rows in the DataFrame, each representing a user session, as well as 5 columns: user_id, timestamp, group, landing_page, and converted.
We’ll actually only use the group and converted columns for the analysis.
Before we go ahead and sample the data to get our subset, let’s make sure there are no users that have been sampled multiple times.
# Count how many times each user appears in the dataset
session_counts = df['user_id'].value_counts(ascending=False)
multi_users = session_counts[session_counts > 1].count()

print(f'There are {multi_users} users that appear multiple times in the dataset')
There are 3894 users that appear multiple times in the dataset
There are, in fact, 3894 users that appear more than once. Since the number is pretty low, we’ll go ahead and remove them from the DataFrame to avoid sampling the same users twice.
# Drop every user that appears more than once so the same user cannot be sampled twice
users_to_drop = session_counts[session_counts > 1].index
df = df[~df['user_id'].isin(users_to_drop)]

print(f'The updated dataset now has {df.shape[0]} entries')
The updated dataset now has 286690 entries
Sampling
Now that our DataFrame is nice and clean, we can proceed and sample n=4720 entries for each of the groups. We can use pandas' DataFrame.sample() method to do this, which will perform Simple Random Sampling for us.
Note: I’ve set random_state=22 so that the results are reproducible if you feel like following on your own Notebook: just use random_state=22 in your function and you should get the same sample as I did.
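A minimal sketch of that sampling step (the control_sample, treatment_sample and ab_test names are my own, chosen for this walkthrough):

control_sample = df[df['group'] == 'control'].sample(n=4720, random_state=22)
treatment_sample = df[df['group'] == 'treatment'].sample(n=4720, random_state=22)

# Stack the two samples into a single DataFrame for the analysis
ab_test = pd.concat([control_sample, treatment_sample], axis=0)
ab_test.reset_index(drop=True, inplace=True)
ab_test['group'].value_counts()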
Great, looks like everything went as planned, and we are now ready to analyse our results.
3. Visualising the results
The first thing we can do is to calculate some basic statistics to get an idea of what our samples look like.
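Something along these lines would do it, assuming the sampled data lives in the ab_test DataFrame from the previous step:

# Conversion rate, standard deviation and standard error of the mean for each group
conversion_rates = ab_test.groupby('group')['converted'].agg(['mean', 'std', 'sem'])
conversion_rates.columns = ['conversion_rate', 'std_deviation', 'std_error']
conversion_rates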
Judging by the stats above, it does look like our two designs performed very similarly, with our new design performing slightly better, approx. 12.3% vs. 12.6% conversion rate.
Plotting the data will make these results easier to grasp:
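For example, a simple bar plot with seaborn (the plotting details here are my own choice, not prescribed by the analysis):

import matplotlib.pyplot as plt
import seaborn as sns

# Bar plot of the conversion rate per group
plt.figure(figsize=(8, 6))
sns.barplot(data=ab_test, x='group', y='converted')
plt.ylim(0, 0.17)
plt.title('Conversion rate by group')
plt.xlabel('Group')
plt.ylabel('Converted (proportion)')
plt.show()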
The conversion rates for our groups are indeed very close. Also note that the conversion rate of the control group is lower than what we would have expected given what we knew about our avg. conversion rate (12.3% vs. 13%). This goes to show that there is some variation in results when sampling from a population.
So… the treatment group's value is higher. Is this difference statistically significant?
4. Testing the hypothesis
The last step of our analysis is testing our hypothesis. Since we have a very large sample, we can use the normal approximation for calculating our p-value (i.e. z-test).
Again, Python makes all the calculations very easy. We can use the statsmodels.stats.proportion module to get the p-value and confidence intervals:
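A sketch of that test, again assuming the ab_test DataFrame from the sampling step:

from statsmodels.stats.proportion import proportions_ztest, proportion_confint

control_results = ab_test[ab_test['group'] == 'control']['converted']
treatment_results = ab_test[ab_test['group'] == 'treatment']['converted']

# Number of observations and number of conversions in each group
nobs = [control_results.count(), treatment_results.count()]
successes = [control_results.sum(), treatment_results.sum()]

# Two-sided z-test for the difference in proportions, plus 95% confidence intervals
z_stat, pval = proportions_ztest(successes, nobs=nobs)
(lower_con, lower_treat), (upper_con, upper_treat) = proportion_confint(successes, nobs=nobs, alpha=0.05)

print(f'z statistic: {z_stat:.2f}')
print(f'p-value: {pval:.3f}')
print(f'95% CI for control group: [{lower_con:.3f}, {upper_con:.3f}]')
print(f'95% CI for treatment group: [{lower_treat:.3f}, {upper_treat:.3f}]')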
5. Drawing conclusions
Since our p-value of 0.732 is way above our α=0.05 threshold, we cannot reject the Null hypothesis H₀, which means that our new design did not perform significantly differently (let alone better) than our old one :(
Additionally, if we look at the confidence interval for the treatment group ([0.116, 0.135], or 11.6-13.5%) we notice that:
It includes our baseline value of 13% conversion rate
It does not include our target value of 15% (the 2% uplift we were aiming for)
What this means is that the true conversion rate of the new design is more likely to be similar to our baseline than to the 15% target we had hoped for. This is further evidence that our new design is unlikely to be an improvement on our old design, and that unfortunately we are back to the drawing board!