Machine Learning Bid Optimization for Google Ads
by Igor Grigorev and Russell Miller
Introduction
To get an ad shown to Google Search users, advertisers participate in the Keyword Bid Auction. The keyword bid is one of the main factors that decides where your ad ranks on the page, so it's essential to manage and monitor bids.
There are multiple tools for managing keyword bids: the web UI, Google Ads Editor, Google Ads Scripts, and the AdWords API. But the most important choice is whether to manage bids yourself or with Smart Bidding strategies like Target ROAS (tROAS) or Target CPA (tCPA).
The main problem with Smart Bidding strategies is that each one optimizes only a single parameter, such as the number of clicks or CPC. So if your goal is to optimize two parameters (e.g. the number of conversions and CPA), you still have to manage those smart strategies to achieve that. Another downside of Smart Bidding is that we aren't told how it works, and we can't adjust it. So we wanted to build a system we had control over.
Luckily for us, there are ways to do automatic Bid Management: Google Ads Scripts and the AdWords API. We used Google Ads Scripts because it seemed more convenient: it essentially lets you do everything you can do in the web UI. For instance, we can adjust our bids to reflect many parameters, such as customer demographics (age, income, gender, etc.), time of day, day of the week, and device type. So in this article we are going to tell you how we did automatic Bid Management for Google Ads, the system we set up, and the conclusions we drew.
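To give a feel for what a bid adjustment script looks like, here is a minimal sketch. The breakeven CPA value and the ±10% step are illustrative assumptions, not our production settings; the `AdsApp` calls inside `main()` run only in the Google Ads Scripts environment.

```javascript
// Sketch: bid Keywords up or down based on CPA vs. a breakeven CPA.
// The breakeven value and the 10% step are assumptions for illustration.
var BREAKEVEN_CPA = 50;  // replace with your own breakeven CPA
var STEP = 0.10;         // change bids by at most 10% per run

// Pure helper: given the current bid and the observed CPA, return the new bid.
function newBid(currentBid, cpa) {
  if (cpa > BREAKEVEN_CPA) return currentBid * (1 - STEP);  // bid down
  if (cpa < BREAKEVEN_CPA) return currentBid * (1 + STEP);  // bid up
  return currentBid;
}

// Entry point executed by Google Ads Scripts.
function main() {
  var keywords = AdsApp.keywords()
      .withCondition('Status = ENABLED')
      .get();
  while (keywords.hasNext()) {
    var kw = keywords.next();
    var stats = kw.getStatsFor('LAST_30_DAYS');
    if (stats.getConversions() === 0) continue;  // no CPA to judge by
    var cpa = stats.getCost() / stats.getConversions();
    kw.bidding().setCpc(newBid(kw.bidding().getCpc(), cpa));
  }
}
```

A script like this can be scheduled inside Google Ads to run weekly, which is the cadence we discuss below.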
Workflow
Before doing automatic Bid Management we need to answer a question: how do we judge whether a Keyword bid is a proper one? One basic idea is to look at the target metric of the Keyword (CPA, ROAS, CPC, etc.) and compare it to a breakeven value or to the mean value across Keywords. In our case, we wanted to focus on our breakeven CPA (cost per acquisition): the CPA at which we are profitable for a client.
Our idea was to adjust every parameter relative to CPA: bid down Keywords with CPA above our breakeven CPA, and bid up Keywords with CPA below it. Ultimately this cuts costs and lowers the average CPA, because we end up bidding less on underperforming Keywords and more on successful ones. A lower CPA in turn improves related metrics such as ROAS. If we want a bid adjustment script to stay within the same budget, we can replace breakeven CPA with mean CPA, so that lowering the bids of some Keywords is compensated by raising the bids of others.
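The budget-neutral variant can be sketched as follows. The ±15% clip is an illustrative assumption to keep adjustments small; keywords are represented as plain objects rather than `AdsApp` entities.

```javascript
// Budget-neutral bid adjustment: compare each Keyword's CPA to the mean
// CPA across Keywords, so bid increases and decreases roughly offset.
function meanCpa(keywords) {
  var totalCost = 0, totalConv = 0;
  keywords.forEach(function (kw) {
    totalCost += kw.cost;
    totalConv += kw.conversions;
  });
  return totalCost / totalConv;
}

function adjustBids(keywords) {
  var mean = meanCpa(keywords);
  return keywords.map(function (kw) {
    var cpa = kw.cost / kw.conversions;
    // Keywords cheaper than the mean get bid up, expensive ones down,
    // clipped to ±15% (an assumption) so no single change is too large.
    var multiplier = Math.min(1.15, Math.max(0.85, mean / cpa));
    return { text: kw.text, bid: kw.bid * multiplier };
  });
}
```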
Let’s visualize this with the CPA distribution. For an account with a diversified pool of Keywords, the expected graph looks like a half-normal distribution, with probability density function:
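The density appeared as an image in the original post; for reference, the half-normal distribution with scale parameter \(\sigma\) has density

```latex
f(x;\sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}}\,
\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right), \qquad x \ge 0
```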
We get this distribution because most Keywords are successful ones with reasonably low CPA, while others are in a tryout phase or somewhere in the middle. Our approach aims to make the peak near zero higher and thin out the tail of the distribution.
To put it more algorithmically, let’s walk through optimization against the day of the week. We can pull data for every Keyword segmented by day of the week, so we have a CPA for every Keyword on every day. We then calculate the mean CPA for each day of the week, as well as the overall mean CPA, and compare them: it may turn out that Friday, Saturday, and Sunday have a lower CPA than the overall mean while the other days have a higher one. We then raise bids for Friday, Saturday, and Sunday and lower them for the other days. The basic formula is:
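The formula itself was an image in the original; our reading of the rule described above, with a damping factor \(k\) as an assumption, is \( \text{modifier}_d = k \cdot (\overline{\text{CPA}} / \text{CPA}_d - 1) \). A sketch in JavaScript:

```javascript
// Day-of-week bid modifiers from per-day mean CPA.
// Input: cpaByDay maps a day name to that day's mean CPA.
// A day cheaper than the overall mean gets a positive modifier, a more
// expensive day a negative one. The damping factor keeps adjustments
// small; its value (e.g. 0.25) is an assumption for illustration.
function dayOfWeekModifiers(cpaByDay, damping) {
  var days = Object.keys(cpaByDay);
  var overallMean = days.reduce(function (sum, d) {
    return sum + cpaByDay[d];
  }, 0) / days.length;

  var modifiers = {};
  days.forEach(function (d) {
    // e.g. day CPA 50% below the mean, damping 0.25 -> +25% bid modifier
    modifiers[d] = damping * (overallMean / cpaByDay[d] - 1);
  });
  return modifiers;
}
```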
Now that we know what we are doing, let's look at the bigger picture. You may already be asking: where is the machine learning here? It seems like we are just doing optimization, but it's a bit more complex. There is a triangle of Google Ads, search users, and advertisers, and Google seeks to keep every participant of this triangle happy and hold the balance. Obviously, this is done via really fancy Machine Learning models. To people outside Google, this model is just a black box to which we pass parameters (Keywords, bids, etc.) and get results.

In Machine Learning it's common to take some complex model (e.g. XGBoost, a neural network) and search for the best hyperparameters for it — so-called hyperparameter tuning. The simplest approach is to try all configurations and pick the one with the best score, and there are good open-source tools to do it properly (Bayesian Optimization, Hyperopt, etc.), which essentially use a smart random search (random choice corrected with respect to previous results). The problem is that all such methods try out many random hyperparameter configurations before finding the best one, and we don't have that luxury: in the Google Ads case, assessing the performance of each configuration means spending real money to observe real CPC, CPA, and ROAS. That is why we settled on small hyperparameter adjustments in the right direction, which should lead us toward some local maximum of the target metric (only approximately, since the landscape changes all the time and there is no way to find an exact one).
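The "small adjustments in the right direction" idea can be sketched as a single hill-climbing step. This is only an illustration of the control flow: in reality each call to `score()` means spending real budget over days or weeks, so only a few candidates can ever be tried.

```javascript
// One step of local search over a single bid: try a small nudge down,
// no change, and a small nudge up, and keep whichever the (expensive)
// score function prefers. score(bid) stands in for the real-world
// target metric observed after running with that bid -- an assumption
// made so the control flow can be shown in isolation.
function adjustStep(bid, step, score) {
  var candidates = [bid * (1 - step), bid, bid * (1 + step)];
  var best = candidates[0];
  candidates.forEach(function (c) {
    if (score(c) > score(best)) best = c;
  });
  return best;
}
```

Repeating such a step at each scheduled run, with fresh data each time, is what makes the process an iterative learning loop rather than a one-off search.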
It’s important to pick the right Keywords to compare. Keywords can have different purposes (e.g. exact match vs. broad match), so arranging Keywords into groups with approximately the same characteristics is definitely a good idea. To accomplish that, we did a Keyword distribution analysis. The main question for us was whether Keywords with a brand name (e.g. “Puma sneakers”) behave very differently from generic ones (e.g. “sneakers”). In theory these two groups are very distant and should be treated separately. In practice, we found this was true for one account, but for another we didn’t have enough evidence, since multiple statistical tests showed mixed results.
Another notable consideration is keeping bid adjustments small. The world changes all the time, and our calculations are based on just a sample, so we should trust them, but not too much. That’s why we make these adjustments regularly (once per week or month), as new data comes in, turning this into iterative learning of sorts.
Results
- Multiple Bid Adjustment scripts were scheduled to run on a regular basis, optimizing Keyword bids by every available parameter:
- Location
- Keyword Match Type
- Audiences
- Device Type
- Day of the Week
- Time of Day
- We’ve seen stable growth of our target metrics until they reached a plateau.
Comparison of this approach with Smart Bid Strategies:
- Target ROAS and Target CPA underperformed our approach in three one-month-long experiments.
- ECPC showed better results than our bid automation
- ECPC combined with our bidding strategy showed superior results compared with all of the above.
Conclusion
Pros:
- Automation: once a script is set up and scheduled to run, you only need to monitor results and do a sanity check from time to time.
- Diagnostics: just look at the graph of your target metric from time to time and watch it improve.
- Scaling:
- Scripts API allows you to manage multiple accounts from one script.
- Nearly all of this can be done at Microsoft Ads with a minimum amount of changes.
Cons:
- Time: a lot of resources go into the initial setup of all the bid adjustment scripts.
- Tools: Google Ads Scripts is rather archaic (a very old, modified JavaScript), and the IDE is little more than a notepad.
Interested in learning more? Zavient is a data-driven SEM agency. Contact us at [email protected].