Stop wasting your Google Ads budget with this approach for setting negative keywords

In this post I want to talk about an approach that can identify negative keywords even at low sample sizes, in other words in cases where your AdWords conversion metrics are not enough to discover them.

Why should you care about this "low sample size" segment?

With Google's "close variant" feature, the noise in your search terms is getting bigger and bigger. The problem: it is difficult to identify those patterns when you have just a few clicks on them. My guess is that close variant matches will increase even further in the future. For that reason we have to find a way to clean up the traffic at a very early stage.

Why do the traditional approaches fail?

It is all about sample size! Depending on the conversion rate of your business, you often need hundreds of clicks to judge the performance of a query. Let's look at the most common approaches for identifying negative keywords:

  • Filter on badly performing queries in the Search Query Performance Report: This is the simplest approach. You will get the smallest set of possible negatives and, if you just block the complete queries, you will also block the smallest number of similar new queries that appear in the future.
  • Split queries into n-grams: Sample sizes go up dramatically when you map your performance data onto single words or word combinations. To do this you split all your queries into pieces and look at their performance on that level (see the sketch after this list). When you add those n-grams as broad negative keywords you will also block the maximum number of unseen queries that contain the same pattern. You will already save a lot of money when you do this.
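
To make the n-gram step more concrete, here is a minimal sketch of how the splitting and aggregation could look in Python/pandas. The column names ("query", "clicks", "conversions") and the helper names are my own assumptions, not taken from a specific report export:

```python
import pandas as pd

def ngrams(query, n=1):
    """Split a search query into its n-grams (word combinations of length n)."""
    words = query.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def ngram_performance(queries_df, n=1):
    """Map query-level clicks/conversions onto n-grams and aggregate per n-gram."""
    rows = []
    for _, row in queries_df.iterrows():
        for gram in ngrams(row["query"], n):
            rows.append({"ngram": gram,
                         "clicks": row["clicks"],
                         "conversions": row["conversions"]})
    agg = pd.DataFrame(rows).groupby("ngram", as_index=False).sum()
    agg["conversion_rate"] = agg["conversions"] / agg["clicks"]
    return agg

# Usage (assuming a CSV export of the Search Query Performance Report):
# queries = pd.read_csv("search_query_report.csv")
# unigrams = ngram_performance(queries, n=1)
```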

With all those changes on Google's side (close variants) when it comes to matching search queries to keywords, I realized that even with the n-gram approach sample size is still a problem. There are more and more new words (with just a few clicks) that we have to score somehow.

Let's use micro conversions to judge n-gram performance

If we use micro conversions, the conversion rate will be higher. This means that we need fewer clicks to judge the performance. Perfect!
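
A quick back-of-the-envelope illustration of why that matters (the rates below are made-up example numbers, not data from my account): the number of clicks you need before you can expect a handful of events scales inversely with the event rate.

```python
def clicks_needed(event_rate, events=10):
    """Clicks required before we expect `events` occurrences at a given event rate."""
    return round(events / event_rate)

print(clicks_needed(0.02))  # ~500 clicks at a 2% macro conversion rate
print(clicks_needed(0.40))  # ~25 clicks at a 40% "did not bounce" rate
```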

A micro conversion that is available to everyone who is using Google Analytics is "users that do not bounce". What we need for this is an unsampled(!) GA report with search queries, clicks and bounces.

The idea is simple: you need just a small number of clicks to already get a stable bounce rate for each n-gram. Thesis: there is a strong correlation between high bounce rates and the estimated conversion rate. For me this was the result for conversion rates grouped by bounce rate (to get bigger numbers I rounded the bounce rate to one digit; in the chart it is displayed as "BounceRateCluster"):
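
For reference, here is a sketch of how the bounce rate per n-gram and the rounded "BounceRateCluster" grouping behind that chart could be computed. The input is assumed to be an n-gram table like the one from the sketch above, extended with a "bounces" column from the unsampled GA report:

```python
import pandas as pd

def bounce_rate_clusters(ngram_df: pd.DataFrame):
    """Add bounce rate per n-gram and group conversion rate by rounded bounce rate."""
    df = ngram_df.copy()
    df["bounce_rate"] = df["bounces"] / df["clicks"]
    # round to one digit to get bigger buckets ("BounceRateCluster")
    df["bounce_rate_cluster"] = df["bounce_rate"].round(1)
    clusters = (df.groupby("bounce_rate_cluster", as_index=False)[["clicks", "conversions"]]
                  .sum())
    clusters["conversion_rate"] = clusters["conversions"] / clusters["clicks"]
    return df, clusters
```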

[Chart: conversion rate per BounceRateCluster]

Now my approach is this: estimate the conversion rate per n-gram by looking at its bounce rate, and define a cut-off value that gives you the best trade-off between saved money and lost conversions.
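
As a sketch of how that cut-off could be applied: flag every n-gram whose bounce rate is above a threshold and report how many clicks you would save versus how many conversions you would lose. The threshold (0.8) and the minimum click filter are placeholders; in practice you would sweep several values and pick the trade-off that suits your account:

```python
import pandas as pd

def evaluate_cutoff(ngram_df: pd.DataFrame, max_bounce_rate=0.8, min_clicks=20):
    """Select n-grams above a bounce rate cut-off and quantify the trade-off."""
    df = ngram_df.copy()
    df["bounce_rate"] = df["bounces"] / df["clicks"]
    stable = df[df["clicks"] >= min_clicks]                 # only judge n-grams with some data
    blocked = stable[stable["bounce_rate"] > max_bounce_rate]
    negatives = sorted(blocked["ngram"].tolist())           # candidates for broad negatives
    saved_clicks = int(blocked["clicks"].sum())
    lost_conversions = int(blocked["conversions"].sum())
    return negatives, saved_clicks, lost_conversions
```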

"Cut-Off" Conversions?!! Oh no i'm afraid doing this

I get the point that everybody is a little bit afraid at first when doing this. Good news: there are also ways to set this up that are not as strict:

If you split your keyword match types at campaign level, just attach this bounce-rate-based negative list to e.g. all exact and phrase keywords and keep the modified broad (mBroad) keywords as they are (as a backup). Now you will immediately see changes in your performance and you can shift your budgets from mBroad to exact/phrase.

The great thing about still having the mBroad bucket without the negative list: things might change over time, and the bounce rates can change too (maybe your product is in stock again). If there are wrong or outdated decisions in our negative list considering this new data, we can adjust the list. We get a learning, self-correcting system.

In my opinion you can run a daily/weekly job on autopilot that sets/updates that negative list without being afraid of losing a big conversion segment: you always have the mBroad backup :)
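
To give an idea of what such an autopilot job could look like, here is a small sketch of the update step: recompute the negatives from fresh data, diff them against the stored list, and persist the new version. The file name and CSV format are placeholders, and actually pushing the list to the shared negative keyword list in Google Ads is left out:

```python
import pandas as pd

def update_negative_list(new_negatives, list_path="negatives.csv"):
    """Diff freshly computed negatives against the stored list and persist it."""
    try:
        old = set(pd.read_csv(list_path)["ngram"])
    except FileNotFoundError:
        old = set()
    new = set(new_negatives)
    to_add, to_remove = new - old, old - new   # self-correcting: outdated blocks are dropped
    pd.DataFrame({"ngram": sorted(new)}).to_csv(list_path, index=False)
    return to_add, to_remove
```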

Let me know what you think about this approach or tell me what your best practice looks like! What would be the biggest showstopper keeping you from trying this approach (data processing of the n-grams, getting unsampled GA data, etc.)?

