What Google Isn't Telling You About Quality Score

This week Google published a new white paper on Quality Score and finally updated its five-year-old video on how Quality Score works.

I asked them “why now” and they said it was just time to redo the old stuff, but part of me thinks they’re trying to quell a Quality Score rebellion that I and others (including Frederick Vallaeys) may have inadvertently started – the wording of some points in the new materials seems to directly reflect points I’ve articulated in articles published over the last year.

Image via Kirk Williams - @ppckirk

As usual, Google trotted out the same old “happy users,” “happy advertisers” and “happy Google” platitudes. It’s mostly a rehash of the usual fairy dust about Quality Score, but there are a few interesting nuggets in there worth responding to (and debunking).

Downplaying The Significance Of Quality Score

Google says:

Quality Score Is a Helpful Diagnostic Tool, Not a Key Performance Indicator

Why: Your Quality Score is like a warning light in a car’s engine that shows how healthy your ads and keywords are. It’s not meant to be a detailed metric that should be the focus of account management.

I’ve often argued that Quality Score (or, essentially, click-through rate) is *the* most important key performance indicator to track in PPC: it plays a huge role in AdRank, which in turn directly impacts CPC, ad position, and impression share, and thus the number of conversions and cost per conversion. Across the many accounts we manage here at WordStream, the ones with higher average Quality Scores are almost always better off than those with lower ones. So why the heck is Google downplaying the significance of the metric?
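
To see why, here’s a minimal sketch of the simplified auction math from Google’s earlier public explanations (AdRank = bid × Quality Score; your actual CPC is the ad rank of the advertiser below you, divided by your Quality Score, plus a penny). The bids and Quality Scores below are hypothetical, and the real formula now includes more factors (like ad format impact), but it shows how QS directly drives both position and price:

# Sketch of the commonly cited simplified AdWords auction math.
# Real AdRank includes more factors, so treat this as illustrative only.

def ad_rank(bid, quality_score):
    return bid * quality_score

def actual_cpc(rank_below, quality_score):
    # Second-price rule: pay just enough to beat the ad rank below you.
    return rank_below / quality_score + 0.01

# Two advertisers competing for the same position (hypothetical numbers):
high_qs = ad_rank(bid=1.00, quality_score=8)   # AdRank = 8.0
low_qs  = ad_rank(bid=2.00, quality_score=3)   # AdRank = 6.0

# The high-QS advertiser outranks the low-QS one despite bidding half as
# much, and pays only what's needed to clear the rank below:
print(actual_cpc(rank_below=low_qs, quality_score=8))  # ~$0.76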

I think it’s because, by definition, half of us have low Quality Score accounts: Quality Score grades on a curve, and since we can’t all beat the average expected CTR, roughly half of all advertisers will be “below average.” By my estimation, 66% of Google revenues come from below-average Quality Score keywords (due to the CPC penalties on low-QS keywords and CPC discounts on high-QS keywords). So it’s smart and understandable for Google to downplay the significance of QS – but you, as an individual advertiser, should know better.
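
To illustrate how an estimate like that could come about, here’s a back-of-the-envelope sketch; the QS buckets and CPC multipliers are entirely invented for illustration, not Google’s actual numbers:

# Hypothetical back-of-the-envelope: if CPCs are penalized for low QS and
# discounted for high QS, revenue skews toward below-average keywords even
# when click volume is split evenly. All numbers here are invented.

clicks = 1000    # clicks per QS bucket
base_cpc = 1.00  # CPC for an average-QS keyword

# (quality score, hypothetical CPC multiplier vs. an average-QS keyword)
buckets = [(3, 1.67), (5, 1.25), (7, 1.00), (9, 0.80)]

revenue = {qs: clicks * base_cpc * mult for qs, mult in buckets}
below_avg = revenue[3] + revenue[5]
print(f"Revenue share from below-average QS: {below_avg / sum(revenue.values()):.0%}")  # ~62%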

Furthermore, if the check engine light in your dashboard is flashing, it means your car could break down soon, endangering you and your passengers. I strongly disagree with Google on this one: if your “check Quality Score” light is on, you should definitely focus on fixing it.

Google Finally Admits Performance Of Related Keywords Matters

This was pretty big – the new Google white paper says:

“For Newly-Launched Keywords, Performance on Related Keywords: Does Matter”

Note that Google has never explicitly stated this before. On this one, I’m pretty sure I’m the reason they changed their stance. I recently pointed out that every account has keywords with no clicks and no impression data that still have Quality Scores, and that these “default Quality Scores” were consistently high in great accounts and consistently low in terrible ones. So clearly related keywords can affect the Quality Score of other keywords in your account.

The corollary is that it confirms another theory of mine: having keywords with higher average CTR/Quality Score has a beneficial impact on the other keywords in your account, which is why I always run a branded keyword campaign. (Branded keywords get super-high CTRs and thus can float your whole account higher. In other words, if you’re starting off with a high-QS account, your new keywords will always have a higher Quality Score out of the gate.)

Google Reveals The Weighting Factor Of Ad Format Impact On Ad Rank (And It's Likely An Error)

A few months ago, Google announced that the use of ad extensions would impact AdRank but didn’t provide much detail on the precise weighting of the new “Ad Format Impact” factor.

In the new video, Hal Varian gives us 7 clear equations to work with, and using a little algebra you can reverse engineer the weighting of Ad Format Impact on Ad Rank in relation to other factors (Bid and Quality Score).

Four equations come from a calculation of ad rank at 4:55 in the video:

And three more equations come later, when he calculates the advertisers’ actual CPCs based on the second price of the auction at 6:15. (The actual CPC is the minimum bid required to match the ad rank of the advertiser immediately below; for instance, the first advertiser would need a bid of $1.73 to have an ad rank of 15.)
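
Here’s that second-price logic sketched in Python. The $1.73 and ad-rank-of-15 figures come from the video; the first advertiser’s bid and ad rank plugged in below are a hypothetical reconstruction, since the video’s full inputs aren’t reproduced here:

# Second-price rule from the video: your actual CPC is the smallest bid
# that would still match the ad rank of the advertiser immediately below.

def actual_cpc(your_bid, your_ad_rank, ad_rank_below):
    rank_per_dollar = your_ad_rank / your_bid   # ad rank earned per $1 of bid
    return ad_rank_below / rank_per_dollar

# Hypothetical inputs chosen to reproduce the quoted result: an advertiser
# earning an ad rank of 26 on a $3.00 bid, sitting above an advertiser with
# an ad rank of 15, pays about $1.73.
print(round(actual_cpc(your_bid=3.00, your_ad_rank=26.0, ad_rank_below=15.0), 2))  # 1.73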

He doesn’t give us exact numbers to work with for the Quality and Format impacts, so I’ll use discrete variables to represent the “High,” “Medium,” “Low,” and “No” rated influences of Quality (Q) and Format (F).

Expressed as formulas, these 7 equations look like this, with ad rank on the right of the equation:

Let’s normalize all bids to $1 and directly compare ad rank across identical bids:

What does this show us? These numbers imply that the MAJORITY of ad rank is influenced by the impact of ad formats. For instance, take these two equations:

Same quality impact, but moving the ad format impact from “low” to “high” doubles the ad rank. Some manipulation brings us to:

Implying that the impact of ad formats on ad rank is greater than the impact of Quality Score, which seems a bit overstated, in my opinion.
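
The algebra itself is easy to automate. Here’s a sketch of the method in Python with sympy, using a simple additive model and made-up per-$1 ad ranks; these are not Varian’s actual figures, and the real model’s functional form isn’t public:

# Sketch of the reverse-engineering method with sympy, assuming a simple
# additive model: ad_rank per $1 of bid = Q + F. Equations are illustrative.
from sympy import symbols, solve

Q_high, Q_low, F_high, F_low = symbols("Q_high Q_low F_high F_low")

equations = [
    Q_high + F_high - 8,  # high quality, high format impact -> rank 8 per $1
    Q_high + F_low - 4,   # high quality, low format impact  -> rank 4 per $1
    Q_low + F_low - 2,    # low quality,  low format impact  -> rank 2 per $1
    F_low - 1,            # anchor: ranks alone only identify differences,
]                         # so pin the "low" format impact to 1

print(solve(equations, [Q_high, Q_low, F_high, F_low]))
# {F_high: 5, F_low: 1, Q_high: 3, Q_low: 1} -- with these made-up numbers
# the format swing (F_high - F_low = 4) is twice the quality swing
# (Q_high - Q_low = 2), which is the suspicious pattern described above.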

We suspect Varian’s script wasn’t checked by his engineers, as we see discrepancies in the numbers between minutes 5 and 6 once we normalize them for their bids.

For instance, consider the advertiser with $2 bids. His ad ranks don’t balance when you normalize:

The advertiser with $3 bids:

And the one with $1 bids:

It’s odd that he gives very specific numbers for bids that don’t translate correctly to ad rank, but it’s probably more a marketing video than anything else. Google gets lucky with how few people actually sniff these numbers for accuracy.

Another reason I’m skeptical of Google’s example is that it doesn’t match our own customer data. You can look in your own AdWords account and see how CTR varies for ads with and without extensions; we did that a while ago and found that the use of ad extensions does indeed raise click-through rates, as shown here:

And those ads with extensions raise Quality Scores too, as shown here:

In both cases you can see that there is indeed some uplift, but that it is modest and nowhere near as massive as the Hal Varian examples would have you believe.

For now, I suspect there’s a bug in the example and I’d hope that Google would correct it. (Thanks to Mark Irvine, our resident data scientist, for help with the equations here.)

User Device Is Taken Into Consideration When Computing Quality Score

This is 100% true and validates our own internal findings.

When looking at our customer accounts, we found that average Quality Scores were similar regardless of the mobile share of account clicks, even though the expected CTR for mobile was very different from the expected CTR for desktop. The key takeaway is that Google uses lower expected CTR benchmarks when calculating QS on mobile.
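
Here’s a sketch of what that device-level normalization might look like; the benchmark values are invented, since Google doesn’t publish its expected CTR numbers:

# Hypothetical sketch of device-aware QS normalization: the same raw CTR is
# graded against a device-specific expected-CTR benchmark, so a keyword
# isn't penalized (or inflated) just for running on mobile.
# Benchmark values below are invented for illustration.

EXPECTED_CTR = {"desktop": 0.040, "mobile": 0.025}

def relative_ctr(actual_ctr, device):
    return actual_ctr / EXPECTED_CTR[device]

# The same 3% CTR looks below-average on desktop but above-average on mobile:
print(relative_ctr(0.03, "desktop"))  # 0.75 -> below expected
print(relative_ctr(0.03, "mobile"))   # 1.20 -> above expected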

They’re Still Pretty Vague On How Quality Score Is Calculated

Google says:

Pay Attention to the “Big Three” Component Parts of Ads Quality: ad relevance, expected CTR and landing page experience.

This is true, but they’re listing them out in a table as though they were equally weighted factors. Beating expected CTR trumps all other factors by far, and they should make that clearer. The old QS video had a pie chart showing the components of Quality Score in which two-thirds of the algorithm was based on CTR – I thought that was a better way to explain it.
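
If you want to picture that imbalance, here’s a crude model with weights loosely inspired by the old pie chart; this is emphatically not Google’s actual formula:

# Crude illustrative model of QS as a weighted blend of its components,
# with expected CTR dominating (~2/3, per the old pie chart). These weights
# are NOT Google's actual formula -- just a way to picture the imbalance.

def quality_score(ctr_vs_expected, ad_relevance, landing_page):
    # each input scored 0..1, output scaled to the familiar 1-10 range
    blended = 0.65 * ctr_vs_expected + 0.20 * ad_relevance + 0.15 * landing_page
    return round(1 + 9 * blended)

# Beating expected CTR moves the needle far more than the other two factors:
print(quality_score(1.0, 0.3, 0.3))  # strong CTR, weak everything else -> 8
print(quality_score(0.3, 1.0, 1.0))  # weak CTR, strong everything else -> 6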

Summary

They’re not wrong, but they aren’t telling you the whole story either, for obvious reasons. The level of detail in the new white paper is higher than before, which tells you the old materials were leaving out key information – and I can only assume the new video and white paper omit key details too. So when it comes to Quality Score (still the most important metric in your account), you’d be better off doing your own homework, as I have, than taking Google’s advice verbatim.

About The Author

Larry Kim is the Founder of WordStream. You can connect with him on LinkedIn, Twitter or Google+.

Frantisek Borsik

"If you do not take risks for your ideas you are nothing. Nothing." N.N.T. | #LibreQoS & #bufferbloat :-) PS: Bandwidth is a lie!

10 years ago

Brilliant article, Larry Kim! Thanks for all the information here...

Ozair Akhtar

Digital Marketing Analyst & Strategist | SEO/SEM PPC Expert | E-commerce Growth Consultant | Social Media Marketing Expert | AI & ML/DL Enthusiast | Data Analyst | Data-Driven Insights | x Alibaba Group

10 years ago

Google is making rapid changes for SEOs, and I am now looking more towards SMM and SEM...

Low CTR = higher RPM. Google indulges in all kinds of unfair practices to make sure their RPM keeps growing.

Neil Mammele

Head of Growth at Sworkit Health - Digital Health | Health & Wellness Business Strategy | Sales | Marketing | Strategic Partnerships

10 years ago

Awesome read. Quick question about running branded campaigns that flank your non-branded ones. What kind of budget allocation will you work with on the two? Ex: A hospital and their Oncology department. Would a 70/30 Oncology/Branded split suffice? Thanks again!
