The top 10 errors Sembot found in Google Ads campaigns
Lukas Lochocki
PRO-impulsando eCommerce con Paid Media | CEO & coFounder at PRO-impulsa.com | Agencja.com | Sembot.com
In the last two months we have analyzed more than 100 Google Ads accounts, with spend ranging from several hundred to more than one hundred thousand euros. We found a number of interesting things that significantly impacted the profitability of the marketing activity. Importantly, most accounts had serious errors that could be eliminated with just "one click".
We want to present our subjective list of ten things that you can easily review - in no particular order.
A few items in the product ads account for the majority of the revenue while consuming a small fraction of the budget.
Each point on the chart represents a single item. You can see that this client has a single product responsible for the result of the entire campaign.
To check whether this problem appears in your campaigns, open a Shopping campaign in Google Ads, choose "Products" in the side menu, add the columns (conversion value, conversion value/cost) and review the results.
Unfortunately, in Google Ads Shopping campaigns we don't work at the level of a single product but with product groups that we can subdivide (in most cases these are all products, or the store's brands/categories).
This problem can be solved with the so-called SPAG (Single Product Ad Group) architecture, one ad group per SKU. This method requires creating, for each product, a separate ad group and product ad, and targeting the corresponding product by its ID. It is time-consuming work, and it isn't easy for businesses with a wide product range unless automatic generation and synchronization of the campaign is used (we use our own generators of SPAG campaigns and of precise text campaigns based on the product list).
The SPAG architecture allows you to significantly increase the profitability of the campaign, since we set the cost per click (or CPA/ROAS target) at the level of the individual product and not for a whole set of products, which obviously have different margins.
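To make the idea concrete, here is a minimal sketch (not our actual generator - the campaign name, column names and bidding rule are purely illustrative) of how one ad group per SKU could be generated from a product feed and exported as a bulk-upload file:

```python
import csv

# Hypothetical feed rows; in practice these would come from your product feed.
products = [
    {"id": "SKU-001", "title": "Foam mattress 160x200", "price": 299.0},
    {"id": "SKU-002", "title": "Pocket spring mattress 140x200", "price": 459.0},
]

def cpc_for(product):
    # Illustrative bidding rule: bid proportionally to the product's price (or margin).
    return round(max(0.10, product["price"] * 0.002), 2)

with open("spag_bulk_upload.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Campaign", "Ad group", "Product group", "Max CPC"])
    writer.writeheader()
    for p in products:
        writer.writerow({
            "Campaign": "Shopping - SPAG",
            "Ad group": f'{p["id"]} | {p["title"]}',
            # One product group per ad group, subdivided down to a single item ID:
            "Product group": f'item id = "{p["id"]}"',
            "Max CPC": cpc_for(p),
        })
```

Each ad group then contains exactly one product, so its bid (or CPA/ROAS target) can reflect that product's own margin.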
Lack of parameters in the feeds for product ads (PLA)
A large share of the customers had "weak" product feeds. By weak I mean missing parameters such as categories and other fields, or incorrectly constructed product names.
The lack of parameters means that the ads are not shown in aggregated search results. Aggregated results are those where the query describes the product (or a group of products) precisely but doesn't specify its variant. If several advertisers have product feeds matching such a query, Google will show a product listing with a drop-down list of variants to choose from, and will skip advertisers with an incomplete feed structure.
Explore the details of your products (in Google Merchant Center or directly in Google Ads) and check whether you have filled in fields such as "color", "item_group_id" (for variants), "capacity", "size" and others.
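A quick way to spot the gaps is to check every feed item against the attributes you expect to be filled in. Below is a minimal sketch that assumes the feed has already been parsed into Python dictionaries; the attribute list is only an example:

```python
# Attributes we want every variant to carry (an illustrative list, adjust to your products).
RECOMMENDED = ["color", "size", "item_group_id", "brand"]

# Hypothetical parsed feed items.
items = [
    {"id": "SKU-001", "title": "Foam mattress 160x200", "brand": "Acme"},
    {"id": "SKU-002", "title": "Foam mattress 140x200", "brand": "Acme",
     "color": "white", "size": "140x200", "item_group_id": "MATTRESS-FOAM"},
]

for item in items:
    missing = [attr for attr in RECOMMENDED if not item.get(attr)]
    if missing:
        print(f'{item["id"]}: missing {", ".join(missing)}')  # SKU-001: missing color, size, item_group_id
```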
Lack of app (and other placement) exclusions in display ads for remarketing campaigns and brand campaigns.
We have observed that exclusions in display ads are often poorly managed. Very few accounts had regular, deliberate management of placement exclusions.
The fundamental problem was the lack of exclusions for mobile apps. We have seen accounts where mobile apps consumed up to 99% of the display budget. Of course, this traffic generally brings no conversions.
It is worth noting that the targeting settings (interests, topics) change nothing in this respect; the ads appear inside apps anyway.
Most of the accounts used display only for remarketing, and the thesis immediately appears that we don't care what sites our visitor browses. Totally agreed. Unfortunately, various types of bots collect cookies on our websites and then run into our ads and click on them.
Here is an interesting example (during the rest of the month there were not 0 clicks but a few dozen; the scale of the spike distorts the chart - a remarketing campaign):
And this is how the details look:
The campaign had been running for years, and the share of mobile apps had always been minimal.
Of course, the incident was reported to Google along with the server logs, but the answer was that nothing indicated abuse.
It is worth mentioning that this happened with the frequency option set to "optimize the number of impressions per day per user", so we recommend setting a hard cap instead (for example, max 3).
Unfortunately, at the end of 2018 Google removed the possibility of excluding apps from display targeting on mobile devices. The announcement, which begins with the words "To simplify ...", describes it:
We recommend that anyone who has no explicit reason to advertise there completely exclude mobile devices from their display campaigns.
You can do this in the Display Network campaign details > Settings > Additional settings > Devices.
If for any reason you want to keep this channel (mobile display), you should create campaigns targeted at specific placements, with their own frequency settings (impressions per day per user) managed independently.
You can exclude app categories in the content settings: Campaign > Settings > Additional settings > Content exclusions. Unfortunately, this will not exclude, for example, flashlight apps and other similar ones.
We recommend checking where your ads have been shown; to see it, go to: Display Network campaigns > Placements > Where ads showed.
A low keyword Quality Score on the Search Network
Quality Score
Most of the accounts we've dealt with had an average keyword Quality Score of 5-5.5 out of 10 (and often even lower). This means that clicks on the Search Network are almost twice as expensive as they would be with a score of 10/10.
In general, it's possible to obtain a 10/10 score, provided that the keyword appears in the ad title and on the landing page (ideally in the page title, the main heading and the URL). A 10/10 score is obviously a difficult goal, but an average score of 8/10 already saves significant money (about 33% lower costs compared to 5/10).
In addition, a higher Quality Score affects the Ad Rank thresholds that determine whether the ad will be shown at all (that is, it affects the reach).
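The "almost twice as expensive" estimate follows from the commonly cited simplified model of the ad auction, in which the actual CPC is roughly the Ad Rank of the advertiser below you divided by your Quality Score. A small illustration with made-up numbers:

```python
# Commonly cited simplified approximation of the Google Ads auction
# (an illustration, not Google's exact pricing):
#   actual CPC ~ Ad Rank of the advertiser below you / your Quality Score + 0.01
def approx_cpc(ad_rank_below: float, quality_score: float) -> float:
    return ad_rank_below / quality_score + 0.01

print(approx_cpc(20, 5))   # ~4.01 at Quality Score 5
print(approx_cpc(20, 8))   # ~2.51 at Quality Score 8 -> roughly a third cheaper
print(approx_cpc(20, 10))  # ~2.01 at Quality Score 10 -> roughly half the price
```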
You have to look at Quality Score through the prism of the impressions of the corresponding keywords, not the number of keywords with a given score (otherwise it may look like most keywords score 10/10 while most of the traffic goes to phrases scoring 5/10).
In our analysis we adopted the following measure:
The average Quality Score is the sum, over all keywords, of each keyword's Quality Score multiplied by its impressions, divided by the total number of impressions of those keywords.
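In code, the same measure could look like this - a minimal sketch in which the data would come from an exported keyword report (field names are illustrative):

```python
# Impression-weighted average Quality Score.
keywords = [
    {"quality_score": 10, "impressions": 1_000},
    {"quality_score": 5,  "impressions": 40_000},
    {"quality_score": 7,  "impressions": 9_000},
]

weighted_qs = (
    sum(k["quality_score"] * k["impressions"] for k in keywords)
    / sum(k["impressions"] for k in keywords)
)
print(round(weighted_qs, 2))  # 5.46 -- dominated by the high-traffic keyword
```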
The average result for the analyzed accounts was around 5.25, so the analyzed accounts overpay for the traffic they acquire.
Architecture
Related to the previous section: a low Quality Score is usually associated with an incorrect structure of the advertising campaigns, which prevents precise management at the keyword level.
To have full control over ad impressions, the SKAG (Single Keyword Ad Group) architecture is used, which requires creating a separate ad group for each keyword (each group containing that keyword in different match types). Building the account this way gives us ads strictly related to a specific keyword (high relevance, so a high Quality Score and high CTR) and a precise choice of landing page (high conversion and, again, a high Quality Score).
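Here is a minimal sketch of what a simple SKAG generator could produce; the keyword list, headlines and URLs are purely illustrative:

```python
# One ad group per keyword, the keyword added in two match types,
# and the ad headline mirroring the keyword.
keywords = ["foam mattress 160x200", "pocket spring mattress"]

ad_groups = []
for kw in keywords:
    ad_groups.append({
        "campaign": "Search - SKAG",
        "ad_group": kw,
        "keywords": [f"[{kw}]", f'"{kw}"'],       # exact and phrase match
        "headline": kw.capitalize()[:30],          # keyword repeated in the ad title
        "final_url": f"https://example.com/search?q={kw.replace(' ', '+')}",
    })

for group in ad_groups:
    print(group["ad_group"], group["keywords"])
```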
The above can be approximated with DSA campaigns or "keyword insertion", but the only justification for doing so is a lack of time.
Bad architecture makes it difficult to achieve above-average results: it lowers conversion rate and CTR due to the weak match between the ad, the landing page and the keyword; it also raises the CPC (through Ad Rank) because of a low Quality Score, and in some cases it reduces reach because the Ad Rank threshold isn't met.
Dozens of keywords per ad group, one ad, lack of extensions
This is a continuation of the section above: we usually found situations in which totally unrelated keywords sat in a single ad group (and that ad group didn't have ads with dynamic keyword insertion).
Apart from increasing CTR, extensions increase Ad Rank, which affects the cost per click and whether the ad is shown at all.
Many accounts had only one ad created per ad group. It matters less whether we believe that changing a phrase in the ad copy will significantly affect conversions; what matters more is that the Google system expects several ad creatives in each ad group (we recommend at least 3).
Broad match
Unexpectedly many accounts used broad match. This solution is extremely poor and ineffective. Remember that the statistics shown next to a broad-match keyword are the sum over all the search terms it was matched to (for example, according to Google the broad match keyword "bed mattress" also matches ...). A keyword of this type has a Quality Score that is the average of the quality scores of all the searches for which the ad appeared. Managing such a keyword effectively is unnecessarily hard work (you can add thousands of exclusions, but what for?) and gives much worse results than using, for example, the SKAG architecture.
The only historically sound reason to use broad match was to obtain a list of potential keywords for precise campaigns, but nowadays it is much easier, faster and more efficient to do this with a DSA campaign, which builds a keyword base and adequate reach based on the website content alone.
Do you have keywords in broad match (no brackets, no quotation marks, no plus signs - just the bare keyword text)? Check which phrases they are shown for: tick the checkbox next to the keyword in the keyword list, choose a sufficiently long period (preferably the maximum date range) and select "Search terms" from the blue bar above the keyword list. The results may surprise you.
Lack of supervision over users' searches and of adding search terms as keywords or exclusions
In Search Network campaigns > Keywords > Search terms we can see what users actually type when they search. In the filters we can choose an option so that the system shows only the search terms that have no match type (Filter > Match type > None), that is, terms that are not keywords in the campaigns and are not excluded either.
The cost ratio between these search terms and all keywords on the Search Network shows how accurately we collect traffic on the Search Network - in other words, how precisely our keywords intercept the user's query.
Of course, it's impossible (and it doesn't make sense) to capture every phrase entered by users; the problem begins when the uncaptured phrases account for 80% of the total budget - and we did see such results (or the ads were shown for phrases that had nothing to do with the offer).
In our opinion 20-30% of unmatched phrases on the Search Network is an acceptable result, but unfortunately they are normally around 70-80%.
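This ratio is easy to compute from an exported search terms report. A minimal sketch, with illustrative column names and values:

```python
# Share of Search Network cost coming from search terms that are not yet
# added as keywords (match type "none" in the report).
search_terms = [
    {"term": "cheap mattress",        "match_type": "none",  "cost": 120.0},
    {"term": "foam mattress 160x200", "match_type": "Exact", "cost": 310.0},
    {"term": "mattress disposal",     "match_type": "none",  "cost": 45.0},
]

total_cost = sum(t["cost"] for t in search_terms)
unmatched_cost = sum(t["cost"] for t in search_terms if t["match_type"] == "none")
print(f"Unmatched share of cost: {unmatched_cost / total_cost:.0%}")  # ~35%
```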
Lack of Mobile/Desktop segmentation
Both Search Network ads and Shopping ads behave completely differently on mobile devices and on desktop computers.
Most accounts had no bid adjustments between these devices.
Personally, we believe that desktop and mobile campaigns have to be split into two independent campaigns (usually identical, i.e. one with a -100% bid adjustment for desktop and the other with -100% for tablets and mobile).
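Schematically, such a split could look like the sketch below; the bids are made up and only illustrate that each device class gets its own budget and CPC level:

```python
# Two near-identical campaigns; a -100% adjustment excludes a device class entirely.
campaigns = [
    {"name": "Search - Desktop", "default_max_cpc": 0.80,
     "bid_adjustments": {"mobile": -1.00, "tablet": -1.00}},   # serves desktop only
    {"name": "Search - Mobile/Tablet", "default_max_cpc": 0.45,
     "bid_adjustments": {"desktop": -1.00}},                    # serves mobile and tablets only
]

for c in campaigns:
    print(c["name"], c["default_max_cpc"], c["bid_adjustments"])
```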
Leaving all device types in one campaign leads to similar CPC bids on all channels. When mobile devices are separated out, a click there usually costs about twice as much as on desktop for a similar position on the results list. Mobile also has a higher CTR and usually a lower conversion rate (although it is systematically approaching the desktop CR).
Mobile/desktop segmentation often lets you save a dozen or so percent of the budget while keeping most of the revenue.
Misuse of remarketing lists
We've noticed that many accounts have remarketing lists for search ads (RLSA) active. This is fine - you just have to check whether remarketing isn't responsible for most of our conversions.
This is especially important if we have a strong brand and acquire customers through channels other than Google Ads.
It's worth considering whether it's better to invest in display remarketing for a few cents per click than to noticeably raise the bids on expensive queries in the Search Network.
There are many strategies for using RLSA, and they are very specific to each case. We found many strange things in this area that are worth analyzing: does RLSA for brand phrases make sense? Are we acquiring new customers or "our own", and is that an intentional action? Unfortunately, there are no universal answers.
It's worth noting that audiences can have a negative bid adjustment, that is, we can stop showing an ad on the Search Network to a user who, for example, was on our website today (or was there 100 times in the last week).
Another aspect is a completely wrong choice of audiences. Most of the time "visited the site" is chosen as the audience. Generally, it is much more efficient to target users who, for example, saw 3 subpages or spent 5 minutes on the site, etc.
User location, age, gender
Location
Excluding locations or adjusting bids to the user's location makes sense, first of all because of the reach of the offer. This was configured correctly in most accounts.
What was wrong was something else: in the case of Poland, some cities produce quite different results from the rest of the country - much higher CTR, more add-to-carts and higher CPCs. Most of the nationwide accounts didn't look at results broken down by the audience's city. The ROAS for large cities versus provinces could differ by 100%. It is worth observing locations and drawing conclusions from them.
Currently, Google allows this analysis even without actively observing the location in the campaign.
Click Campaign > Locations > Geographic report and add the conversion columns.
Age and gender
It's worth pausing to consider whether it makes sense to target the ads at the entire population. For most accounts a good idea is to exclude the youngest users, seniors and, above all, "unknown" users, that is, those about whom Google knows nothing, since they usually convert poorly.
To see the data on the effectiveness of age groups, select:
Campaign (or All campaigns) > Demographics > More > Combinations, and select age and gender. Add Conversions > Conversion rate and other columns to the table.
Relying on Google's automated strategies (ROAS, CPA)
Many campaigns are based on Google's automated bidding strategies. These strategies aren't bad; in most cases they achieve decent results. Their problem is that they limit the reach of the campaigns. CPA/ROAS campaigns tend to have an impression share around 40% lower than an analogous, well-profiled manual CPC campaign with almost identical results (CR, average CPC).
Which means that if you just want to run a profitable campaign, the automated strategies are fine. If we want to increase revenue (and of course profit), they aren't a good path at all.
One of the main problems of automated campaigns is the total lack of control over their behavior. This can be improved by deep segmentation (especially for PLA, using the SPAG architecture). For search it's good to split campaigns by device, location and other observed patterns, so that the ROAS/CPA targets can be set per segment (in theory Google does the same, but without the split we only have one shared budget).
Small reach, standard budget delivery, weak positions and low CTR
A low Quality Score, a low Ad Rank, the use of Google's automated strategies, and the standard (instead of accelerated) budget delivery method all mean that campaigns have a small reach compared to what is possible at the given cost per click/conversion. It means that campaigns receive only part of the conversions they could get from Google Ads.
Most of the campaigns we analyzed had an impression share below 40% despite having a budget sufficient for a higher share.
In general, a result of 70-80% is achievable without higher costs per click. This means that it is often possible to increase impressions by almost 100%.
The impressions must then be translated into clicks, and CTR is responsible for that, which in turn depends on the quality of the ad and its average position (and this depends on the Quality Score and the price per click).
Most of the analyzed accounts don't bother to increase CTR (they don't add more ads, they don't try other formats: short/long headlines, sitelink names, etc.).
Often you can raise CTR by many percentage points, which means that from the same traffic and the same position you get more clicks, which convert the same way as the others.
With the SKAG architecture it's easy to analyze the CTR of individual keywords against the CTR expected by Google in the given context.
It's also worth analyzing the impact of position on the cost per click and on the CTR. Google Ads doesn't offer this option, but with the help of our tools, for example, you can track it. In most cases positions 2 to 4 have a very similar CPC, but the second position has a higher CTR. This is worth monitoring, since the loss of impressions can be 50% (a reach of 40% vs 80%), and on top of that we lose again through low CTR (that is, for certain keywords we achieve only a fraction of the possible clicks).
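To see how these two losses compound, here is a small illustration with made-up numbers (the impression-share figures match those above, the CTR values are purely illustrative):

```python
# Clicks = possible impressions x impression share x CTR,
# so losses on impression share and on CTR multiply.
possible_impressions = 100_000

def clicks(impression_share: float, ctr: float) -> float:
    return possible_impressions * impression_share * ctr

achieved  = clicks(0.40, 0.02)  # 40% impression share, 2% CTR -> 800 clicks
potential = clicks(0.80, 0.04)  # 80% impression share, 4% CTR -> 3,200 clicks
print(achieved / potential)     # 0.25 -> only a quarter of the possible clicks
```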
Of course, each case is different and the result also depends on the competition, which changes over time. It is definitely worth analyzing, testing and constantly improving.
In conclusion
In the near future we plan to finish work on fully automated tools for such analyses. If you are interested in a semi-automatic analysis by our certified specialist, without any commitment, contact us.
#ads #sem #ppc #emarketing #marketing #googleads #adwords #googleadwords #roas #cpa #adspend