Benchmark 2022Q2 Results: Google
Oxford Biochronometrics
Intersecting e-commerce & cybersecurity to optimize customer acquisition costs
Hello Google, my old friend, I’ve come to search with you again. Because some fraudsters softly clicking left their leads while I was sleeping, and the contacts that were planted in my CRM still remain... within the scope of ad fraud.
The largest and probably best-known search engine is Google Search. Besides its search engine, Google is also one of the largest internet advertising companies. So, how well did Google perform in 2022Q2 when it comes to click fraud and lead generation fraud?
This article will reveal how well three of Google’s sources performed in 2022Q2. These sources are: DoubleClick for Advertisers (DFA), now known as Campaign Manager 360, Google Display Network (GDN), and Google Search. First, we’ll take a look at the aggregated (Google-level) click fraud%, lead generation fraud% and conversion%. Next, the results are broken down per individual source. The last part provides some background and historical information on why we see what we see and why others don’t, which explains how we arrive at these figures.
Google (based on the combination of the 3 traffic sources)
According to Oxford BioChronometrics, in total 17.62% of the clicks were flagged as fraud. The average and median click fraud% of the peer group are, respectively: 33.17% and 20.19%. The conversion% to a lead was 10.57%. Almost all leads (99.62%) were human, only a small portion was flagged as fraud (0.38%). The average and median fraud% of the peer group are, respectively: 23.64% and 2.42%.
Illustration 1. Donut visualizations of the fraud% for clicks and lead generation. The two large donuts show the group results; the smaller donuts show the results per individual source. Based on Google’s traffic (DFA, GDN, Google Search) in 2022Q2 as measured by Oxford BioChronometrics' SecureLead.
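To make the relationship between these percentages concrete, here is a small back-of-the-envelope sketch. It assumes the conversion% is taken over all recorded clicks and the lead fraud% over all generated leads; the click volume used below is purely hypothetical and is not part of the benchmark data.

```python
# Hypothetical worked example of how the reported percentages relate.
# The 100,000 click volume is made up for illustration; only the
# percentages come from the 2022Q2 Google-level figures above.
clicks = 100_000

click_fraud_pct = 17.62   # % of clicks flagged as fraud
conversion_pct = 10.57    # % of clicks that converted to a lead
lead_fraud_pct = 0.38     # % of generated leads flagged as fraud

fraudulent_clicks = clicks * click_fraud_pct / 100
leads = clicks * conversion_pct / 100
fraudulent_leads = leads * lead_fraud_pct / 100
human_leads = leads - fraudulent_leads

print(f"fraudulent clicks: {fraudulent_clicks:,.0f}")  # ~17,620
print(f"generated leads:   {leads:,.0f}")              # ~10,570
print(f"fraudulent leads:  {fraudulent_leads:,.0f}")   # ~40
print(f"human leads:       {human_leads:,.0f}")        # ~10,530
```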
DFA (now known as Campaign Manager 360)
First of all, DFA was the best-scoring lead generation source in 2022Q2. Its conversion ratio was 20% and it was the 4th largest in total volume. A well-deserved winner in the lead generation category!
According to Oxford BioChronometrics 13.01% of the clicks were flagged as fraud (ranked at 11). The average and median click fraud% of the peer group are, respectively: 33.17% and 20.19%. The conversion% to a lead was 20.01%. Almost all leads (99.98%) were human, only a few generated leads were flagged as fraud (0.02%). The average and median fraud% of the peer group are, respectively: 23.64% and 2.42%.
GDN
This quarter 20.19% of the clicks originating from GDN were flagged as fraud (ranked at 28). The average and median click fraud% of its peer group are, respectively: 33.17% and 20.19%. The conversion% to a lead was only 2.00%. This quarter 99.52% of the generated leads were human, the remaining 0.48% were flagged as fraud (ranked at 15). The average and median fraud% of the peer group are, respectively: 23.64% and 2.42%.
Google Search
This quarter 19.65% of the clicks originating from Google Search were flagged as fraud (ranked at 25). The average and median click fraud% of its peer group are, respectively: 33.17% and 20.19%. The conversion% to a lead was 8.31%. This quarter 98.97% of the generated leads were human, the remaining 1.03% were flagged as fraud (ranked at 24). The average and median fraud% of the peer group are, respectively: 23.64% and 2.42%.
Why is it important for you to know who your real visitors are?
Marketing campaigns run by our clients face the twin challenge of getting enough volume and configuring the campaign to target exactly the right audience. Set the targeting too broad and you show your advertisements to a largely uninterested audience, though you do get the desired volume; set it too strict and you hit your audience spot on, but you eventually run out of people and/or the campaign causes ad fatigue in your audience.
Once the marketing team has found the right balance between volume and campaign configuration, they need to be sure that the traffic is really human and that they are not paying for bots and fraudsters. The larger their demand for traffic, the harder it becomes for the traffic supplier (the source) to provide quality traffic, as eventually they run out of humans. When a campaign does not perform as expected, there is always some guesswork involved: Was it the creative? The campaign configuration? The landing page? Or something else? Even if you run one or more A/B tests within your campaign, on your lead generation forms, or in your web shop, the answer isn’t always obvious. If 15% - 20% of your clicks are fraudulent and the difference between the A and B variants falls within that range, how reliable is your result? And to whom, or rather, to what are you optimizing? The sketch below illustrates the point.
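As a rough illustration of how fraudulent clicks can distort an A/B test, here is a minimal simulation. All numbers (visitor counts, true conversion rates, bot behaviour) are hypothetical assumptions for the sketch, not measured values from the benchmark.

```python
import random

random.seed(42)

def simulate_variant(n_visitors, human_cvr, fraud_share, bot_cvr):
    """Simulate observed conversions for one A/B variant.

    A fraud_share fraction of visitors are bots that "convert" (e.g. submit
    junk leads) at bot_cvr instead of the true human rate human_cvr.
    """
    conversions = 0
    for _ in range(n_visitors):
        if random.random() < fraud_share:
            conversions += random.random() < bot_cvr      # bot traffic
        else:
            conversions += random.random() < human_cvr    # human traffic
    return conversions / n_visitors

# Hypothetical example: variant B is genuinely better for humans (6% vs 5%),
# but variant A happens to attract bots that fill in the form aggressively.
a = simulate_variant(n_visitors=10_000, human_cvr=0.05, fraud_share=0.18, bot_cvr=0.20)
b = simulate_variant(n_visitors=10_000, human_cvr=0.06, fraud_share=0.02, bot_cvr=0.20)

print(f"Observed conversion rate A: {a:.3%}")  # inflated by bot submissions
print(f"Observed conversion rate B: {b:.3%}")  # closer to the true human rate
# With ~18% bot traffic on variant A, the observed "winner" can easily be
# the variant that is actually worse for real humans.
```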
Fraudsters, on the other hand, know very well when their scheme stops performing. Once they are detected, their income stream immediately dries up, so they will immediately start tweaking and updating their code, or simply move to another BaaS (bot-as-a-service) provider, in order to continue their business. To them it is very clear why it stopped working.
Traffic originates from suppliers such as Google’s DFA, GDN, and Google Search, but also from mailing lists, floating QR codes, comparison websites, etc. These suppliers sit between the clients (who demand customers) and the people browsing the Internet (who supply those customers). Depending on the media type, they are remunerated per impression, click, generated lead, or action. This creates a conflict in priorities: they are paid by volume, but the clients want volume AND quality, not just volume. This is the classic principal-agent problem!
That’s why traffic is monitored and validated upfront by the supplier. But, judging by the fraud percentages in each of the quarterly benchmark reports, this evidently isn’t enough! The explanation is something we call “the gap”: the time between fraudsters starting their fraud scheme and the detection of that scheme. To be clear, fraudsters only start when they are absolutely convinced that they are not being detected, because to them it is a life-or-death scenario. Once started, the gap lasts until the moment the fraud is detected and they are cut off. The sketch below gives a rough sense of what the gap can cost.
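A minimal sketch of how the cost of the gap scales, using purely hypothetical spend figures and fraud rates (none of these numbers come from the benchmark data above):

```python
# Hypothetical cost-of-the-gap calculation. Daily spend, fraud share and
# gap length are made-up inputs for illustration only.
def wasted_spend(daily_ad_spend, fraud_share, gap_days):
    """Spend that goes to fraudulent traffic while the scheme runs undetected."""
    return daily_ad_spend * fraud_share * gap_days

# Detected within a week vs. running undetected for a quarter or a year.
for gap_days in (7, 90, 365):
    loss = wasted_spend(daily_ad_spend=1_000, fraud_share=0.18, gap_days=gap_days)
    print(f"gap of {gap_days:>3} days -> ~${loss:,.0f} spent on fraud")
# gap of   7 days -> ~$1,260 spent on fraud
# gap of  90 days -> ~$16,200 spent on fraud
# gap of 365 days -> ~$65,700 spent on fraud
```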
If you have poor or mediocre fraud detection, this gap can be multiple months, a year, or, even worse, the fraud may never be detected. For some evidence of this claim, take a look at the following two illustrations.
Illustration 2. Two charts. The left chart shows last year's situation (2021Q2): the % human clicks on the x-axis vs. the % human generated leads on the y-axis. The right chart shows the same for 2022Q2. The three Google sources are colored red and labeled. The size of each bubble represents the traffic volume. The unlabeled grey bubbles are other sources, shown for reference only.
Both illustrations show the percentage of humans per click on the x-axis and the percentage of humans per generated lead on the y-axis. As you can see, DFA scored poorly in 2021Q2 with only 2.18% humans. In this case a third-party verification (Oxford BioChronometrics' SecureLead) was needed to detect the fraud. Without it, the campaign would have been a big loss for our client, because traffic suppliers simply don’t believe you when you claim that ~98% of their clicks were fraudulent; they counter with arguments like “your campaign parameters were too broad”. In this case, the evidence saved a lot of money, and the best part is that in each subsequent quarter the fraud percentage of the DFA source has been lower and its quality (conversion ratio) has improved.
This example shows that fraud detection is, unfortunately, necessary, but also that it does not cause additional costs. On the contrary, it saves you a lot of incorrectly spent money, gives you the ability to act when fraud appears, prevents fraudulent leads from being followed up with all the legal consequences that entails (TCPA claims), and keeps your customer and campaign data from being polluted with random contact details submitted by fraudsters. And these are just the benefits in the lead generation world. In the e-commerce world it prevents you from processing fraudulent transactions submitted by credit card testers, which prevents chargebacks from the original card holders, which in turn protects your reputation as a merchant with the processing network, which again prevents a surcharge in transaction costs to cover the additional risk of potential chargebacks. And finally the most important one: a better merchant reputation, thanks to a lower risk of chargebacks, increases your transaction approval rate. For a merchant, declined transactions are expensive: you pay the transaction fee and also lose the potential customer, who now goes to your competitor. Ah, and before you start thinking “but… but... we do have fraud detection at the transaction level”: we are aware of that. But how well does it cope with fresh credit card and contact details from data leaks sold on the dark web? Yes, those checks are necessary, but they are not enough to protect you from this type of fraud.
This all probably makes you wonder: why does this gap even exist? Why did you see what one of the biggest tech companies, with more than enough budget to hire the greatest engineers in the world, did not see? So, what is different about Oxford BioChronometrics' fraud detection? First, let’s exclude some methods and techniques we don’t use: we don’t rely on IP blacklists or device fingerprints, because fraudsters can rent residential proxies to circumvent IP blacklists, and they have special browsers that let them change their device fingerprint at will, i.e. for each generated lead. This burning through IP addresses and device fingerprints explains why the gap is real. If you rely on fraud detection that only catches fraudsters reusing IP addresses and device fingerprints, you’re playing with fire. The sketch below illustrates why such a blocklist approach fails against rotating identities.
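As a purely illustrative sketch (not Oxford BioChronometrics' actual detection, and not any real vendor's API), here is why a naive blocklist keyed on IP address and device fingerprint stops working once the attacker rotates both for every submission:

```python
import random
import uuid

# Naive blocklist-style detector: flags a lead only if its (IP, fingerprint)
# pair has been seen in previously confirmed fraud. Purely illustrative.
known_bad: set[tuple[str, str]] = set()

def is_flagged(ip: str, fingerprint: str) -> bool:
    return (ip, fingerprint) in known_bad

def submit_fraudulent_lead() -> tuple[str, str]:
    """Attacker model: a fresh residential-proxy IP and a freshly
    randomized browser fingerprint for every single lead."""
    ip = f"10.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
    fingerprint = uuid.uuid4().hex  # stand-in for a spoofed device fingerprint
    return ip, fingerprint

caught = 0
for _ in range(1_000):
    ip, fp = submit_fraudulent_lead()
    if is_flagged(ip, fp):
        caught += 1
    else:
        # The lead sails through; only later (if ever) is it confirmed as
        # fraud and added to the blocklist -- by then the identity has
        # already been discarded by the attacker.
        known_bad.add((ip, fp))

print(f"caught by the blocklist: {caught} / 1000")  # effectively 0
```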
So, what is the difference?
When Oxford BioChronometrics was founded we didn’t know much about digital marketing, lead generation, or the ecosystem in which lots of companies make their money providing digital marketing services. We only knew how to program in multiple languages, automate browsers, capture and analyze network traffic, reverse engineer code, build infrastructure, create bots and patch anything revealing their true identity, scrape websites at scale while bypassing other vendors' bot detection, etc. That’s what we knew. Over time we learned how digital marketing works and what kind of detection would be most useful to this digital ecosystem. Combine this with a lot of multi-discipline creativity and you get a multi-layer detection mechanism similar to the human immune system: pattern recognition receptors that recognize what is good and what is bad. We see the “micro expressions” or “tells” within each monitored session; even if a fraud scheme is completely new it is still detected and flagged, and of course, if the detection is not 100% sure, the session is parked for human inspection. A minimal sketch of this layered idea follows below. Try us and see for yourself.
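To make the layered idea concrete, here is a deliberately simplified, hypothetical sketch. It is not Oxford BioChronometrics' actual detection logic; the layer names, signals, and thresholds below are invented purely to show the shape of a multi-layer scorer with a human-review fallback.

```python
# Hypothetical multi-layer scoring sketch with a human-review queue.
# Every signal name and threshold here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Session:
    events: dict = field(default_factory=dict)  # behavioural signals per session

def layer_timing(session: Session) -> float:
    """Score 0 (human-like) .. 1 (bot-like) based on inter-event timing."""
    return 1.0 if session.events.get("uniform_keystroke_timing") else 0.1

def layer_interaction(session: Session) -> float:
    """Score based on whether pointer/scroll behaviour looks scripted."""
    return 0.9 if session.events.get("no_pointer_movement") else 0.2

def classify(session: Session) -> str:
    # Each layer votes independently; the combined score decides the outcome.
    scores = [layer_timing(session), layer_interaction(session)]
    score = sum(scores) / len(scores)
    if score >= 0.8:
        return "fraud"
    if score <= 0.3:
        return "human"
    return "human_review"   # not 100% sure: park for manual inspection

print(classify(Session({"uniform_keystroke_timing": True, "no_pointer_movement": True})))  # fraud
print(classify(Session({})))                                                               # human
print(classify(Session({"no_pointer_movement": True})))                                    # human_review
```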
Disclaimer:
These results are based on analysis of data collected on behalf of our clients. Other configurations and campaigns may produce different results based on a variety of factors, including, but not limited to, different sample sizes, audience, targeted geo, and seasonal effects. We therefore make no claims regarding the overall performance of any particular traffic source.