Ask not what % went to bots, ask what percent was shown to humans

For the last decade, digital marketers have paid for fraud verification in hopes of avoiding bots and fraud. But the legacy fraud verification vendors they used couldn't detect most of the fraud. How is this possible?

1. Bots blocked their detection tags ("verification stripping")

The code snippets below were captured in 2013. They show bots actively looking for Moat tags (domain = moatads.com) and IAS tags (domain = adsafeprotected.com) and stripping out those tags to avoid detection. If these vendors had no data, they could not mark the bot as IVT ("invalid traffic"). That's why the IVT percentages they have reported for the last 10 years have been so low, around the 1% range.
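The original snippets were captured as screenshots, so the sketch below is only an illustration of the technique, not the actual 2013 bot code: a script that scans the page for tags served from the verification domains above and removes them before they can load and report anything.

```typescript
// Illustrative sketch only -- NOT the actual 2013 bot code, which was
// captured as screenshots. It shows the general technique: find script
// tags served from known verification domains and strip them out so the
// vendor never receives any measurement data for this page.
const VERIFICATION_DOMAINS = ["moatads.com", "adsafeprotected.com"];

function stripVerificationTags(): void {
  document.querySelectorAll<HTMLScriptElement>("script[src]").forEach((tag) => {
    // Remove any tag whose source is hosted on a verification domain.
    if (VERIFICATION_DOMAINS.some((domain) => tag.src.includes(domain))) {
      tag.remove();
    }
  });
}

// Verification tags are often injected after page load, so a bot would
// also watch for late-arriving tags and strip those too.
new MutationObserver(stripVerificationTags).observe(document.documentElement, {
  childList: true,
  subtree: true,
});
stripVerificationTags();
```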

Even DoubleVerify mentioned the "well-documented occurrence of verification stripping" in one of their press releases: https://ir.doubleverify.com/news-events/press-releases/detail/190/doubleverify-exposes-viperbot-a-new-global-fraud

These low IVT numbers are what trade associations like the ANA (Association of National Advertisers) and TAG (Trustworthy Accountability Group) have cited in press releases for the last 8 years (table below).

After I told them these numbers were incorrect and severely under-reported, they tried to publicly discredit me. See these tweets from Mike Zaneis, the CEO of TAG (the certification body created by the ANA, IAB, and 4A's to hand out "certifications against fraud").


2. Bots easily reverse-engineered their detections

Bots not only blocked the detection tags from loading ("verification stripping"); they also easily reverse-engineered the detections themselves in order to avoid getting labeled as invalid traffic. How is this possible? In the slide below, you can see urlscans of FouAnalytics, IAS, and DV tags. The FouAnalytics tag (leftmost) is obfuscated and does not expose the detection code in the clear, unlike the IAS and DV tags (middle and rightmost, respectively). Leaving the detection code in the clear makes it easy for botmakers to understand what the vendor is looking for when detecting bots, and therefore easy for them to put workarounds in place to avoid or trick those detections. That's why the IVT numbers reported by these vendors have been so low for so long. It's not that the bots were not there; it's that the bots were more clever than the detection tech and avoided getting marked as IVT.
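To see why in-the-clear detection code is such a liability, consider this hypothetical sketch. The specific signals shown (navigator.webdriver and window geometry) are my own assumptions of commonly checked signals, not taken from any vendor's actual tag; the point is that whatever a readable tag checks, a botmaker can spoof.

```typescript
// Hypothetical sketch: if a bot operator can read a detection tag and see
// which signals it checks, the workarounds are trivial. These two signals
// are assumed examples of common checks, not any vendor's real logic.

// Report "not automated" to any script that reads navigator.webdriver.
Object.defineProperty(navigator, "webdriver", { get: () => false });

// Fake a plausible window geometry so a headless browser looks like a
// normal maximized desktop browser.
Object.defineProperty(window, "outerWidth", { get: () => 1920 });
Object.defineProperty(window, "outerHeight", { get: () => 1080 });
```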

FouAnalytics has cybersecurity measures built in to prevent bad guys from tampering with the code or figuring out how we detect bots. For more information, have a look at the following article.


3. Undisclosed sampling means they missed the bots that were there

What if your vendor was sampling 1 in 100? That means they didn't measure 99 in 100 ad impressions. Don't you think it's easy for bots to go undetected if they were among the 99 out of 100 impressions that were not measured? Right. Of course. Why didn't advertisers know this was happening? Because advertisers were paying "full bore" for all the impressions, whether they were measured (with a JavaScript tag) or not. How do you know if this is happening to you? Simple. Just ask your verification vendor for a report that shows BOTH quantities: 1) the quantity of impressions you paid them for, and 2) the quantity of impressions measured with a JavaScript tag. If this ratio is 1 in 10, or 1 in 100, or 1 in 1,000, then you understand that bots could be hiding in the other 9 in 10, 99 in 100, or 999 in 1,000 impressions and going undetected as IVT. That's why the IVT percentages reported by legacy verification vendors and parroted by trade associations have been low for the last 8 years.
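Here is that coverage check as a worked example. The numbers are hypothetical; substitute the two quantities from your own vendor report.

```typescript
// Hypothetical numbers -- replace with the two quantities from your
// vendor's report: impressions billed vs. impressions actually measured.
const impressionsPaidFor = 1_000_000; // what you were billed for
const impressionsMeasured = 10_000;   // impressions where a JavaScript tag fired

const coverage = impressionsMeasured / impressionsPaidFor; // 0.01 = 1 in 100
const unmeasured = impressionsPaidFor - impressionsMeasured;

console.log(`Coverage: ${(coverage * 100).toFixed(1)}% of paid impressions measured`);
console.log(`Bots could hide in the other ${unmeasured.toLocaleString()} impressions`);
```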

Advertisers, agencies, and publishers are now realizing that the fraud and brand safety problems these vendors failed to detect have been there all along, and that these vendors have not been protecting their customers.

The failures of the legacy verification vendors have been well documented for years.

2024 - https://www.wsj.com/business/media/brands-paid-for-ads-on-forbes-com-some-ran-on-a-copycat-site-instead-c01609ef

2023 - https://www.wsj.com/articles/google-violated-its-standards-in-ad-deals-research-finds-3e24e041

2022 - https://www.wsj.com/articles/ad-tech-firms-didnt-sound-alarm-on-false-information-in-gannetts-ad-auctions-11651665602


So what?

If you now understand how it's possible for the legacy verification vendors to miss the majority of the bots and fraud that WAS there all along, you might be wondering what you can do about it. I am not here to pitch you FouAnalytics. In fact, I have said repeatedly that you can solve most of ad fraud for free yourself, without using FouAnalytics; see:

But let me show you how FouAnalytics can help you do even better than just avoiding the fraud. As I said at the beginning of this article, "For the last decade, digital marketers have paid for fraud verification in hopes of avoiding bots and fraud." Once you have adopted best practices like 1) turning off audience networks and 2) using inclusion lists, you will have avoided 90% of the fraud at no cost -- i.e., you don't need to pay for verification vendors that didn't detect the bots in the first place.

Once you have minimized the red, you can then move on to maximizing the blue (humans, in FouAnalytics). Note that the legacy verification vendors don't measure for humans; they just measure invalid traffic. But if it's not IVT, you can't just assume it's human; you have to actively measure for that. FouAnalytics was built from the ground up to detect both bots (dark red) and humans (dark blue). You can use it to optimize away from fraud and bots AND to optimize towards humans. The following chart shows a perfect example of a CPG advertiser systematically optimizing towards humans (more dark blue) and proving it with FouAnalytics.

[Chart: FouAnalytics OLV video ads measurement]

With FouAnalytics, you can also optimize for high attentiveness. Attentiveness is the percentage of users that took further action when they arrived on your site. For example, the high-attentiveness example below (left side) shows that 66-69% of the humans (dark blue dots) clicked something after they arrived on the site. In contrast, only 7% of the users arriving from a different paid media channel (right side) did something on the site (low attentiveness). Attentiveness already "includes" viewability and attention, because the ad had to have been viewable, and the user had to have paid attention to it, to be inspired by the ad creative to click through to your site. Once they are there, the attentiveness percentage gives you clues about how well your ads worked.
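In case it helps, here is attentiveness as a simple ratio, sketched with hypothetical field names. This is my illustration of the metric as described above, not the FouAnalytics implementation.

```typescript
// Sketch with hypothetical field names -- not the FouAnalytics code.
// Attentiveness: of the visitors who arrived from a given channel, what
// percentage took any further action (e.g., clicked something) on site?
interface Visit {
  channel: string;            // paid media channel the visitor came from
  tookFurtherAction: boolean; // did they click/interact after landing?
}

function attentiveness(visits: Visit[], channel: string): number {
  const arrived = visits.filter((v) => v.channel === channel);
  if (arrived.length === 0) return 0;
  const active = arrived.filter((v) => v.tookFurtherAction).length;
  return (active / arrived.length) * 100; // e.g., 66-69% vs. 7% above
}
```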


"Don't just optimize away from bots; optimize towards humans too"

Happy Saturday Y'all.


Feel free to share out if you think someone else needs to see this. And for more case examples and screenshots from FouAnalytics, see and subscribe: https://www.dhirubhai.net/in/augustinefou/recent-activity/newsletter/




Matthew Sell

Strategic Marketing Leader | Comprehensive SEO, SEM & AI-Driven Growth Strategies | B2B B2C D2C | MBA

3 weeks

It's really important to figure out how much actually reaches people, especially as brands invest heavily in digital. Seeing legacy verification systems fall short is an eye-opener, and it reminds us in marketing that more transparent, robust solutions are necessary. For ad integrity to be truly protected and for real customer engagement to be maximized, there's a need to shift. Thanks for shedding light on this important topic, Dr. Augustine Fou. FouAnalytics seems like a promising approach.
