Legacy fraud verification vendors didn't just fail to detect, they failed to measure

"Trust us, we're accredited" or "Sorry, we can't reveal any details of how we measured the fraud because we don't want the bad guys to find out."

We've all heard that before, right? Right, from the legacy fraud verification companies when they are asked about their detection, or failure to detect, as it were. Eleven years ago, I decided not to become a fraud detection tech company because I realized the inherent conflict -- those companies rely on fraud continuing so they can keep making money. I am a digital marketer first, and I want to solve ad fraud, not prolong it. I called myself an ad fraud researcher and used my own tools to help advertiser clients audit campaigns and mitigate fraud. I didn't trust anyone else's tech or data, so I built my own tools. For the first eight years, I didn't let anyone else log in and use them; I took screenshots from the platform and delivered my recommendations via PowerPoint.

But in 2020, I gave it a name, "FouAnalytics," and opened up the platform for others to use themselves. My job was to teach others "how to fish" so they could become self-sufficient in looking for fraud and mitigating it in their specific circumstances. Over the years, I have seen hundreds of cases, almost all of them different in their own way. In most of those cases, the advertiser was already paying for fraud detection but was still not protected. In fact, the advertiser was misled into believing that fraud was low -- around 1% -- not because fraud was actually that low, but because the verification vendor failed to detect it. It is entirely incorrect to assume that IVT ("invalid traffic") is all the fraud there is, but that is exactly what the legacy vendors' tech is tuned to look for -- bots and fake traffic -- so they miss all the other forms of cheating and fraud that do not involve bots.


The failures of legacy fraud verification

By SO severely under-reporting fraud, these vendors have not only misled their own customers; their failures have fueled a decade of explosive growth in ad fraud. Trade associations, with no tech of their own, no data, and very little knowledge, cited these vendors' reports in annual press releases, further fueling the cover-up. Beyond misrepresenting the efficacy of their products, their bad tech has led to a decade of harm to advertisers pouring money into digital, and to real publishers, who saw a decade of declining ad revenue and margin compression. By failing to label outright fraud sites as more than 1% fraud, the legacy verification vendors put those sites on the same footing as legitimate mainstream publishers, whom they also labeled as 1% IVT.

If both fake sites and mainstream publishers' sites were reported as 1% bots, who could blame advertisers for choosing the cheaper option? This specific failure of the legacy verification vendors -- the failure to detect more than 1% in virtually all cases -- directly led to ad dollars flowing away from legitimate publishers toward programmatic channels, chasing low-CPM inventory that buyers believed was of the same quality as mainstream publishers' ad inventory.


False positives, incorrectly blaming good publishers

Years ago, I had to help mainstream publishers that were falsely accused of high IVT by these verification vendors due to incorrect measurements. Esquire.com, reuters.com, foodnetwork.com, and others did not have 70+% bots/IVT on their sites. Yet they were marked as high IVT and their domains were blocked by these same vendors. This, and subsequent examples, revealed that these legacy vendors took the domain passed in the bid request and assumed it was the domain on which the ad ran. In the esquire.com case, a fake site was pretending to be esquire.com -- spoofing -- by putting esquire.com in the bid request. The ad never ran on esquire.com. So these vendors incorrectly blamed the legitimate publisher and demonetized them, while at the same time failing to prevent the ad and the dollars from going to the fraudster pretending to be esquire.com.


Another example of this failure was documented in early 2022, when Adalytics exposed domain mismatches in billions of bid requests coming from sites owned by the newspaper giant Gannett over a period of at least nine months. For example, bid requests originating from local newspaper sites like seacoastonline.com contained usatoday.com in the domain field instead, and vice versa. This was a technical glitch on Gannett's part, but fraudsters deliberately do exactly the same thing -- put a different domain in the bid request than the fraudulent domain from which it originated. The failure of ALL the verification vendors to detect the Gannett mistake means they continue to fail to detect simple domain spoofing (an incorrect domain in the bid request). How is this possible? It's because IAS and DV do not run javascript tags to detect the actual domain and confirm that it matches the domain declared in the bid request. They simply accept the domain in the bid request as the domain the ad will eventually run on. Obviously, this is incorrect.
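To make the missing check concrete, here is a minimal sketch, assuming an on-page measurement tag reports the hostname it actually loaded on (in JavaScript, document.location.hostname) and that the declared domain is taken from the bid request. The function and field names are illustrative, loosely modeled on OpenRTB's site.domain; this is not any vendor's actual implementation:

```python
# Minimal sketch: reconcile the domain declared in a bid request against the
# hostname an on-page measurement tag actually observed. Names are illustrative.

from urllib.parse import urlparse

def normalize_host(url_or_host: str) -> str:
    """Reduce a URL or bare hostname to a lowercase hostname without 'www.'."""
    if "://" not in url_or_host:
        url_or_host = "//" + url_or_host  # let urlparse treat it as a netloc
    host = urlparse(url_or_host).hostname or ""
    return host.lower().removeprefix("www.")

def is_spoofed(declared_domain: str, observed_page_url: str) -> bool:
    """True when the bid request's declared domain does not match the page
    the measurement tag actually saw the ad render on."""
    return normalize_host(declared_domain) != normalize_host(observed_page_url)

# A fake site declaring esquire.com in the bid request while the ad
# actually renders on the fraudster's own page:
print(is_spoofed("esquire.com", "https://cheap-clickbait.example/article?id=9"))  # True
print(is_spoofed("esquire.com", "https://www.esquire.com/style/"))                # False
```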

Still don't believe that something SO simple was missed by vendors that raised millions of dollars in venture funding and are now worth billions as public companies?

Remember Sportsbot from 2017, or 404bot from 2020? Would you believe that there was no giant botnet hitting publisher pages in either case? It was all spoofed domains passed in bid requests. The data scientists at the first verification firm saw tens of billions of bid requests with domains from sports teams or sites like DallasCowboys.com, NFL.com, MLB.com, ESPN.com, etc. Those bid requests were obviously fake, but there was no bot of any kind on those websites. How do I know? I had FouAnalytics tags on several of the major sites and noted only 1 - 3% bot traffic. The point is that no giant botnet is required to load those pages in order to generate bid requests with those domains in them. Python scripts in a data center can simply assemble bid requests with 100% falsified data and send them into the ad exchanges, seeking bids from advertisers not protected by their pre-bid verification vendor.

The exact same error occurred in the 404bot example. It was called 404bot because a 404 error, in server-speak, means "page doesn't exist." The fraudsters were passing page URLs in the bid request (to make it seem legitimate because of the specificity). When interns at the verification company tried to visit those pages, they encountered 404 errors. None of those pages existed on mainstream sites like marthastewart.com, because the URLs were entirely fabricated by the fraudster and passed in the bid request. The fraud did not occur on marthastewart.com, and there was no giant botnet hitting marthastewart.com. These vendors didn't understand how the fraud occurred, so their interpretation and explanation of it were incorrect. When pressed on this, they fall back on "trust us, we're MRC accredited."
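For illustration only, the 404 observation boils down to a very simple check: does the page URL declared in the bid request resolve to a real page at all? The sketch below assumes the `requests` library and a hypothetical helper name; it is a sketch of the idea, not a detection product, since a fraudster can just as easily serve a fake page that returns 200:

```python
# Rough sketch of the observation behind "404bot": page URLs passed in bid
# requests pointed at articles that never existed on the real sites. This only
# illustrates the idea; real verification needs many more signals than HTTP status.

import requests

def declared_page_exists(page_url: str, timeout: float = 5.0) -> bool:
    """Check whether the page URL declared in a bid request resolves at all."""
    try:
        resp = requests.head(page_url, allow_redirects=True, timeout=timeout)
        return resp.status_code != 404
    except requests.RequestException:
        return False

# A fabricated article URL of the kind passed in 404bot bid requests:
# declared_page_exists("https://www.marthastewart.com/fabricated-article-123")  # -> False
```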


False negatives, failure to measure or detect

So, is all this crap still happening? Surely these verification vendors would have updated their detections, given years and millions of dollars of funding and revenue. Alas, there's no evidence that their tech has gotten better, or ever worked at all. Last month, Adalytics exposed that Google's YouTube had been selling video ads misrepresented as TrueView ads. The video ads were non-skippable, muted (sound off), and ran off-page. The vast majority of this problem occurred when the ads ran outside of YouTube on video partner sites and mobile apps, collectively called GVP (the "Google Video Partners" network). When caught, Google denied it AND hid behind IAS and DoubleVerify, saying, in effect, "look, these vendors said the ads were viewable, audible, and not IVT." The important point that came to light from this latest scandal was that the verification vendors were not truly independent. In fact, they were not "independent" at all, because they were given the data by YouTube, through ADH ("Ads Data Hub"), and merely "performed calculations" on that data and "provided reporting" on viewability, audibility, and fraud. None of these vendors measured anything on YouTube or GVP with their own javascript tag, so this was not independent measurement. It was a deliberate cover-up, with the illusion of outside vendors providing the reporting even though the data was entirely supplied by YouTube. Most advertisers now also realize that Google is a paying customer of these verification and viewability vendors, so these vendors are not going to embarrass or upset their own customer by revealing problems like video ads that ran muted, off-page, auto-played, and non-skippable. In this case, I don't even blame IAS and DV for the failure to detect, because they were given data that was insufficient to detect the issues.

But I WILL blame these vendors for assuming that "no data" equals "no fraud." In this case, the failure to MEASURE is a greater sin than the failure to DETECT. What the heck do I mean? Let me show you, using their own reports.

Note the first row. The "delivery site" is marked as "N/A." That means this vendor did not know where the ad actually ran. Despite this, they labeled the entire row as "100% fraud free," or "0% fraud." This is incorrect. The second thing to note in the spreadsheet above is the pair of columns, "measured impressions" versus "monitored ads." Do you know the difference? Have you ever seen this distinction in the reports they send you? Probably not. Most of the time, these vendors report the column called "monitored ads." In this example, there are 781 million, nearly 800 million, monitored ads. "Measured impressions" means they actually ran a javascript tag to measure for fraud. Note the number here: 83 million. That's 11% of the nearly 800 million ads monitored. To be clear, this vendor only measured about 1 in 10 ads with their javascript tag. The other 9 in 10 ads were not measured with a javascript tag at all. Sounds a lot like the YouTube example, where they measured ZERO ads with a javascript tag of their own. Even though they failed to MEASURE 9 in 10 ads, they still reported to their customers that all 781 million impressions were 0% fraud. Do you now see why I said the failure to MEASURE is a greater sin than the failure to DETECT? To me, this is tantamount to lying to your own customers.
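Here is what honest reporting of those same numbers would look like, as a minimal sketch. The figures are the approximate ones from the report above (781 million monitored, 83 million measured), and the function name is hypothetical:

```python
# Minimal sketch of why "monitored" and "measured" must be reported separately:
# a fraud rate only applies to the measured portion; the rest is unknown, not zero.

def honest_fraud_summary(monitored: int, measured: int, fraud_in_measured: int) -> str:
    coverage = measured / monitored
    fraud_rate = fraud_in_measured / measured if measured else float("nan")
    unmeasured = monitored - measured
    return (
        f"Measured {coverage:.1%} of monitored ads. "
        f"Fraud among the measured portion: {fraud_rate:.1%}. "
        f"The other {unmeasured:,} ads ({1 - coverage:.1%}) were never measured; "
        f"their fraud rate is UNKNOWN, not zero."
    )

print(honest_fraud_summary(monitored=781_000_000, measured=83_000_000, fraud_in_measured=0))
# Measured 10.6% of monitored ads. Fraud among the measured portion: 0.0%.
# The other 698,000,000 ads (89.4%) were never measured; their fraud rate is UNKNOWN, not zero.
```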


What next?

These failures are not just recent; they have been consistent failures for the last decade -- what I call the "lost decade of digital marketing." But these failures have come under direct sunlight more frequently of late -- like the MFA sites making up 21% of programmatic impressions that these vendors failed to alert their customers to or protect them from.

Even though I have shown countless examples, publicly and privately to clients, that these vendors' tech sucks, it can still be hard to believe. So let me suggest asking them two far simpler questions:

  1. How many of my ads did you actually measure with javascript tags?
  2. How many ads did you bill me for?

For question 1, if they measured 1 in 10 or 1 in 100 ads, and failed to measure the other 9 in 10 or 99 in 100, they should NOT be reporting only 1% fraud to you. They cannot assume that what they didn't measure was not fraud, knowing how agile and advanced the bad guys are. And if they didn't measure it, why should they bill you for it? There's enough evidence over the years that these vendors overbilled customers that agency holding companies are now going back to audit what they were billed against how many ads even ran. For question 2, in your own case, ask them how many ads you were billed for. One of my clients was billed for 6 billion impressions by their verification vendor when they only ran 4 billion impressions in total last year. To make matters worse, this vendor only measured 1 in 10 ads with their javascript tag. The advertiser plans to sue the verification vendor to get their verification fees back. But consider the media cost wasted, and the entire year wasted -- ads and dollars going to fake sites and apps that the vendor failed to mark as fraud. The advertiser didn't realize their ads were not shown to humans, and they didn't get any business outcomes from all that ad spend, even though these vendors kept telling them there was little to no fraud.
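As a sketch of the reconciliation implied by question 2, compare the vendor's billed impression count against what your ad server logs say was actually delivered. The round figures below mirror the client example above, and the function name is hypothetical:

```python
# Simple billing reconciliation sketch: a vendor cannot justify billing for
# more impressions than were ever delivered.

def billing_gap(billed_impressions: int, served_impressions: int) -> str:
    overage = billed_impressions - served_impressions
    if overage <= 0:
        return "Billed impressions do not exceed delivered impressions."
    return (
        f"Billed for {billed_impressions:,} impressions but only "
        f"{served_impressions:,} ran: {overage:,} "
        f"({overage / served_impressions:.0%} over) cannot be accounted for."
    )

print(billing_gap(billed_impressions=6_000_000_000, served_impressions=4_000_000_000))
# Billed for 6,000,000,000 impressions but only 4,000,000,000 ran:
# 2,000,000,000 (50% over) cannot be accounted for.
```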

What will YOU do, given the failures of the verification vendor you've been paying for years? If you need assistance, please direct message me.




Comments

Dr. Augustine Fou, FouAnalytics - "see Fou yourself" with better analytics:
Advertisers, what are you paying legacy fraud verification vendors for? Are you getting what you paid for?

Hakim Dyadi, Digital Marketing Consultant, "ad fraudless" minded:
You masterfully nailed it!

Benjamin Morgan, Senior Quantitative Portfolio Manager for Accredited Investors, Data Analyst, A.I. Engineer:
You're the Jim Chanos & Robert Shiller of the Ad World. Much love, Dr.

Jacques Warren, Dernier droit - Last Stretch:
Dr. Augustine Fou - Ad Fraud Researcher: You have discussed how complacent some marketers are about their budgets wasted on fraudulent ads, and how media agencies wash their hands with the likes of IAS. Then whom do we need to convince to use FouAnalytics, if neither really cares? Unless, thanks to your tremendous efforts over the last decade, more marketers now want to involve an independent third party.
