Why Does Digital Marketing Appear to Perform So Well and Fraud Appear So Low?
Over the last ten years, I have warned about ad fraud and written about it with data and case studies. But until very recently it was tough to convince advertisers that fraud was real and that it affected them. It certainly didn't help that trade associations like the Association of National Advertisers published press releases every year (note: I don't consider those research) saying fraud was low (less than 1%) and that their programs had solved it. Advertisers wanted to believe that fraud was low, so it was easy to believe what the ANA said. What I was telling them, however -- that fraud was high -- was hard to believe. So most chose not to believe it, despite the data I showed them. They would even point out that their digital campaigns appeared to be "more performant" than any other form of advertising they had ever done.
This article will show you why digital marketing appeared to perform so well and fraud appeared so low over the last decade -- and why, in reality, digital marketing was not performing well at all and fraud was not low.
Bots, and fraudsters, are good at avoiding detection
When I mention "ad fraud," most people think of botnets. That's fair, but bots are only one tool among many that fraudsters use to make money under false pretenses. One of the simplest forms of ad fraud is CPM ("cost per thousand") impression fraud, where bots just have to generate ad impressions to make money. There's also CPC ("cost per click") click fraud. Bots are really good at their job: they generate tens of trillions of bid requests and trillions of ad impressions every WEEK, plus billions of fake clicks. Of course fraudsters don't want this vast money-making operation to be stopped by the good guys, so over time bots were made more advanced and better at evading detection. Not only can bots easily trick detection algorithms, they can also simply block the detection scripts entirely (just like humans block ads). This means the detection tech of the largest, most widely used vendors is not able to "see" them at all. Do you think the LACK of detection in this case means the impression was valid? Of course not. But when these vendors report 1% IVT, most advertisers are doing exactly that -- assuming the other 99% was not invalid, and therefore somehow valid. This is exactly what the ANA and advertisers have chosen to believe, since they have been citing these IVT numbers from fraud detection vendors for years.
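To see why a reported "1% IVT" number can be misleading, here is a minimal sketch with entirely hypothetical numbers (not real vendor data): if the detection script only loads on part of the traffic, the headline rate says nothing about the unmeasured remainder.

```python
# Hypothetical numbers, purely for illustration -- not from any real vendor.
impressions = 1_000_000   # total paid impressions
measurable  = 400_000     # impressions where the detection tag actually loaded
flagged_ivt = 10_000      # impressions the vendor flagged as invalid

# The vendor-style headline number: flagged / total impressions.
headline_ivt_rate = flagged_ivt / impressions

# What the data can actually support: flagged / measured impressions.
measured_ivt_rate = flagged_ivt / measurable

# The share of traffic about which the vendor knows nothing at all.
unmeasured_share = (impressions - measurable) / impressions

print(f"headline IVT rate:                {headline_ivt_rate:.1%}")
print(f"IVT among measured impressions:   {measured_ivt_rate:.1%}")
print(f"share with no measurement at all: {unmeasured_share:.1%}")
```

With these made-up numbers, "1% IVT" really means 1% detected, 2.5% of the measured traffic flagged, and 60% of the traffic never examined at all.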
YOU should actually think of the 1% as the vendors FAILING to detect anything wrong with the other 99%, as opposed to fraud being only 1%. Would you buy a vacuum cleaner that doesn't suck? Of course not. So why are you paying for fraud detection that does suck, and only detects 1% of the fraud? That's like the vacuum cleaner picking up 1% of the dust. See: Why Black Box Fraud Detection Sucks
Fraudsters are falsifying reports and falsely claiming credit
Fraud verification tech companies, some of them public companies, were originally created to detect IVT -- "invalid traffic" hitting websites. Over the years, I have accumulated lots of data showing their tech is not good even at this basic task. What's worse, their algorithms are not even looking for all the other forms of fraud that pervade digital marketing today. Having studied the problem of fraud for the last decade, I have seen more and more cases where others have finally discovered, documented, and corroborated what I have seen. These cases were not reported by the fraud detection vendors whose job it was to detect fraud, but by analytics professionals and reporters, like Craig Silverman, whose job wasn't even to detect fraud. See: 1) How Two Small Businesses Beat Ad Fraud, 2) Google Has Banned Almost 600 Apps For Pushing “Disruptive” Ads
For example, Kevin Frisch, then the analytics practitioner at Uber, discovered that fraudsters were claiming credit for app installs that had already occurred organically. That means humans installed the Uber app because they wanted to, not because they saw an ad and clicked on it. Fraudsters used a technique called click flooding to falsify attribution reports so it looked like they drove the app installs. This way, they could collect the CPI ("cost per install") payouts from Uber under false pretenses. A human, reviewing the analytics, noticed something strange and dug deeper until he found the real cause -- fraud. Had Kevin not done this, the campaign would have continued to appear to perform really well (lots of app installs) while fraud appeared really low (fraud detection vendors didn't report anything wrong). This type of fraud continues to this day, and other advertisers who are not looking as closely as Kevin did are still being ripped off -- they think their digital campaigns are working really well and fraud is really low.
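The mechanics of click flooding can be sketched in a few lines. This is a toy simulation with invented numbers, not Uber's data: the fraudster fires fake clicks against huge numbers of device IDs, and naive last-click attribution then credits the fraudster whenever any of those devices happens to install organically.

```python
import random

random.seed(7)  # fixed seed so the toy simulation is reproducible

# Hypothetical population of device IDs (numbers are illustrative).
devices = [f"device-{i}" for i in range(10_000)]

# The fraudster "click floods": fake clicks are recorded for 80% of devices,
# even though no human on those devices ever saw or clicked an ad.
flooded = set(random.sample(devices, 8_000))

# Meanwhile, 2% of devices install the app organically -- no ad involved.
organic_installs = set(random.sample(devices, 200))

# Naive last-click attribution: any install preceded by a recorded click
# is credited to whoever logged that click.
credited_to_fraudster = organic_installs & flooded

print(len(organic_installs), "organic installs")
print(len(credited_to_fraudster), "falsely credited to the click-flooder")
```

Because the fraudster flooded most of the device population, most of the purely organic installs get attributed (and paid out) to them -- without a single real ad being shown.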
Over the years, I have also encountered "performance marketers" who thought they were immune to fraud because they only paid for "performance" or outcomes like installs, sales, or "foot-fall" in physical stores. I told them not to make that assumption. Here's why. As we saw above, the app installs were real, but fraudsters falsely claimed credit for having caused them, and Uber paid out millions of dollars in cost-per-install ("CPI") fees. Uber could have saved all the money paid to those mobile exchanges, because the app installs would have occurred anyway. In fact, when Uber turned off $120 million of their $150 million in spend, the rate of app installs didn't change -- the installs kept coming -- which tells you they were not driven by the CPI campaigns. We now even have a documented court case where one of the vendors not only falsified reports, they also fabricated reports for ads they had never even run. See: One of Uber's Lawsuits Comes Full Circle. Since Kevin discovered and documented the fraud, Uber has spent three years in a lawsuit against 100 mobile exchanges for app install fraud -- fraud that was not detected or stopped by fraud detection vendors.
The same is happening with sales. How are fraudsters able to generate sales? Bots don't actually buy anything. The fraudsters are again just falsifying the records to claim credit for sales that had already occurred. Everyone has heard of affiliate fraud, documented for more than TWO decades by the great Ben Edelman. Fraudsters use a technique called "cookie stuffing" to falsify affiliate reporting and claim credit for causing sales, so they get paid the affiliate revenue share. In more recent examples, fraudsters (ad tech vendors) are going as far as falsifying the Google Analytics of their own customers to make it appear they drove a sale that had already occurred. I won't get into the technical details here; but if anyone is interested in how this was done technically, I can show you privately. Fraud detection vendors are not catching this because their tech can't see it.
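Cookie stuffing exploits the same attribution weakness. Here is a minimal, hypothetical sketch (the affiliate ID and log format are invented for illustration): the fraudster silently drops its affiliate cookie on a shopper's browser, and naive last-cookie-wins attribution then pays the fraudster when the shopper buys on her own.

```python
def attribute_sale(cookie_log):
    """Return the affiliate credited under naive last-cookie-wins rules,
    or None if no affiliate cookie is present."""
    affiliate_cookies = [c for c in cookie_log if c["type"] == "affiliate"]
    return affiliate_cookies[-1]["affiliate_id"] if affiliate_cookies else None

# Timeline for one shopper who was going to buy anyway.
cookie_log = [
    {"type": "visit", "site": "retailer.example"},
    # An invisible iframe or forced redirect "stuffs" the fraudster's
    # affiliate cookie -- the shopper never clicked any affiliate link.
    {"type": "affiliate", "affiliate_id": "fraudster-123"},
    {"type": "visit", "site": "retailer.example/checkout"},
]

print("sale credited to:", attribute_sale(cookie_log))
```

The sale is credited to "fraudster-123" even though the shopper found the retailer entirely on her own -- the retailer pays a revenue share for a sale it would have gotten for free.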
Finally, how does a fraudster fake "foot-fall" in physical, offline stores? Simple: again by tricking the reporting to make it look like their digital campaigns caused it. By cookie stuffing vast numbers of devices, they make it appear that those devices were "exposed" to ads; then, when a device "walks into a store," the reporting faithfully records that "an exposed device entered a polygon" -- i.e., "foot-fall" in a physical store. Fraud detection is not catching this because it is not even looking for it.
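The "exposed device entered a polygon" logic can be sketched as follows. Everything here is hypothetical -- the coordinates, device IDs, and the simple bounding-box geofence are invented to illustrate the mechanism, not any vendor's actual implementation.

```python
# Store geofence as a simple bounding box (lat/lon), purely illustrative.
# Real systems use polygons; a box keeps the sketch short.
STORE_BOX = (40.7480, 40.7495, -73.9860, -73.9840)  # lat_min, lat_max, lon_min, lon_max

def in_store(lat, lon):
    """True if a location ping falls inside the store geofence."""
    lat_min, lat_max, lon_min, lon_max = STORE_BOX
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

# The fraudster marks devices as "exposed" without showing any real ad
# (the foot-fall equivalent of cookie stuffing).
exposed = {"device-A", "device-B", "device-C"}

# Location pings from devices going about their day -- no ad was ever seen.
pings = [
    ("device-A", 40.7487, -73.9851),  # happens to walk into the store
    ("device-B", 40.6000, -74.1000),  # nowhere near the store
]

# The reporting faithfully counts every "exposed" device inside the polygon.
foot_fall = [d for d, lat, lon in pings if d in exposed and in_store(lat, lon)]
print("attributed store visits:", foot_fall)
```

Device A's visit gets counted as ad-driven foot-fall even though it never saw an ad -- stuff enough devices, and ordinary human movement looks like campaign performance.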
So What?
Do we have independent verification that fraud detection didn't catch the fraud and that digital marketing was not working as well as marketers thought? Yes. Remember these three cases: P&G turned off $200 million of DIGITAL ad spend and saw no change in sales activity. Chase reduced their "reach" (the number of sites showing their ads) from 400,000 sites to just 5,000 (a 99% decrease) and saw no change in business activity. Airbnb cut $800 million in "performance spend," saw no change, and is reallocating a far smaller amount to "branding" and awareness advertising, acknowledging they "over spent" on performance marketing and neglected long-term branding and awareness.
I am sure few remember the eBay example, written up in a 2013 Harvard Business Review article -- "Did eBay Just Prove That Paid Search Ads Don’t Work?" No, eBay didn't prove that search ads don't work; but they did prove that lots of money was wasted on paid search ads, when humans would have searched for, gone to, and bought something from eBay anyway. eBay ran "a controlled experiment where they shut off all Google search ads in a third of the country, while continuing to buy ads everywhere else." Their findings showed "there was no appreciable decline in sales of eBay listings in the part of the country where Google ad purchases were shut off. People who thought to buy guitars via eBay were finding their way to the site anyway, either by clicking on natural listings, or by going directly to eBay’s site without using a search engine at all."
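The analysis behind a geo-holdout test like eBay's is simple enough to sketch. The daily sales figures below are invented for illustration (they are not eBay's numbers): compare sales where ads kept running against sales where ads went dark.

```python
# Hypothetical daily sales index for two groups of regions (invented data).
ads_on_sales  = [102.0, 98.5, 101.2, 99.8, 100.6]  # regions still running ads
ads_off_sales = [101.1, 99.0, 100.4, 100.2, 99.9]  # regions where ads were shut off

def mean(xs):
    return sum(xs) / len(xs)

# Naive "lift": the difference in average sales between the two groups.
lift = mean(ads_on_sales) - mean(ads_off_sales)
lift_pct = lift / mean(ads_off_sales)

print(f"observed 'lift' from ads: {lift_pct:.2%}")
```

With these made-up numbers the observed lift is a fraction of a percent -- the pattern eBay saw. A lift that close to zero suggests the ads were capturing demand that would have arrived anyway, not creating it. (A real analysis would also test statistical significance and control for regional differences.)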
Hopefully the above inspires you to take a closer look at your digital marketing and not assume that it is working well and that fraud is low. Digital campaigns have appeared to perform really well over the last decade because of bot activity (lots of clicks) and fraudsters falsely claiming credit for outcomes -- installs, sales, foot-fall -- that they did not cause. Advertisers are paying "performance budgets" to fraudsters; by cutting off this payout to criminals and complicit ad tech companies, advertisers can substantially increase their bottom-line profits (by saving money that never needed to be spent). Digital campaigns have also appeared to have low fraud because the mainstream fraud detection companies' tech is really bad, and because it is not even looking for most forms of fraud.
P.S. You DON'T have to use FouAnalytics for any of the above; you just need to look more closely at your own analytics and reports, and question everything, especially the stuff that looks weird or "too good to be true." Hat tip to Alex Giedt for showing me the following BS submitted by an agency this month, proclaiming how awesome the digital campaigns performed in Q4 of last year. Yay, "9,584% ROAS!"
P.P.S. This is what the ANA puts out: "ANA To Ad Fraud: Drop Dead, We're Winning" (May 1, 2019) and "New US Fraud Benchmark Study Finds Record-Low IVT Rates in TAG Certified Channels; Advertisers Express Comfort with Sustained 1% Fraud Rate" (November 4, 2021). Does anyone still think they know what they are doing or saying? And then this: Really Bob? Really? Another Supply Chain Transparency Study?