Are you buying a vacuum that doesn't suck? Yes, you are

Would you buy a vacuum that doesn't suck? Vacuums are supposed to suck up dust and dirt; if they don't suck, they don't work. Would you buy technology that doesn't work as it's supposed to? No, that would be a waste of money. But that is what digital media buyers have been doing for the last ten years -- buying digital ads with programmatic ad tech, along with add-on services like fraud detection, viewability measurement, and brand safety verification. While some of the tech works, most of it doesn't work well, or at all. Some of the tech is literally "held together by bubble gum and scotch tape," as the saying goes. In this article I will show examples of the limitations of the tech. I won't even get into examples of deliberate falsification of data or ad fraud.


Tech doesn't and can't replace hard work, experience, and insights


Let's start with a timely post from this morning. Jagadeesh J. wrote: "Do you use an app fraud detection tool? Never rely only on them to solve your ad-fraud problem completely. Have your digital marketing and analytics team work continuously in parallel. I have worked with multiple tools and experienced many points of failure, both in detecting large-scale fraud and in mistakenly raising false alarms.

1. This happened with a non-MMP fraud detection tool.

Their dashboard reported 35% of these installs as fraud, and the reasons were all the usual suspects. It made us realize that most of these tools are rule engines, and they are incentivized to report fraud numbers to keep their customers happy.

2. The second example is from the MMP's integrated fraud detection tool.

We added a new affiliate partner, and suddenly our daily install volume tripled in the MMP dashboard. Registrations from these installs were very poor, and there was no sign of installs increasing in any other system like Firebase or our internal analytics.

The fraud detection system didn't even sense any issue for more than two weeks. We manually found the problem and took action.

So at least in today's context, no tool is a replacement for your team's knowledge or intuition."

Valid points, and consistent with my experience with ad tech for at least the last decade. Don't get me wrong, the engineers building the tech are doing the best they can. Everyone realizes how hard it is to "fly the plane while still building it" -- or is it the other way around, "build the plane while flying it"? But that's exactly what has happened in the last decade with programmatic ad tech. Rarely was there downtime to make fixes, and major code overhauls were nearly impossible because they meant turning off the revenue flywheel for a period of time. In any case, that's why we're seeing poor tech, outdated tech, and tech patched with bubblegum. When you are trying to process 15 trillion bid requests per week and decide whether to bid within 50 milliseconds, the technology is constantly pushed to its limits. And that is why we see gaps and failures.


Frequency caps not enforced

Even when I set frequency caps ("f-caps") to 1 per day and 1 for lifetime, the data shows that the same user is still being shown more than a dozen of my ads. In FouAnalytics, a fingerprint is an anonymous representation of a unique device/browser. The counts next to each fingerprint show the number of ads shown to that same unique device/browser -- nowhere near the f-cap of 1 that I set in the campaign settings. This is not fraud, but it exposes the limits of the tech. Note that ad tech bros will blame this on Apple (boo hoo, won't let them collect device IDs so we can't do f-caps), Firefox (boo hoo, won't let them set cookies so we can't do f-caps), and Google (boo hoo, doing away with 3rd party cookies in Chrome, so we can't do f-caps). Those are just excuses for the tech not working well, or at all. But don't cry for them, Argentina.
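If you want to check this yourself, here is a minimal sketch of a frequency-cap audit, assuming you can export impression-level data with an anonymous device/browser fingerprint per row. The column name "fingerprint" is hypothetical, not a real export format.

```python
# Minimal sketch of a frequency-cap audit over an impression-level export.
# The "fingerprint" column name is a hypothetical assumption.
from collections import Counter
import csv

F_CAP = 1  # the per-user cap set in the campaign settings

def audit_fcap(log_path: str, cap: int = F_CAP):
    # count impressions per anonymous device/browser fingerprint
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["fingerprint"]] += 1
    violations = {fp: n for fp, n in counts.items() if n > cap}
    print(f"{len(violations)} of {len(counts)} fingerprints exceeded the cap of {cap}")
    return violations
```

If the tech were enforcing the cap, the violations dict would be close to empty; in my data it is not.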



Supply path leakage


Even when I target a single domain, as an inclusion list experiment, a quarter of my ads didn't end up where I wanted. In this experiment I specified 1 domain but did not limit the exchanges. Only 73% of my ads went to the right sellerID (the one the domain owner told me was the right one). That means a quarter of my ads went somewhere else (some of those others were bundles, which hid the fraud). When I also specified 1 exchange (the one the domain owner told me they sold through), the percentage went up to 97%, so only 3% of my ads went somewhere else. This is an example of supply path complexity and leakage: the more exchanges involved and the more possible supply paths, the greater the leakage -- e.g. a quarter of the ads leaked elsewhere. By locking the campaign down to 1 exchange and limiting the possible supply paths, the accuracy went up. Again, this is not fraud, just a limitation of the technology. So digital media buyers buying "direct" should specify not only the domain but also the single exchange that the seller prefers to work with -- there is no reason to let your money flow through dozens of supply paths and leak out to fraud or something else.
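Here is a rough sketch of how you might quantify that leakage yourself from impression logs, assuming you can get the sellerID per impression; the field names and the authorized sellerID are illustrative assumptions, not any particular platform's schema.

```python
# Sketch of measuring supply-path "leakage": what share of impressions
# cleared through the sellerID the publisher says is authorized?
from collections import Counter

def seller_share(impressions, authorized_seller_id):
    # impressions: iterable of dicts like {"domain": ..., "exchange": ..., "seller_id": ...}
    by_seller = Counter(i["seller_id"] for i in impressions)
    total = sum(by_seller.values())
    share = by_seller.get(authorized_seller_id, 0) / total if total else 0.0
    return share, 1 - share, by_seller  # (authorized share, leakage, full breakout)
```

In my experiment, the authorized share was roughly 73% with no exchange restriction, and 97% after locking the buy to the one exchange the publisher actually uses.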


Inventory type is not correct


I specified NO mobile apps, but FouAnalytics detection shows that 50% of my ad impressions went to crap mobile apps. How convenient that craptech fraud detection reports don't break out which mobile apps those were. They dump all of it into a single row of the report called "mobile in-app," effectively hiding the crappy apps from your view. And they marked all of the mobile in-app as "fraud free."

This has been a constant and consistent complaint for ten years. Even setting aside fraud, where fraudsters deliberately falsify the domain or app name in the bid request, this is another example of the limits of the tech. The other limitation is that most buyers are not looking. See: Billions of ads misdirected, and no one noticed. The tech is limited and humans make errors, by accident or on purpose.
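If your reports lump everything into "mobile in-app," a quick breakout like the sketch below can at least tell you how much of your spend is going there despite your settings; the field names are assumptions for illustration.

```python
# Sketch: break out a placement report by inventory type and warn when
# in-app impressions appear despite an app exclusion in the campaign settings.
from collections import Counter

def check_inventory(rows, allow_in_app=False):
    by_type = Counter(r.get("inventory_type", "unknown") for r in rows)  # field name is an assumption
    total = sum(by_type.values())
    in_app = by_type.get("mobile_in_app", 0)
    if not allow_in_app and in_app and total:
        print(f"WARNING: {in_app / total:.0%} of impressions are mobile in-app despite the exclusion")
    return by_type
```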


Reporting shows unknown or not specified

What do you do if your placement reports show "(not set)" or "unknown" or "n/a"? How would you know where your ads ran? How would you know IF your ads ran at all? You don't know. The DSPs and exchanges are sending placement reports, but do buyers look closely enough to spot the problems? Missing domains and app names are one thing; does anyone else notice that most of the "domains" in the list are not content domains like nytimes.com or wsj.com where ads actually ran? MoPub is the ad-serving domain for mobile app ads, amazon-adsystem is the ad-serving domain for Amazon ads, and so on. You can't buy facebook.com ads from open exchanges; the same goes for pandora.com ads. So even though they are providing domain-level placement reports, are those reports accurate or even remotely helpful if the report itself says it doesn't know where half of your ads went?
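A simple triage pass over the placement report makes the problem visible. This sketch assumes rows with a "domain" field; the ad-serving domain list here is illustrative only, not exhaustive.

```python
# Sketch of a hygiene pass over a domain-level placement report:
# separate unknown/unset rows and ad-serving domains from actual content domains.
UNKNOWN_VALUES = {"(not set)", "unknown", "n/a", ""}
AD_SERVING_DOMAINS = {"mopub.com", "amazon-adsystem.com"}  # examples only, not a complete list

def triage_placements(rows):
    unknown, ad_serving, content = [], [], []
    for r in rows:
        domain = r.get("domain", "").strip().lower()
        if domain in UNKNOWN_VALUES:
            unknown.append(r)
        elif domain in AD_SERVING_DOMAINS:
            ad_serving.append(r)
        else:
            content.append(r)
    return unknown, ad_serving, content
```

If the "unknown" and "ad_serving" buckets hold most of your impressions, the report is telling you it doesn't really know where your ads ran.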



You paid for bids won, but did the ads serve, or render on screen?


It wasn't until 2017 that DoubleClick finally changed its counting method from ads "served" to "downloaded" impressions. "Served" simply meant that the ad was called from the server (not that it was actually delivered), and even "downloaded" only meant that the download process started, with no information about whether the ad finished downloading to the device and then rendered on screen. In FouAnalytics, the in-ad tag is set to asynchronous, which means it fires when the ad has finished rendering on screen. So it is a good measure of ads that finished downloading and rendered on screen, where they could be seen. When combined with data from the DSP (number of bids won) and the ad server (number of ads served), we can study what percentage of the ads you paid for actually made it on-screen. In one example, 1 million ads were purchased, 931,000 ads were served, and 745,514 ads rendered on screen. To put it succinctly, you only got about 3/4 of what you paid for. This is not even counting fraud; this is just the limits of the tech.
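The arithmetic is simple enough to sanity-check yourself; here is the funnel from the example above.

```python
# The bids-won -> served -> rendered funnel from the example above.
bids_won = 1_000_000   # DSP: ads purchased (bids won)
served   =   931_000   # ad server: ads served
rendered =   745_514   # FouAnalytics: ads that finished rendering on screen

print(f"served / won:      {served / bids_won:.1%}")    # ~93.1%
print(f"rendered / won:    {rendered / bids_won:.1%}")  # ~74.6% -- roughly 3/4 of what was paid for
print(f"rendered / served: {rendered / served:.1%}")    # ~80.1%
```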


Max CPM bids are not enforced


In my campaign, when I set a max CPM bid of $0.10 (a 10-cent CPM bid), I was shocked to find that I had blown my entire day's budget in less than an hour. When I looked more closely, the data showed that I had paid between $0.24 and $0.32 CPMs. How did that happen? They are still investigating, but a breakout by exchange shows that ads were transacted on at least 7 exchanges at far higher CPMs than my $0.10 max.
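This is easy to catch if you audit the clearing prices yourself. A minimal sketch, assuming you can export win-level rows with a clearing CPM; the field names are hypothetical.

```python
# Sketch: flag cleared transactions that exceed the configured max CPM bid.
MAX_CPM = 0.10  # the max bid set in the campaign settings

def flag_overpriced(rows, max_cpm=MAX_CPM):
    # rows: iterable of dicts like {"exchange": ..., "clearing_cpm": ...}
    over = [r for r in rows if float(r["clearing_cpm"]) > max_cpm]
    if over:
        worst = max(float(r["clearing_cpm"]) for r in over)
        print(f"{len(over)} impressions cleared above ${max_cpm:.2f} CPM (worst: ${worst:.2f})")
    return over
```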


Brand safety tech is crappy keyword lists, and causes more harm


Aside from the programmatic ad tech for serving ads, the underlying tech from other services like fraud detection, brand safety detection, and viewability measurement are also "vacuums that don't suck" -- i.e. they don't work as advertised. The following example is one of many where a mainstream publisher's article on "winter crafts" was flagged as "positive for animal cruelty." That flag means the ads are blocked from the page and the publisher loses money, even though the brand safety label was nonsensically incorrect. Many other examples have been documented over the years, like 96% of Pulitzer Prize-winning journalists' articles being marked as NOT brand safe, and therefore de-monetized; and brand safety tech blocking ads on the homepages of major newspapers because the word "covid-19" was on the page. The tech is no more than rudimentary keyword lists, and one vendor publicly blamed its own customers for defunding the news because they had not created whitelists of news publishers. Why are those customers paying that brand safety tech company for tech that doesn't work and that makes extra work for the customer? Bad, bad, bad, bad badtech.
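To see why keyword lists misfire like this, consider a naive substring match, which is essentially what the rudimentary approach amounts to. The blocklist and article text below are made up for illustration; no vendor's actual list is shown here.

```python
# Sketch of how a naive keyword-list check produces false positives.
BLOCKLIST = ("cruelty", "kill", "covid-19")  # made-up keyword list

def naive_brand_safety(page_text: str):
    # flags any page whose text contains a blocklisted keyword as a substring
    text = page_text.lower()
    return [kw for kw in BLOCKLIST if kw in text]

article = "Winter crafts: ways to kill time indoors with cruelty-free craft glue."
print(naive_brand_safety(article))  # ['cruelty', 'kill'] -- a harmless page gets blocked and de-monetized
```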


Useless exclusion lists


Let's end with a funny example. deepsee.io reviewed the exclusion list of "one of the largest media agencies in the world; it has hundreds and hundreds of thousands of entries." What they found was that 71% of the entries were dead, gone. Those sites have long since stopped being used by fraudsters. Having so many domains in the list means the DSP and ad exchanges have to compare the domain in every bid request against hundreds of thousands of domains in the list. That is a huge computational burden. Guess what actually happens? They just scan the first 10,000 and skip the rest. After all, they only have 50 milliseconds to decide whether to bid or not. So don't assume that having a large exclusion list will keep you out of trouble. Better to have a super tiny INCLUSION list with the domains of good publishers that have real human audiences. There's no way to block all the bad sites and apps, because the bad guys can create unlimited more.
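To illustrate the point (this is a simplification, not how any particular DSP is actually built): a truncated linear scan misses anything past the cutoff, while a set lookup built once outside the bid path can check the entire list within the bid window.

```python
# Illustrative only: truncated scan vs. full set lookup against a large exclusion list.
exclusion_list = [f"fraud-site-{i}.example" for i in range(300_000)]  # made-up list size

def truncated_scan(domain, blocklist, limit=10_000):
    # what "scan the first 10,000 and skip the rest" looks like in practice
    return domain in blocklist[:limit]

exclusion_set = set(exclusion_list)  # built once, outside the 50ms bid path

def full_lookup(domain, blockset):
    return domain in blockset  # constant-time check per bid request

bad_domain = "fraud-site-250000.example"
print(truncated_scan(bad_domain, exclusion_list))  # False -- missed, it's past entry 10,000
print(full_lookup(bad_domain, exclusion_set))      # True
```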


So What?

There are countless more examples of the limits and failures of the tech that I have documented over the last decade. I run my own experimental campaigns so I can see this data. I built the FouAnalytics platform and understand the limitations and nuances of what can be done with which tech and coding language. I have looked at data every day for the last ten years, and I can tell you the tech is not great. I'm not even going to ask you the hypothetical "are you buying badtech and craptech that does suck?" I will tell you straight up that you are wasting your money paying for fraud detection, viewability measurement, and brand safety detection, and you are paying for programmatic ad tech that is not delivering what you expect.

You don't have to take my word for it. You are welcome to use FouAnalytics to measure your campaigns, see what I've seen over the last 10 years, and make better decisions based on insights you simply didn't have before, because the other tools you had been paying for were very limited.


Further reading:

Concrete (Hard) Questions You Should Ask Your Verification Vendor

Thank you DoubleVerify for Exposing Yourself

Robert Pawlowicz

Head of Machine Learning for AIM at Kochava Inc

2 yrs

Couple of points to add to the post by Jagadeesh J... Point 1) most, but not all, fraud detection tools are built the same. It is true that the majority of fraud tools use outdated rules and simplistic statistical thresholds. Even the more sophisticated platforms make inferences from very poor datasets. There are, however, fraud detection tools that have been built around an understanding of how the fraud is perpetrated. For example, if you know how fake clicks are generated, you can catch publishers in the act. This is the primary principle used at Machine Advertising. We at Machine have one of the most sophisticated detection tools in the industry. However, we're not perfect, so your point stands that you need to keep your eyes open. Advertisers know their own product better than anyone. Use that knowledge to supplement an independent/third-party fraud tool.

Dr. Augustine Fou

FouAnalytics - "see Fou yourself" with better analytics

2 yrs

Jagadeesh J. cited your post in this article; thanks again for sharing your learnings and insight
