Truly Valuable (Expensive) Lesson Learned
What happens when you have a crazy idea on a Friday afternoon to run another experiment by multiplying your CPM by 100X to see what the results are? I've been running 1 cent CPM campaigns continuously for the last 3 years, to keep tabs on all the vile bottom-of-the-barrel sh*t that goes on with new fraud domains and apps that come and go.
But I wanted to see what would happen when I raised the CPM 100X to $1 -- would any better apps and domains show my ads? Would click-through rates change? Would I see any more humans or any fewer bots in the data?
Before you read on, formulate your own answers in your mind first - what would you expect to see?
I'll give you some highlights and observations here. Anyone interested in the more detailed results and reports can contact me privately to discuss in more detail.
- mobile app frenzy, no views were measurable - in the $1 experiment, the vast majority of impressions were eaten up by mobile apps, none of which had measurable views; take a closer look at the apps
- mostly sites, views could be measured - in the 1 cent experiment, most of the impressions were on sites, and views could be measured; take a closer look at the domains (would you want your ads on any of these? Have you even heard of any of these domains?)
DSP report Feb 7th - First 50 rows of $1 CPM experiment
DSP report Feb 4th - First 50 rows of $0.01 CPM (1 cent) experiment
Ad Server Data (we run our own Revive ad server)
Now let's look at the ad server results. Feb 4th was all 1 cent inventory. Note the strange consistency of the clicks and click-through rates, averaging about 0.2%. Feb 7th (yesterday afternoon) was when I turned on the $1 CPM experiment. Note that all of the impressions were blown through in less than 2 hours, 16:00 - 17:59 ET.
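That hour-after-hour flatness is itself a signal worth testing for. Here is a minimal sketch (the hourly counts below are hypothetical stand-ins, not the actual report rows): human traffic produces CTRs that swing with volume, placement, and time of day, while programmed clicking tracks impressions so tightly that the coefficient of variation collapses toward zero.

```python
import statistics

# Hypothetical hourly (impressions, clicks) pairs illustrating the pattern:
# clicks scale with impressions so the CTR stays pinned near 0.2%.
hourly = [(48_000, 95), (52_000, 105), (210_000, 420), (35_000, 71)]

ctrs = [clicks / imps for imps, clicks in hourly]
cv = statistics.stdev(ctrs) / statistics.mean(ctrs)  # coefficient of variation

# A near-zero CV across wildly different volumes is a classic bot signature.
print(f"mean CTR {statistics.mean(ctrs):.4%}, CV {cv:.2f}")
```

A simple threshold on that CV (flag anything implausibly small) is one cheap way to screen hourly reports for this kind of mechanical clicking.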
- no change in average CTR of 0.2% - strangely, the CTRs remained 0.2% on average. Keep in mind that virtually all of the 1 cent inventory on Feb 4th was from sites and virtually all of the $1 inventory on Feb 7th was from mobile apps. And yet, regardless of impression quantity or source, the clicks and therefore the CTRs hovered around 0.2% hour after hour.
- large discrepancies between DSP and ad server counts - for the 1 cent inventory, the ad server reported about 970k ads served, while the DSP report above totaled 680k impressions won (about 43% more ads served than impressions won); for the $1 inventory, the ad server reported 1.1 million ads served while the DSP report showed 2.2 million impressions won (50% fewer ads served than bids won).
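The discrepancies above are simple to quantify. A minimal sketch using the rounded counts from this post (the function name is mine, for illustration):

```python
def discrepancy_pct(ad_server_count, dsp_count):
    """Percent difference of the ad server's count relative to the DSP's."""
    return (ad_server_count - dsp_count) / dsp_count * 100

# 1 cent inventory: ~970k served vs 680k won -> ad server ~43% higher
print(f"{discrepancy_pct(970_000, 680_000):+.0f}%")     # +43%
# $1 inventory: 1.1M served vs 2.2M won -> ad server 50% lower
print(f"{discrepancy_pct(1_100_000, 2_200_000):+.0f}%") # -50%
```

Small gaps between systems are normal (different count points, discarded bids, timeouts); swings of this size in opposite directions are not.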
#FouAnalytics details - in-ad tag in every ad impression
- the Feb 4th data shows the 1 cent CPM inventory; the 943k impressions measured by the #FouAnalytics tag come relatively close to the ad server quantity of 969k; note the 29% confirmed fraud (dark red)
- the Feb 7th data shows the $1 CPM experiment; the quantity measured by the #FouAnalytics tag (2.3 million) matches the bids won per the DSP (2.2 million) but is roughly double the number from the ad server (1.1 million). I don't have an explanation for how this occurs. Anyone with ideas, please let me know. Note the 31% confirmed fraud (dark red).
- There is virtually no difference in fraud rate (29% vs. 31%) between the ongoing 1 cent experiment and yesterday's $1 CPM experiment, even though the sites and apps on which the ads ran were completely different.
Finally, a fun chart from AWs that shows the surge in traffic (from our ad tag being called) yesterday afternoon. Note the times are in GMT instead of EDT.
My takeaways are that if you buy inventory through programmatic, you will get lots of volume and lots of fraud. The levels of fraud and strangeness in the data (e.g. all CTRs are the same at all times) do not change whether you pay 1 cent CPMs or 100X that. My hypothesis is that it will be no different for $10 CPM inventory purchased through programmatic exchanges (regardless of whether it is PMP or open exchange).
Note that all of this is paid for on my own credit card. Hopefully enough small business owners read this so they don't have to go through their own painful experiment/experience with sucky digital ads and blow nearly $3,000 in about an hour on a Friday afternoon. The only thing I can conclude from this is that regardless of what CPM you pay, I can find no evidence that supports that my ads were shown to humans visiting sites or using mobile apps.
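For anyone who wants to sanity-check the spend: CPM billing is simply impressions divided by 1,000, times the rate. A sketch with the rounded counts from this post (real invoices also carry DSP and exchange fees on top, so actual spend runs higher than the raw media math):

```python
def cpm_cost(impressions, cpm_dollars):
    # CPM = cost per mille: dollars per thousand impressions
    return impressions / 1000 * cpm_dollars

print(cpm_cost(2_200_000, 1.00))  # the ~2-hour $1 CPM burst: $2,200 of media
print(cpm_cost(680_000, 0.01))    # a day of 1 cent inventory: about $6.80
```

That 300X-plus cost difference for statistically identical fraud rates is the whole lesson in two lines.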
For contrast, see below for a super long term chart (dating back to 2016) of what a good publisher's site looks like. The only way to get quality "inventory" is to buy it directly from good publishers. No amount of bot detection or brand safety detection will help you detect your way out of trouble.
Sadly, despite the overwhelming evidence from the last 8 years, big brand advertisers and their trade associations - ANA, IAB, 4As, TAG, MMA -- and CMOs who love to spend budget and get invited to speak at conferences, will continue to toe the industry line that fraud is low and that they solved it; so they keep buying.
About Me: “I consult for advertisers and publishers who actually want to know the truth and who have the courage to do something when they find ad fraud. I am not a fraud detection tech company that relies on fraud to continue. I show my clients the supporting data so they can understand and verify for themselves what is fraud and what is not fraud. If they agree, they can take the necessary actions to eliminate the fraud while campaigns are still running, rather than post-mortem fraud reports and trying to get their money back.”
#FouAnalytics details here - https://www.dhirubhai.net/pulse/fouanalytics-alternative-google-analytics-fraud/
Follow me here on LinkedIn and on Twitter @acfou
Further reading: https://www.slideshare.net/augustinefou/presentations
Director, Marketing & Growth Analytics
2 years ago: I could see why the ad server and DSP have differences in reporting (they never match), and buying directly from publishers also seems to deliver better quality inventory, in my experience. But in your test… it's almost ALL fraud!
The Business Growth Locksmith | A Global Community Driven Relocation Marketplace
4 years ago: This is so important for all businesses that look at headline stats and believe they are getting value. Great piece, Dr. Augustine Fou - Ad Fraud Investigator
Were you using multiple exchange sources or a single source? I would reach out to the exchange directly and compare log file detail. Let them know your inquiry is because of the large discrepancy in reported impressions. Have you also considered using a 3rd party solution to measure fraud, to ensure your own tech is calibrated to what most advertisers are doing in market on an ongoing basis? I checked out a few of the sites on your TLD list and it's just amazing, some of the stuff being monetized on the open web.
Self employed consultant.
5 years ago: Great example of why a traditional media audit doesn't cut it for digital media. A digital media auditor has to be able to dive into the details of the DSP, win vs. served rates, understand what tags can and cannot do, plus compare what's happening in the ad server versus what's showing up on the site. Then throw in what was paid for versus what was received.
Head of Digital Transformation
5 years ago: Great experiments, thanks for sharing. Which DSP did you use?