Benchmarking is not a useful exercise. Here's what to do instead.
A few days ago, Digiday reported that the "ANA is planning a programmatic benchmarking service." That's a nice idea, but it may not be that useful, and it may even distract advertisers from the real issues at hand. Why? Because fraud and waste hide easily in averages.
Remember the following slide from 2016? What do you do when you see a click-through rate of 9.4%? You might celebrate and report to your boss that your campaigns are doing very well, that they are "over-benchmark," because typical click-through rates are in the 1% range. What that average does is hide the fraud that would be easily identified if individual line-item details were provided. For example, when we looked at the click rates from each of the top sites in the campaign, we noticed an impossible 100% click rate: every ad impression was clicked, not by humans but by bots trying to earn the cost-per-click revenue share (they have to click in order to get the CPC revenue).
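To see how a blended average hides an impossible per-site click rate, here is a minimal sketch. The site names and numbers are hypothetical, not from the slide; the point is that a campaign-level CTR can look merely "good" while one line item is clicking at 100%.

```python
# Hypothetical per-site line items for a display campaign.
# site: (impressions, clicks) -- numbers are illustrative only.
sites = {
    "news-site-a.com": (500_000, 5_000),     # 1.0% CTR, plausible
    "blog-site-b.com": (300_000, 2_400),     # 0.8% CTR, plausible
    "bot-site-c.xyz":  (100_000, 100_000),   # 100% CTR, impossible for humans
}

total_imps = sum(i for i, _ in sites.values())
total_clicks = sum(c for _, c in sites.values())

# The blended campaign average looks like a great CTR...
print(f"campaign average CTR: {total_clicks / total_imps:.1%}")

# ...but the per-site breakdown exposes the bot site immediately.
for site, (imps, clicks) in sites.items():
    print(f"  {site}: {clicks / imps:.1%}")
```

Only the line-item view tells you which site to add to a block list; the blended number alone gives you nothing to act on.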
Benchmarking, which blends the results of 35 advertisers into averages of viewability, fraud, and brand safety, washes out the details. That means you won't have enough detail to act on. You won't see the sites and apps committing 100% fraud, because they're blended away into the average, so you won't know which sites and apps should be added to block lists. What does it mean, then, if you over-index or under-index the industry-wide average benchmark? What actions could you take as a result? Those actions become far less clear when you're comparing yourself to a blended average and the necessary details are lost.
What's a good number (for fraud, viewability, bounce rate, time on site, etc.)?
Let me use another simple example to illustrate: time on site. Is MORE time on site better, or LESS? Your instinct may say "more time on site is better." But have you considered that the user spent more time on your site because they couldn't find what they wanted, and left pissed off at you? What about lower time on site? Is that worse? Have you considered the cases where someone found exactly what they needed on your site very quickly and left very happy? Over the years, when people have asked me "what's a good time on site?" I tell them there is no universal benchmark for it. For medical journal sites, doctors visit, read one article to find the answer they need, and leave. Doctors don't generate a ton of pageviews per session. On the other hand, users can spend hours looking at cute kitten pictures and funny memes. Is that long time on site inherently better than the short time doctors spend on medical journal sites? You get the point: there's no universally good benchmark.
So what's the answer? For EACH advertiser, the best benchmark to compare against is their own organic and direct visitors. Organic visitors arrive on the site after clicking an organic search result. Direct visitors come to your site directly because they already know you. How much time do they typically spend on your site? How many pages do they look at per session? These tell you the nature of visitors who want to be on your site, or who came to your site looking for more information. You can then compare the visitors arriving from your various paid media sources: paid search, paid social, programmatic display, video ads, etc. Do they over-index or under-index the comparable characteristics of organic and direct visitors? If the visitors from your paid media over-index your own internal benchmark, they are valuable visitors. But if the paid visitors left right away and didn't do anything on the site, they are lower value.
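The internal-benchmark comparison above can be sketched in a few lines. All figures here are hypothetical; the idea is simply to index each paid source against a baseline built from your own organic and direct visitors.

```python
# Sketch of an internal benchmark: index paid traffic sources against
# organic + direct visitors. All numbers are hypothetical.
# source: (avg seconds on site, avg pages per session)
sources = {
    "organic":              (180, 3.2),
    "direct":               (200, 3.5),
    "paid search":          (170, 3.0),
    "paid social":          (40,  1.2),
    "programmatic display": (5,   1.0),
}

# Internal benchmark: blend of organic and direct visitors.
bench_time = (sources["organic"][0] + sources["direct"][0]) / 2
bench_pages = (sources["organic"][1] + sources["direct"][1]) / 2

# An index near (or above) 1.00 means the paid source behaves like
# visitors who actually wanted to be on the site; far below 1.00
# means low-value (or non-human) visitors.
for name in ("paid search", "paid social", "programmatic display"):
    t, p = sources[name]
    print(f"{name}: time index {t / bench_time:.2f}, "
          f"pages index {p / bench_pages:.2f}")
```

With numbers like these, paid search would index close to your own benchmark while programmatic display would index near zero, which is exactly the kind of actionable detail an industry-wide average cannot give you.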
What's the benchmark fraud rate you'd report?
Now, let me show you some in-ad measurements from some of the largest advertisers, fully anonymized of course. You tell me how you would blend these stats together, and whether those blended averages would be useful. These are 6 display ad campaigns out of hundreds and hundreds. What's the average fraud rate? Dark red appears to be 13%, 15%, 25%, 69%, 56%, and 17%. What average should we report in the benchmark? Should it be a weighted average (accounting for the volumes) or a straight average of those 6 numbers, roughly 32%?
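The choice between a straight and a weighted average is not cosmetic. A quick sketch using the six dark-red percentages from the charts (the impression volumes are hypothetical, since the article doesn't give them) shows how far apart the two numbers can land:

```python
# Straight vs weighted average of the dark-red (fraud) percentages
# from the 6 campaigns. Impression volumes are hypothetical.
fraud_rates = [0.13, 0.15, 0.25, 0.69, 0.56, 0.17]
volumes = [5_000_000, 2_000_000, 800_000, 300_000, 150_000, 4_000_000]

# Straight average: every campaign counts equally.
straight = sum(fraud_rates) / len(fraud_rates)

# Weighted average: campaigns count in proportion to their volume.
weighted = sum(r * v for r, v in zip(fraud_rates, volumes)) / sum(volumes)

print(f"straight average: {straight:.1%}")
print(f"weighted average: {weighted:.1%}")
```

With these assumed volumes, the weighted average comes out far lower than the straight average, because the worst campaigns happen to be small. Neither number tells any individual advertiser anything useful about their own 13% or 69%.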
What's the average human rate (dark blue)? 27%, 6%, 22%, 6%, can't see it, 52%. What average should we report in the benchmark? The white area means "not measurable" or "no javascript data." What's the average "not measurable" percentage we should have in the benchmark report? Finally, notice the yellow line overlaid on the green volume bars? That represents the viewability of the display ads. Some of the charts show 50% average viewability across the time range; other charts show nearly 0% viewability. And what about that large surge in red bots and green volume in the 4th advertiser's chart? Note that the viewability (yellow line) also jumped to 90%. These are typical of MFA sites, which show exactly what advertisers want: high viewability, high clicks, and low fraud (according to legacy vendors that can't catch more than 1% of the fraud).
Do you want nice sports scores to look at and compare yourself to, or do you want detailed analytics that you can act on?
Anyway, I hope the 6 wildly different examples above are enough to convince you that a benchmarking exercise to get industry-wide averages may not be worth the effort and may not yield actionable insights anyway. The recommendations for each of the 6 advertisers above are entirely different. They don't need to compare themselves to an industry-wide benchmark. Each advertiser needs to know what specifically they can do in their own specific case.
Here's how advanced advertisers are using FouAnalytics to monitor and manage their own campaigns - https://www.dhirubhai.net/pulse/advertisers-take-back-control-fouanalytics-dr-augustine-fou-kbece/
Here's how ecommerce advertisers use FouAnalytics to optimize their digital media - https://www.dhirubhai.net/pulse/how-ecommerce-advertisers-use-fouanalytics-dr-augustine-fou-tcsre
Here's a campaign that I am managing (dashboard screenshot below) - https://www.dhirubhai.net/pulse/campaign-managed-me-fouanalytics-0df4e
Further reading: 626 other articles on digital marketing, analytics, and ad fraud