Introducing H-Index (Humanness Index) from FouAnalytics
Everyone knows I've been harping on how bad and fraudulent things have been in programmatic channels for the last decade. Time to turn this around and celebrate the good publishers, who have resisted for years the temptation to juice their own revenues through fraudulent means, like bought traffic. Everyone also knows that there are simply too many bad sites and apps to add to a block list, and that an inclusion list approach is the way to go from here. What follows is a way for you to do that, and "save face."
Low CPM prices are NOT lower costs and ARE higher risk
You may have previously seen my article Human CPM (hCPM) and Marketing to Humans. The main point was that real publishers have real human audiences. Sellers in programmatic channels may have some humans, but the numbers are small, very small. These are the sites and apps that few humans have heard of, and few humans visit (sites) or use (apps). That's why they have to buy traffic in order to make more ad revenue.
The other key concept is that you may think you are getting a good deal because CPM prices have dropped for the last decade and you are buying ads at far cheaper prices than you were a decade ago -- e.g. $3 CPMs in programmatic channels versus $30 CPMs direct from publishers. But make no mistake, you are not saving any costs. That's because CPMs are prices, not costs. And when you buy ten times the quantity at $3 prices, you are still spending the same $30 in cost. So you didn't save any costs, even though the prices appear to be 1/10th of what they are when buying from real publishers. And now, because you are buying low-priced inventory from exchanges, you are at higher risk of fraud and waste.
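To make the price-versus-cost distinction concrete, here is a minimal sketch; the $3 and $30 CPMs and the impression counts are just the illustrative numbers from the paragraph above, not data from any real campaign:

```python
def total_cost(cpm: float, impressions: int) -> float:
    """Total spend in dollars; CPM is the price per 1,000 impressions."""
    return cpm * impressions / 1000

# Buying 1 million impressions direct from a real publisher at a $30 CPM
direct = total_cost(cpm=30.0, impressions=1_000_000)        # $30,000

# Buying 10X the quantity in programmatic channels at a $3 CPM
programmatic = total_cost(cpm=3.0, impressions=10_000_000)  # $30,000

print(direct == programmatic)  # True -- same total cost, despite the "cheaper" price
```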
IVT is not enough and not good enough
Further, for the last decade, ad buyers have wasted even more money paying legacy fraud verification vendors, whose tech consistently reported low invalid traffic ("IVT") -- around 1%. Advertisers now realize that the low fraud numbers were not because fraud was actually low, but because the legacy fraud vendors couldn't detect most of it. Bots have been actively blocking these vendors' tags, so the vendors have no data; and without any data, they could not label anything as IVT/fraud. Let me re-emphasize: their own reports show that "no data" and "not measured" are reported as "no fraud." That is not right.
But low IVT numbers don't mean high humans. None of these verification vendors measure for humans, not even the one named "human." If you focus on humans, it will change your perspective and free you from the "low CPM" trap.
Let's do a thought exercise. From real, live campaign data measured by FouAnalytics, I can see 50% dark red in a programmatic media buy. That means half of the impressions are going to "confirmed fraud" sources. You can think of the above as 1) "I didn't need to buy 50% of the impressions because those ads were not shown to humans anyway," or 2) "I need to buy 2X the quantity to get my ads in front of something that was 'not fraud'," or 3) "my actual CPM would be 2X higher if I subtracted out the useless ads shown to bots."
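Option 3 is just arithmetic: divide the CPM you paid by the share of impressions that were not confirmed fraud. A quick sketch using the 50% dark-red figure from this thought exercise (the $3 CPM is the same illustrative price as above):

```python
def effective_cpm(paid_cpm: float, fraud_share: float) -> float:
    """Price per 1,000 impressions after discarding confirmed-fraud (dark red) impressions."""
    return paid_cpm / (1.0 - fraud_share)

print(effective_cpm(paid_cpm=3.0, fraud_share=0.50))  # 6.0 -- i.e. 2X the CPM you thought you paid
```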
hCPM - humanCPM
Now, if we focus on the clicks that arrive on your landing pages, we can see the ratio of humans (dark blue) versus bots (orange and red). Remember the slide above with 15 cases of clicks from programmatic campaigns, as measured by FouAnalytics on the landing pages? If you squint, you may be able to see the dark blue (human) clicks. Let's say one of those pie charts shows 10% dark blue. That means 10% of the clicks were human; the other 90% were not human. How many more ad impressions would you need to buy to get the same number of human clicks? Right, 10X more. That means that even though you paid $3 CPMs as the unit price, you'd need to spend ten times more -- $30 -- to get an equivalent number of human clicks.
What if only 5% of the clicks were dark blue? Then you'd need to buy 20 times more ad impressions to get an equivalent number of clicks from humans. And what if only 2% of the clicks were human (as many of the above charts show)? Then your effective CPM is 50X higher than what was reported to you by your media agencies. They report blended average CPMs and claim they got you better prices. The term they keep using -- "cost efficiency" -- is misleading as hell, right? It was hidden from you that you actually paid 20 - 50 times more AND your ads went to crapps (crappy mobile apps), fake sites, and breitbart. Not good.
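The same division, applied to clicks instead of impressions, gives the hCPM for the 10%, 5% and 2% scenarios above (again assuming the illustrative $3 CPM):

```python
def hcpm(paid_cpm: float, human_click_share: float) -> float:
    """human CPM: what you effectively paid per 1,000 impressions once only human (dark blue) clicks count."""
    return paid_cpm / human_click_share

for share in (0.10, 0.05, 0.02):
    print(f"{share:.0%} human clicks -> hCPM ${hcpm(3.0, share):.2f}")
# 10% -> $30.00 (10X), 5% -> $60.00 (20X), 2% -> $150.00 (50X)
```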
To drive this point home one more time, in the slide above you see five paid display sources. Source 1 has the highest CPM prices, while Source 5 has the lowest CPM prices. Using old CPM-based thinking, you'd probably want to buy more from Source 5 because CPM prices are lower. But look on the right side at the color coding -- dark blue (humans) vs dark red (bots). Source 1 has a lot more dark blue than Source 5. Taking humanness into account and focusing on just the humans that arrived on the site, the relative hCPM for Source 5 is 11X (eleven times) higher than for Source 1. Because you are getting so few (dark blue) humans from Source 5, your actual hCPMs are that much higher. You didn't get a good deal. Using new hCPM-based thinking, you'd realize that you should buy more from Source 1 because more of your ads are shown to humans and hCPMs are lower. hCPMs force you to focus on humans and help you break free from the low CPM "cost efficiency" lie. And they mesh with the "optimize towards humans, not just away from bots and fraud" concept.
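As a hedged sketch of that comparison -- the CPM prices and human shares below are hypothetical placeholders chosen only to reproduce the 11X ratio, not the measured values behind the slide -- the ranking flips once you price per human:

```python
def hcpm(paid_cpm: float, human_share: float) -> float:
    """Effective price per 1,000 human (dark blue) arrivals."""
    return paid_cpm / human_share

# Hypothetical numbers, for illustration only.
source_1 = hcpm(paid_cpm=10.0, human_share=0.55)  # highest price, mostly humans   -> ~$18 hCPM
source_5 = hcpm(paid_cpm=2.0,  human_share=0.01)  # lowest price, almost no humans -> $200 hCPM

print(round(source_5 / source_1, 1))  # ~11.0 -- Source 5 costs ~11X more per human than Source 1
```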
H-Index from FouAnalytics
Let me bring this home by introducing the H-Index (Humanness Index) by FouAnalytics. Based on more than 10 years of data, directly measured on good publishers' sites and in programmatic ads, I have historical data on which publishers can actually be considered good publishers. Not only have they avoided "bought traffic," they also have highly human audiences. The excerpted slide from 2015 below shows what good publishers look like -- lots of dark blue, very little dark red. Based on these measurements -- high humans and low bots -- publishers' domains are given a "humanness score."
I have drastically simplified the computations from 2015 to arrive at an index score -- i.e. all publishers indexed against each other -- the Humanness Index, or "H-Index" for short. The H-Index is a number between 0 and 100. 100 is the best -- i.e. highest humans and lowest fraud. The requirement for publishers to get a score of 100 is to have FouAnalytics code directly measuring on the site and at least 6 months of data. In the tables below you will see the highest ranked good publishers, both sites and apps (further below). Interestingly, you will recognize these sites and apps, right?
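The exact scoring formula is not spelled out here, so purely as an illustration -- my assumption, not the actual FouAnalytics computation -- an index with the stated properties (more dark blue raises the score, more dark red lowers it, and the top score requires direct on-site measurement with at least 6 months of data) could look something like this:

```python
def h_index_sketch(dark_blue_share: float, dark_red_share: float, months_measured: int) -> int:
    """Illustrative humanness index on a 0-100 scale.
    NOT the actual FouAnalytics H-Index formula; this sketch only mirrors the
    properties described in the article: high humans and low bots score near 100,
    and a site without enough directly measured data cannot reach the top."""
    if months_measured < 6:  # assumption for this sketch: insufficient measurement history
        return 0
    raw = 100.0 * dark_blue_share * (1.0 - dark_red_share)
    return max(0, min(100, round(raw)))

print(h_index_sketch(dark_blue_share=0.95, dark_red_share=0.01, months_measured=12))  # 94
print(h_index_sketch(dark_blue_share=0.20, dark_red_share=0.50, months_measured=12))  # 10
```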
Hopefully, the H-Index from FouAnalytics will help you curate inclusion lists of sites and apps to use in your programmatic campaigns. Be aware that these lists will be super tiny, the number of impressions you will be able to buy will be orders of magnitude less than you are used to, and the CPM prices will be way higher. But you won't need to buy billions of ad impressions any more, where 50% or more of the ads are not shown to humans anyway. If you are getting lots of your ads in front of humans, your actual outcomes will be better and your cost per acquisition will still be lower.
Good publishers, please get in touch. Note that publishers that cheat with bot/bought traffic and other forms of fraud will not put FouAnalytics on their sites because they will be caught, and their H-Index will be 0.
Further reading: How Site-Owners Use FouAnalytics to Troubleshoot Bot Traffic