Revisiting Viewability - Don't Trust, Always Verify

What if only 0% to 10% of your ads were viewable, instead of the 70%+ you are used to seeing in reports? You thought viewability was an issue that was solved years ago? That's what verification vendors and your agency keep telling you.

OK, they're not deliberately lying to you. They simply don't know how their tech works, or IF their tech works at all. Who am I talking about? The sales reps and account folks who work at legacy fraud and viewability verification vendors. What am I talking about? Whether they are detecting viewability at all, let alone detecting it correctly.


From a vendor's own report

Let me start by walking you through one of the vendors' reports, focusing on "viewable rate." The bracketed numbers show you where to look.

Notice in the area of [1] that "delivery site, app bundle, and app name" are all N/A, which means they have no data on any of this. Yet on the right hand side of that row, near [3], you see the viewable rate is 84%. Isn't it strange that, with no delivery site or app identified, they could report 84% viewability? Looking more closely at [2] shows that only 2% of the impressions were measured with a javascript tag (5,798,596 measured impressions divided by 353,504,562 monitored ads). If they only measured 2% of the ads with a javascript tag, how could they report that 84% of the ads were viewable? Detecting viewability accurately REQUIRES running a javascript tag. If you're loading a static .gif tracking pixel, it can only do "counting" (the number of times it was loaded); it cannot detect anything like a javascript tag can.

In this case, they are reporting 84% viewability when they have no data on the delivery site, app bundle, or app name. That's because they only measured 5.8 million of the ads with a javascript tag, out of the 353.5 million total impressions (a 2% measurement rate). If you're the advertiser paying for this service, would you be OK with this -- that they didn't measure it, but still reported that 84% of the impressions were viewable?

Now look at [4] and [5]. This vendor only measured 17 million impressions with a javascript tag out of 159.5 million impressions (11%, or about 1 in 10). And they are telling you it's 91% viewable. Do I have to say the quiet part out loud, or can you draw your own conclusions about whether the viewability numbers reported by this vendor are sus?
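The arithmetic above is easy to replicate yourself on any vendor report. Here is a minimal sketch, using the two rows' numbers from the report above; the 50% flagging threshold is my own illustrative choice, not an industry standard:

```python
def measurement_rate(measured: int, monitored: int) -> float:
    """Fraction of monitored ads actually measured with a javascript tag."""
    return measured / monitored

rows = [
    # (label, measured impressions, monitored ads, reported viewable rate)
    ("N/A site/app row", 5_798_596, 353_504_562, 0.84),
    ("second row", 17_000_000, 159_500_000, 0.91),
]

for label, measured, monitored, reported_viewable in rows:
    rate = measurement_rate(measured, monitored)
    # If only a tiny fraction of ads was measured, the reported viewable
    # rate describes that measured slice, not the campaign as a whole.
    flag = "SUSPICIOUS" if rate < 0.5 else "ok"  # illustrative threshold
    print(f"{label}: measured {rate:.1%} of ads; "
          f"reported {reported_viewable:.0%} viewable; {flag}")
```

Running this shows measurement rates of roughly 1.6% and 10.7% -- tiny slivers of the campaign from which the vendor extrapolated 84% and 91% viewability.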

Keep in mind that the measurement issues mentioned above are related to the tech of the verification vendors. On top of that, bad guys actively falsify viewability measurements too. In 2018, Newsweek was caught running malicious code that falsified viewability measurements so that 100% of their ads appeared to be 100% viewable 100% of the time. See: https://www.buzzfeednews.com/article/craigsilverman/newsweek-ibt-malicious-code-ad-fraud

With the above in mind, how much confidence do you have in the accuracy of the viewability numbers being reported to you right now? Again, let me stress that it's not the employees of the verification vendors or your agency who are lying to you. They simply don't know, and have no way of knowing, whether the viewability numbers in the reports are accurate or not. So they just report the numbers to you as-is. But by looking at the reports yourself, you will be able to spot the errors.


Note in the examples above that viewability in ongoing campaigns can be wildly different -- from 0.3% viewable to 47.6% viewable. "Viewable:1" means the ad had an opportunity to be seen (50% of its pixels were in the viewport), and the percentage to the left shows what percent of the ads in a campaign are marked as viewable by FouAnalytics. For some background on the methodology and ingredients that go into viewability measurement in FouAnalytics, see https://www.dhirubhai.net/pulse/viewability-ivt-out-of-geo-brand-safety-fouanalytics/

Also note that an opportunity to be seen is different from the ad actually being seen. Sometimes the ad doesn't arrive in time to be displayed on screen. For example, when you are scrolling down a page on your smartphone, you may see an ad slot that is not yet filled with an ad. You keep scrolling, and by the time the ad fills in, you're already way further down the page and never saw it.

Why didn't the MRC catch all this during the accreditation process? The MRC accredits vendors for measuring what they said they WOULD measure. That is different from measuring viewability correctly. The MRC also does not check live campaigns to see whether the tech is working as intended. One thing that SHOULD be corrected in accreditation: when vendors didn't measure an ad impression with a javascript tag, they should NOT be allowed to label the ad as "viewable" or "fraud free." That is literally misleading.

For those advertisers measuring ads with an in-ad FouAnalytics tag, you can see the average viewability (yellow line) overlaid on top of the green volume bars at the bottom third of the time series chart. Eyeballing the example below, viewability is about 50% -- not the 80 - 94% reported by other legacy verification vendors. And in the FouAnalytics platform, we only report viewability if we measured it with a javascript tag.

More articles with screen shots and examples here: https://www.dhirubhai.net/today/author/augustinefou


Let me know if you want to use FouAnalytics to "see Fou yourself" what the real rate of viewability is in your existing campaigns. There is no cost to run a pilot. You just copy and paste a FouAnalytics JavaScript tag into your DSP or ad server (or have your agency do that for you).




Marc Dhalluin

Would Your BRAND ask you to LEAD it NO MATTER What? THINK. Carefully.

12 months

Dr. Augustine Fou in the rows where there was clearly insufficient measurement yet a viewability score is provided, where does that viewability score come from? If it relates to the viewability of the impressions measured, are the legacy vendors then relying on statistical sampling? If so, how often will this be inaccurate enough to be of concern?

Fredrik Hallberg

We bring scientific evidence to business decisions

1 year

The metric opportunity to see (OTS) or average frequency is based on how many people had the opportunity to see the ads. Client- and server data don’t measure people so it is not really OTS either.
