OLV (online video): one advertiser's "oh, f**k" moment seeing FouAnalytics data
The "Oh, f*******k!" moment
"Oh, f*****************************************************************************************k!" exclaimed a marketing manager from a global beverage advertiser when we first reviewed FouAnalytics data on a video campaign. "It's mostly white!" (white in FouAnalytics means no javascript parameters were collected). He realized that the billions of OLV impressions he was buying could be entirely fabricated, run on fraudulent sites and apps, run without sound, or not run at all. That means that potentially 100% of his video ad spend could be useless. "Oh, f***k!"
Video was the predominant ad format they purchased across all geographies. Looking at FouAnalytics data was the first time he realized how UNMEASURABLE these ads were under current standards (hence the large amount of white in the FouAnalytics dashboard). The VAST standard only allows a static .gif pixel for verification of video ads. A static .gif pixel cannot collect any of the information needed to detect sound-on or play duration. Only javascript tags can DETECT whether the sound was on, whether the video ad was actually playing, and how far it played.
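To make the contrast concrete, here is a minimal sketch (hypothetical function names, not FouAnalytics code) of what a javascript tag running on the page can observe that a static .gif pixel, which is just a single HTTP request, cannot:

```javascript
// Sketch: a javascript tag can read playback state; a .gif pixel carries none.
// Pure helper: classify how far a video ad actually played and whether sound was on.
// muted, playedSeconds, and duration would come from the player's <video> element.
function classifyPlayback(muted, playedSeconds, duration) {
  const fraction = duration > 0 ? playedSeconds / duration : 0;
  let quartile = "not-played";
  if (fraction >= 1.0)       quartile = "complete";
  else if (fraction >= 0.75) quartile = "thirdQuartile";
  else if (fraction >= 0.5)  quartile = "midpoint";
  else if (fraction >= 0.25) quartile = "firstQuartile";
  else if (fraction > 0)     quartile = "started";
  return { soundOn: !muted, quartile };
}

// In a browser, a measurement tag would wire this to the video element, e.g.:
//   video.addEventListener("timeupdate", () =>
//     report(classifyPlayback(video.muted, video.currentTime, video.duration)));

// A muted ad that played 8 of 30 seconds:
console.log(classifyPlayback(true, 8, 30));
```

A .gif pixel fired at the "complete" event proves only that some client requested a URL; it cannot tell you the ad was audible, visible, or even rendered.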
Furthermore, static .gif pixels are used to report "completion rates." Note the firstQuartile, midpoint, thirdQuartile, and complete.gif pixels in the IAB specifications. If you've been seeing 80-90% completion rates in the reports you were given, that means the fraudsters were invoking the complete.gif measurement pixel to falsify the reporting and make it appear that most of your video ads were watched to completion, even if your ads never ran at all.
A good publisher honestly and correctly invokes tracking pixels
When working with an honest publisher that correctly deployed the OM SDK and honestly invokes the tracking pixels at the correct times -- firstQuartile, midpoint, thirdQuartile, and complete -- you see the reality of video ads. See the FouAnalytics data below from a live video ad campaign. In example A, out of 20,578 video ad impressions, 23 video ads played to firstQuartile, 17 to midpoint, 14 to thirdQuartile, and 10 to completion. In example B, out of 1,000,000 impressions, 93 video ads played to firstQuartile, 77 to midpoint, 59 to thirdQuartile, and 38 to completion.
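Converting those pixel-fire counts into rates is simple arithmetic; here is a quick sketch using the numbers from the two examples above:

```javascript
// Turn raw quartile pixel-fire counts into percentage rates of impressions.
function quartileRates(impressions, counts) {
  const rates = {};
  for (const [name, n] of Object.entries(counts)) {
    rates[name] = (100 * n / impressions).toFixed(3) + "%";
  }
  return rates;
}

// Example A: 20,578 impressions (counts from the text above)
console.log(quartileRates(20578, {
  firstQuartile: 23, midpoint: 17, thirdQuartile: 14, complete: 10,
}));

// Example B: 1,000,000 impressions
console.log(quartileRates(1000000, {
  firstQuartile: 93, midpoint: 77, thirdQuartile: 59, complete: 38,
}));
```

That works out to a completion rate of roughly 0.05% in example A and 0.004% in example B -- orders of magnitude below the 80-90% figures commonly reported.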
What completion rates are being reported to you? I bet it wasn't anything like this. Time to re-check your video ads, or at least use FouAnalytics to measure them more closely. Don't let the platforms grade their own homework and convince you to keep spending.
The failure of legacy fraud verification vendors
The legacy verification vendors paid to measure video ad fraud for advertisers failed to tell their own customers that fewer than 1 in 1,000 video ads -- virtually zero -- were actually measured by a javascript tag. This means they did not collect the data necessary to detect obvious bots and obvious fake sites fabricating the video ad impressions and falsifying the completion rates and placement reports. You don't have to take my word for it. Just ask them to pull a placement report at the domain level, and be sure to include the following two columns: 1) monitored ads, and 2) measured impressions. The former, "monitored ads," is the quantity of ads you paid them to monitor for you -- for fraud, brand safety, etc. The latter, "measured impressions," is the number of impressions actually measured with javascript tags. Without javascript to detect if the sound was on, if the ad was played, and for how long, these legacy vendors should NOT have reported "no problems."
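The check described above can be sketched as simple arithmetic on those two columns. The field names and the 1-in-1,000 threshold below are illustrative, not taken from any vendor's report format:

```javascript
// Sketch: flag rows of a placement report where almost nothing was
// actually measured with javascript tags. Field names are illustrative.
function flagUnmeasured(rows, minMeasuredRate = 0.001) {
  return rows
    .map(r => ({ ...r, measuredRate: r.measuredImpressions / r.monitoredAds }))
    .filter(r => r.measuredRate < minMeasuredRate)  // fewer than 1 in 1,000 measured
    .map(r => r.domain);
}

// Hypothetical placement report rows:
const report = [
  { domain: "example-site-a.com", monitoredAds: 500000, measuredImpressions: 120 },
  { domain: "example-site-b.com", monitoredAds: 200000, measuredImpressions: 150000 },
];

console.log(flagUnmeasured(report)); // domains measured fewer than 1 in 1,000 times
```

Any domain this flags is one where the vendor billed for monitoring but collected essentially no javascript measurement data.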
The combination of 1) legacy verification vendors' failure to detect fraud and inaudible ads, and 2) reports that showed high completion rates is what led advertisers to believe their video ads were running correctly, as they expected.
So what did he do?
He turned off the video campaign entirely. And he spun up display campaigns, which are achieving the same awareness goals at a much lower CPM -- which means they can actually buy more real impressions, shown to humans.
Optimize towards humans with FouAnalytics
Another large advertiser (consumer packaged goods) chose to stay with video ads. Note the optimizations they made in the chart below. They systematically optimized toward humans (the amount of dark blue went up after each change). When you run video ads on real publishers' sites, the ads are shown to users in-stream, pre-roll, with the option to turn sound on, and completion rates are reported to you correctly. None of that is the case when you run video ads on programmatic sites and apps -- the same MFA sites and apps doing shady things with display ads. You're just wasting more money by buying video ads from them.
Run display campaigns to build your own inclusion list for video
Another process we put in place lets us discover new, good sites and apps to add to our inclusion lists for video campaigns. We run lower-cost display ads, which are measurable with FouAnalytics javascript tags. We look at actual human engagement with the ads and at the attentiveness of the humans who click through to the landing page. The sites and apps that send attentive humans to the landing page go into a new inclusion list, or get added to our existing ones. Sites and apps proven to be good for display ads are likely to be good for your video ads too.
For more screenshots and case examples, see and subscribe here: https://www.dhirubhai.net/in/augustinefou/recent-activity/newsletter/