Blowing Away IAS' and DV's stupid non-response blog posts
Two weeks ago, the Wall Street Journal broke the news [1] that for over 3 years, Google misrepresented out-stream, muted, non-skippable, and off-page ads as genuine "TrueView, skippable, in-stream video ads," which they sell at a premium. The story was based on a 200-page research report from Adalytics [2], complete with screenshots, code snippets, and lists of advertisers impacted.
In a key chart below, the research showed that in a campaign for a large brand, 16% of the ads were valid TrueView ads that ran on YouTube. Another 11% were valid TrueView ads that ran on GVP ("Google Video Partner") sites and apps. In total, 27% (16% + 11%) of the ads were valid TrueView ads. Of those, 59% (16% divided by 27%) ran on YouTube and the remaining 41% (11% divided by 27%) ran on GVP sites and apps outside of YouTube. These were valid TrueView ads that ran both ON YouTube and outside of YouTube.
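The arithmetic behind those shares is simple to reproduce. A minimal sketch in Python, using the figures from the chart:

```python
# Shares of all impressions in the sampled campaign (figures from the chart)
valid_on_youtube = 0.16  # valid TrueView ads served on YouTube
valid_on_gvp = 0.11      # valid TrueView ads served on GVP sites and apps
invalid_trueview = 0.73  # ads that did not meet Google's TrueView definition

valid_total = valid_on_youtube + valid_on_gvp            # 27% of all ads
youtube_share_of_valid = valid_on_youtube / valid_total  # ~59% of valid ads
gvp_share_of_valid = valid_on_gvp / valid_total          # ~41% of valid ads

print(f"valid TrueView overall:   {valid_total:.0%}")
print(f"  of which on YouTube:    {youtube_share_of_valid:.0%}")
print(f"  of which on GVP:        {gvp_share_of_valid:.0%}")
print(f"invalid TrueView overall: {invalid_trueview:.0%}")
```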
The other 73% of ads were INVALID TrueView. As you can see, ALL of these invalid TrueView ads ran outside of YouTube. A variety of reasons made these ads "invalid" -- the ads were out-stream (which means they did not run with YouTube content videos), muted (not audible), off-page (not viewable), and in some cases non-skippable (the skip button was covered up by something else, like another display ad loaded on top of the video ad). About 3 in 4 ads were NOT TrueView ads, according to Google's own definition. Only 1 in 4 ads were up to the standard of a genuine TrueView ad. Much of the reporting focused on the on-YouTube versus off-YouTube issue, which is important, but not the key point: these ads were misrepresented and sold as TrueView ads to 1,100 advertisers. One more key point: Google's own documentation shows that IVT ("invalid traffic") explicitly includes "Misrepresentation of in-stream and out-stream video." So these out-stream video ads misrepresented as in-stream ads fit the definition of IVT.
Google's blog post in response to the scandal -- https://blog.google/products/ads-commerce/transparency-and-brand-safety-on-google-video-partners/ was a perfect illustration of misdirection and obfuscation. They assembled a series of statements that were true on the surface, but did not address any of the specific evidence presented in the independent research report. For example, Google wrote "The overwhelming majority of video ad campaigns serve on YouTube." That is a true statement, because all campaigns run ads on YouTube and/or video partner sites and apps. That did not address the data that showed 84% of ads ran off of YouTube, of which 11% were valid TrueView and 73% were NOT VALID TrueView. I wrote a separate article refuting every lie, misdirection, and non-answer from Google's blog post on the matter -- https://www.dhirubhai.net/pulse/refuting-every-lie-misdirection-non-answer-googles-response-fou
Advertisers were left wondering why the legacy verification vendors did not catch any of this, across billions of impressions and over the last 3 years. Today, both DV and IAS wrote blog posts.
IAS's stupid non-response
IAS wrote a short blog post: https://integralads.com/insider/ias-measurement-youtube-google-video-partners-inventory/ (screenshot below, in case they try to edit something to wiggle out of the lie). In the blog post, IAS repeatedly lied, saying they "measured billions of ad impressions on YouTube and GVP inventory." They did not MEASURE anything, because no third party javascript detection tags are allowed on YouTube or in any of the video ad units running outside of YouTube on GVP.
IAS, like DoubleVerify, is given data by YouTube through Ads Data Hub (ADH). NONE of these vendors actually measured anything with a javascript detection tag. Further, it is comical to see them citing numbers like "average Viewability rate was 93.56% and Invalid Traffic rate was 0.29%" with two decimal places of precision when they didn't actually measure anything and were simply given the data, precision unknown. It is even funnier that they report Invalid Traffic of "0.29%" when Google's own definition of IVT shows that 3 in 4 ads (the 73% from above) were invalid because they involved out-stream ads misrepresented as in-stream ads. IAS caught only 0.3% of the 73%.
This is certainly consistent with the ineptitude of their IVT/bot/fraud detection tech as well, which consistently reports 1% fraud when there is 60 - 95% fraud in the campaigns. Why do they always report such low fraud, even when the bots are obvious bots? It's because the bots are very good at blocking their detection tags. In the code snippet below, captured in 2014, we can see the tag-serving domain explicitly detected by the bots and blocked. The bot maker was smart enough to return a status 200, which tricks the page into believing the tag was loaded when it was not. When the javascript detection tag is not loaded, IAS has no data with which to mark the bot as "invalid." This is the same problem we have documented for years, and the YouTube scandal is the latest example of their failure to detect something so obviously invalid. They can't even do the job they are being paid to do.
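The 2014 snippet is captured as an image, so here is a hypothetical sketch of the technique it illustrates (the domain name and code below are made up for illustration, not the bot's actual source): the bot intercepts outgoing script requests, and for any URL on a known verification-tag domain it returns an empty HTTP 200 instead of fetching the real tag, so the load looks successful but no detection code ever executes.

```python
# Hypothetical illustration of a bot blocking a verification tag while
# faking a successful load. The domain below is invented for this sketch.
BLOCKED_TAG_DOMAINS = {"tags.example-verifier.com"}

def intercept_request(url, real_fetch):
    """Return (status, body) for a script request made inside the bot's browser."""
    host = url.split("/")[2]  # e.g. "tags.example-verifier.com"
    if any(host.endswith(domain) for domain in BLOCKED_TAG_DOMAINS):
        # Fake success: HTTP 200 with an empty body. The page's load event
        # fires normally, but no detection javascript is ever executed,
        # so the vendor has no data with which to flag the bot.
        return 200, ""
    return real_fetch(url)  # all other requests pass through untouched

# The verification tag request is swallowed:
status, body = intercept_request(
    "https://tags.example-verifier.com/tag.js", lambda u: (200, "real js"))
print(status, repr(body))  # prints: 200 ''
```

The point of the empty 200 (rather than a blocked or failed request) is that nothing on the page looks broken; "tag loaded" and "tag executed" are two different things, and only the second produces data.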
In any case, any vendor can cite data from any campaign showing that more ads ran on YouTube than on GVP. Even the original research from Adalytics showed that more valid TrueView ads ran on YouTube (59%) than off of YouTube (41%). What IAS didn't address was why they failed to detect the 73% INVALID TrueView ads, or at the very least detect that those ads were NOT viewable or audible. They cited a bunch of stats that may very well be true, but they did not address ANY of the specific data from the original research. The moral of the story: 1) don't trust the data supplied by the vendor you are buying from -- in this case YouTube; and 2) don't trust the fraud and viewability reports from legacy verification vendors, because they didn't do any actual measurement with javascript detection tags and were simply performing calculations on data that YouTube gave them. THAT's the sham of "independent, third party verification."
DV's stupid non-response
DV wrote a blog post too -- https://doubleverify.com/doubleverify-measurement-on-youtube-and-google-video-partners/ -- screenshotted below, in case they try to edit anything afterward. More PR gibberish. They also lied when they wrote "DV measures fraud and viewability on media buys across Google Video Partners (GVP) inventory." They did not measure anything with a javascript tag; YouTube gave them data to perform calculations on. Further, since YouTube does not allow any third party detection tags, there is no place to put these tags when setting up a campaign. That means that whether the ads ran on YouTube itself or on GVP, there are no javascript detection tags anywhere. DV did not measure any of the ads on GVP because there is no way to even get a tag into the video ad unit that runs on a GVP site or mobile app.
The key issue, AGAIN, is that they didn't measure anything with a javascript tag. DV was given data by YouTube, and they faithfully reported viewability rates in the mid-90s, audibility rates in the high 90s on YouTube and around 80% off YouTube, and SIVT ("sophisticated invalid traffic") at 1%. Again, Google's own definition of IVT shows 73% (3 in 4) of the ads cited by the research to be IVT -- invalid. DV's blog post says it's 1%. The 1% is not all the fraud there is; it's all that they can catch. Here's further data.
They assume that no data means no fraud. Looking across the first row in the slide above, you can see nearly 800 million ads "monitored." That's the total count of ads. The column just to the left shows 83 million ads "measured" -- that means measured with a javascript tag. So roughly 1 in 10 ads were measured with a javascript tag; 9 in 10 ads were not measured at all. But the entire first row, nearly 800 million impressions, was marked as "100% fraud free." How is that possible if they DIDN'T measure 9 in 10 ads? In the YouTube case above, they DIDN'T measure 10 in 10 ads (they measured no ads); they were given the data to perform calculations on. Does anyone believe these jokers' fraud, brand safety, and viewability numbers any more? How can they miss marking ads that ran off-page, muted, behind other display ads, etc. as non-viewable? That was their job; they failed, going back 3 years, across 1,100 advertisers. Press them on this -- how many ads were actually measured with a javascript tag in your campaigns? That's a far simpler question than asking them to explain why something was marked as fraud or not (they won't know anyway). This is just asking them how many ads they even measured. at. all.
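Using the round numbers above (roughly 800 million impressions "monitored" and 83 million "measured" with a tag; both figures approximate), the gap is easy to quantify:

```python
monitored = 800_000_000  # impressions counted ("monitored")
measured = 83_000_000    # impressions actually measured with a javascript tag

measured_share = measured / monitored  # 83M / 800M = ~10%
print(f"measured with a tag: {measured_share:.0%}")      # ~1 in 10
print(f"not measured at all: {1 - measured_share:.0%}")  # ~9 in 10
```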
So what?
No one cares about the specifics here. Adalytics documented campaigns that had large portions of impressions misrepresented and sold as TrueView ads when they were not. DV and IAS each cited their data which shows a lot of ads ran on YouTube and few ads ran off of YouTube. They didn't address the ads that were INVALID by viewability standards and Google's definition of TrueView. These legacy verification vendors continue to fail at the job they were paid to do. And through this latest scandal we now know that they are not actually measuring anything on YouTube with a javascript tag; YouTube gives them data and they perform calculations for viewability, audibility, and IVT. Who thinks this should be called "independent, third party verification"? Who thinks it's OK for them to assume "no data is no fraud" and report to you nearly "100% fraud free" when they DIDN'T measure 9 in 10 ads? You should NOT trust the vendor you are buying from (YouTube) nor the sham verification vendors they use to cover up the truth (report 95% viewable, 97% audible, and 1% SIVT).
You should have your own analytics for ads. I built FouAnalytics as a toolset for myself to audit advertisers' campaigns, because I didn't trust anyone else's tech or their data. That was 11 years ago. I still don't trust anyone else's tech or data today, nor should you. I opened the platform up to others in 2020. You are welcome to use FouAnalytics at no cost. Since we cannot measure the ads on YouTube or GVP (because there's no way to add a third party tag), you place the FouAnalytics on-site tag on the landing pages. I am running YouTube campaigns myself, and I have made sure to turn off GVP (video partners) so my ads remain on YouTube itself. The clicks I am getting (right side below) are great (low dark red) and my YouTube campaigns are working nicely. I can see it in the FouAnalytics data, so I am confident in how I set up and run these campaigns.
Watch more: https://www.youtube.com/@augustinefou