Data was meant to make ads more relevant. How come it's become the primary cause of more bad and fraudulent ad experiences than ever before?

The reports last week that finance media brand Forbes had allegedly operated Made for Advertising subdomains prompted the same 'how does this happen?' questions that the most recent ANA report on programmatic waste sparked.

And these were the same questions sparked in 2017 by the initial Adfin and AANA report.

And they’re likely to be the same questions asked in 2025 and probably in 2030.

How does reputable brand advertising appear on low quality and/or fraudulent websites, delivered in ways (multiple ads per page, over-the-page formats etc) that are incongruent with the brand being advertised?

Let me explain.

The increased push and overpromise of data segments is the primary cause here

Marketers have been sold for a decade on the idea that data driven marketing is the key to success. Investment in the platforms that enable this (CDPs, DMPs, optimisation suites, clean rooms etc) is now likely a billion dollar industry domestically, and marketers are feeling pressure to activate against these significant capital outlays. On top of this there is an entire industry of 3P data brokers enabling other businesses that masquerade as 1P data businesses. So there are literally millions of off the shelf data segments that need media to activate against.

Only Meta and YouTube really have the ubiquity and scale to provide these 'data driven' audiences at any scale within a single environment, and that requires the data to be ported to them. Google Display Network (which is a net of millions of sites) sees data ingested by Google, but the ads are served by Google on sites Google doesn't own.

If you want to activate 1P segments on these platforms you need to upload the data in order to create a 'match'. This means the 1P data quickly becomes their data too. But most advertisers want to activate outside of Meta and YouTube, especially through offerings that package data plus inventory into a wrapped cost. High quality destinations can often activate this data too, but more often than not they cannot provide the required match rate scale on their own, OR they're deemed too expensive in comparison with the floor prices of the open web. This opens the door for millions of random websites.
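
To make the 'match' mechanics concrete, here is a minimal sketch of how an audience upload typically works in principle: identifiers (usually email addresses) are normalised and hashed, then intersected against the platform's own hashed pool. This is an illustration with toy data, not any specific platform's actual API.

```python
import hashlib

def normalise_and_hash(email: str) -> str:
    """Trim/lowercase then SHA-256 - a common convention for audience uploads."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Toy data: an advertiser's 1P list and a platform's own hashed ID pool.
advertiser_emails = ["Jane@example.com", "bob@example.com ", "kim@example.com"]
platform_pool = {normalise_and_hash(e) for e in
                 ["jane@example.com", "kim@example.com", "lee@example.com"]}

uploaded = {normalise_and_hash(e) for e in advertiser_emails}
matched = uploaded & platform_pool
print(f"match rate: {len(matched) / len(uploaded):.0%}")  # 2 of 3 -> 67%
```

Note that once those hashes leave your systems, the platform holds them too - which is the point above about 1P data quickly becoming their data.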

Outside of Meta and YouTube, low 1P or 3P data "match rates" require as wide a net as possible in order to deliver volume and sustain the perception of efficacy for advertisers

Any data segment needs to match a "customer" with a piece of ad inventory. This is not as easy as it's made out to be. It's a race to find these users across the web and across devices, and these users might be contained within 1,000 other segments. So "matching" a user with an opportunity to expose the brand to them can often land in the 30% range or lower. That turns a 50k 1P list into a 15k one. Once viewability, click rate, landing rate, on-site activity etc are factored in, it can be a tonne of work to reach maybe 5-10 people who actually do something. BUT - the wider the net, and the wider the "post impression" window (which logs an impression against a user but doesn't require any action), the better the perception of return on investment.
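
A back-of-envelope version of that attrition. Only the ~30% match rate comes from the paragraph above; every downstream rate is an assumption for the sketch.

```python
# Illustrative attrition on a 50,000-record 1P list.
stages = [
    ("matched",  0.30),   # match rate per the paragraph above
    ("viewable", 0.60),   # assumed viewability rate
    ("clicked",  0.005),  # assumed click-through rate
    ("landed",   0.80),   # assumed click-to-landing rate
]

remaining = 50_000
print(f"{'list size':>10}: {remaining:,}")
for name, rate in stages:
    remaining = int(remaining * rate)
    print(f"{name:>10}: {remaining:,}")
```

Under these assumed rates the 50k list bottoms out at a few dozen people who land on the site - which is exactly why wide nets and wide post-impression windows flatter the numbers.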

A wide net is also required for participants to make material revenue - data fees, ad fees, transactional costs. These are all larger when there’s more volume.

No one in ad tech wants to serve 10,000 impressions at a $3-5 CPM. That's $30-50 of gross revenue. What you want is millions of impressions. With millions of impressions the economics are 100x better for the DSP, SSP, data provider, ad verification company etc. Volume is the game here.
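
The arithmetic behind that, since CPM is cost per mille (per 1,000 impressions):

```python
def gross_revenue(impressions: int, cpm: float) -> float:
    """CPM is cost per mille: dollars earned per 1,000 impressions served."""
    return impressions / 1_000 * cpm

print(gross_revenue(10_000, 4.00))      # 40.0     -> the $30-50 range above
print(gross_revenue(10_000_000, 4.00))  # 40000.0  -> why volume is the game
```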

The wide net opens up misalignment between advertiser and participants in the advertising chain

Advertisers aren’t wanting millions upon millions of impressions - it’s wasteful and there’s little to suggest 10-40x exposures to one person does anything more than 2-4 exposures. Plus it’s extra cost. But the system they’re working in is predicated entirely on volume as charges are all pass through and based on transaction volume. Data is charged at CPM, clean rooms often charge the same, plus verification, DSP, SSP, ad serving - there’s a huge incentive to generate volume here in order to scale revenue.

As a result systemic MFA becomes more prevalent, as does ad stacking (where 10-30 ads are placed on one page to drop cookies, game attribution and generate CPM income) and the use of traffic juicing tactics (slideshows, ad refresh after short intervals).
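
The multiplier effect of those tactics is easy to see: impressions per visit is ads per page times refresh cycles. The 10-30 ads-per-page range is from the paragraph above; the refresh count is an assumption.

```python
# How one pageview becomes many billable impressions (illustrative figures).
ads_per_page = 20       # ad stacking: text above cites 10-30 ads per page
refresh_cycles = 4      # assumed auto-refreshes during a single visit

impressions = ads_per_page * (1 + refresh_cycles)
print(impressions)  # 100 billable impressions from a single visit
```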

MFA is really designed to create a way to attach a data segment to a user impression. The more valuable the segment/impression the better. It's a model that has popped up because it can generate high yield revenue with cheap operational costs. And because the perceived result is positive (more matched users) there's little incentive to investigate too closely. Hence why businesses like Adalytics and FouAnalytics are the main groups uncovering these things.

If you ask your advertising partners, they will all say this is happening, but that they don't do it and have protections in place

This might be true, or it might not be. But if you're activating either 1P data segments or 3P packaged segments outside of Meta and YouTube, it's worth being sure about what is actually happening.

At a panel I spoke at recently, I was asked what I would do tomorrow to clean this up if I was a marketer. My actions would be:

  • Consolidate the domains my ads appear on and move to inclusion lists, not exclusion lists. Exclusion lists (known unfortunately as blacklists) are a game of whack-a-mole where the onus is on the marketer to identify unsuitable sites. Inclusion lists are the reverse, and the vetting can occur before a domain is added (a minimal enforcement sketch follows this list).
  • Rethink the use of 1P and 3P data segments, or at least vet their provenance and analyse the cost vs benefit equation. Most brands found on trash domains aren't there because they want to be; they're there because someone who advises them has decided it's the only way to "activate data segments" at any scale. Data segments need a label like food has for nutrition - this is a hundred million dollar industry where no one knows what they're ingesting.
  • Once this activity is cleaned up, run suppression tests in large markets (Adelaide, Geelong, Newcastle etc) to understand the incremental contribution of this specific "data driven" activity. Is it incremental, or is it misattributing activity/sales that would have happened due to another factor? (A minimal lift calculation also follows this list.)
  • It needs to be noted that 'cookieless' solutions will also suffer from this challenge, especially around data and the ability to match (as they require the same ultimate mode of delivery)
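
On the first point, a minimal sketch of what inclusion-list enforcement looks like in logic terms; the vetted hostnames are hypothetical. Matching the full hostname rather than the registrable domain matters here, because the Forbes reports concerned subdomains of an otherwise reputable site.

```python
# Default-deny: serve only on exact, pre-vetted hostnames (hypothetical list).
INCLUDE = {"www.publisher-a.example", "www.publisher-b.example"}

def eligible(hostname: str) -> bool:
    # Exact-host match, so an unvetted subdomain of a vetted domain is skipped.
    return hostname.lower() in INCLUDE

for host in ["www.publisher-a.example", "www3.publisher-a.example"]:
    print(f"{host}: {'serve' if eligible(host) else 'skip'}")
```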
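
On the third point, a minimal sketch of the suppression-test readout under an assumed geo-holdout design: the activity keeps running in control markets, is paused in holdout markets, and per-capita outcomes are compared. All figures are invented for illustration.

```python
# Invented figures: sales and population for each market group.
control = {"sales": 12_000, "population": 1_000_000}  # activity running
holdout = {"sales":  5_900, "population":   500_000}  # activity suppressed

rate_control = control["sales"] / control["population"]  # 1.20% of population
rate_holdout = holdout["sales"] / holdout["population"]  # 1.18% of population
lift = (rate_control - rate_holdout) / rate_holdout

print(f"incremental lift: {lift:.1%}")  # ~1.7%
```

A lift near zero suggests the "data driven" activity is mostly claiming credit for sales that another factor would have produced anyway.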

This is not a bash on programmatic or data. I am an advocate of what programmatic technologies can do to improve the delivery of ads. I am an advocate of what targeting can deliver and how data can be used to improve advertising efficacy. This is intended to explain the unintended consequences that have become prevalent, and remain generally unsolved, as a result of this matched model.

Sebastian Graham

Director, Agency Development APAC

7 months

Best article on this that I've read, Ben. I 100% agree with every point.

Jonathan Pangu

Head of Marketing & Consultant

7 months

good read, thanks Ben. What will the consequences for Forbes be, do you think? I note, with some incredulity, that they themselves have penned articles on the dangers of digital ad fraud 🤦‍♂️ https://www.forbes.com/sites/forbesagencycouncil/2023/11/07/ad-fraud-the-biggest-threat-to-the-advertising-industry/?sh=5094548e1773

Ben Shepherd

Advertising, marketing + Media. Subscribe to Signal. Currently building what's next.

7 months

Tim Burrowes relevant to your piece today
