How to stop advertisers from unwittingly funding disinformation

Online ads from top global firms keep appearing alongside dangerous mis- and disinformation. But firms deny responsibility for financing harmful content. To blame is an opaque system for buying and selling ad space known as programmatic advertising. It’s high time for a rethink.

Ads are everywhere online. Leaping out at us from websites, littering our social media feeds, interrupting our favorite shows or podcasts. But none of us are seeing the same commercials. Algorithms decide what we see, and when, based on data collected about us.

So, how does it work?

High on my summer reading list was a book called The Death of Truth by Steven Brill. Essential reading for anyone battling mis- and disinformation, it offers a deep dive into the forces and structures that fund and disseminate potentially harmful content — and what to do about them.

One line of investigation caught my eye: the problem of ads placed through programmatic advertising systems. Invented in the late 1990s, this is now the dominant means of online advertising. For years, the practice has been inadvertently funding mis- and disinformation.

In his book, Brill lays out a simplified case study showing how a firm’s advertising campaign is typically unleashed on the internet via mechanisms known as demand- and supply-side platforms — think of them as a stock exchange for buying and selling online advertising space.

Launching an ad online via these platforms involves a complex set of choices. The aim for ad agencies is to reach their target audience at the best price. They are not looking for specific places to advertise. Instead, they bid for access to targets — us internet users.

Agencies begin by defining their target audience for their campaign. Vast amounts of user data allow advertisers to filter targets by a wide range of characteristics — sex, income, education level, place of residence, interests, political and religious affiliation, or browsing history.

Next, agencies place a bid on the platform based on the maximum their client is willing to pay to reach that target audience over a certain period across tens of thousands of sites. This is measured in impressions — the number of times the ad is seen by a target in a set time period.

The “ad stock exchange” then canvasses available advertising space across millions of sites around the web. Within seconds, it returns with the cheapest offer — an instant auction on a colossal scale. Algorithms decide where ads are placed, and ultimately who profits.


Graphic: UN/Zachary Danz
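
To make the mechanics concrete, here is a deliberately simplified sketch in Python of the bidding flow described above. The names (Campaign, Impression, run_auction) and the numbers are invented for illustration; real demand- and supply-side platforms are vastly more complex. The point it demonstrates is that nothing in the auction itself looks at what kind of site the winning ad will appear on.

```python
# A minimal, hypothetical sketch of the bidding flow described above.
# Campaign, Impression and run_auction are invented names, not any real ad tech API.

from dataclasses import dataclass, field

@dataclass
class Campaign:
    advertiser: str
    max_cpm: float                               # the most the client will pay per 1,000 impressions
    target: dict = field(default_factory=dict)   # e.g. {"country": "DE", "interest": "fitness"}

@dataclass
class Impression:
    site: str            # the site where the ad slot lives
    user_profile: dict   # data collected about the user behind the slot

def matches(campaign: Campaign, imp: Impression) -> bool:
    """A campaign bids only if the user profile matches every targeting key."""
    return all(imp.user_profile.get(k) == v for k, v in campaign.target.items())

def run_auction(imp: Impression, campaigns: list[Campaign]):
    """Pick the highest bidder among matching campaigns.
    Note that nothing here inspects what kind of site imp.site actually is."""
    bidders = [c for c in campaigns if matches(c, imp)]
    return max(bidders, key=lambda c: c.max_cpm, default=None)

campaigns = [
    Campaign("GlobalBrand", max_cpm=4.50, target={"country": "DE", "interest": "fitness"}),
    Campaign("LocalShop", max_cpm=2.00, target={"country": "DE"}),
]

# The same targeted user can be reached on a trusted outlet or a "pink slime" site;
# the auction only sees the profile and the price.
for site in ["trusted-local-news.example", "pink-slime-site.example"]:
    imp = Impression(site=site, user_profile={"country": "DE", "interest": "fitness"})
    winner = run_auction(imp, campaigns)
    print(site, "->", winner.advertiser if winner else "no ad")
```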

Eye-watering sums are being spent on programmatic ads — an estimated $300 billion globally in 2023. Yet there is little to no human oversight of where that money lands.

For the most part, algorithms can’t distinguish between genuine local news sites and “pink slime” sites posing as news to peddle disinformation. Algorithms can’t tell the difference between a United Nations public health campaign and a made-for-advertising site created to generate ad revenue.

What’s more, advertising experts have started to question the widely accepted narrative around the accuracy, usefulness and cost-effectiveness of targeting based on behavioral tracking.

Some actions to address the problem have had unintended consequences. Blanket solutions — such as lists of banned words meant to keep ads off the worst sites — can end up penalizing reliable information sources and undermining quality journalism.
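
Here is a toy illustration of that failure mode. The word list and headlines are invented, and no real brand-safety product is quite this crude, but it shows the underlying problem: hard-news vocabulary triggers demonetization while bland junk content sails through.

```python
# A toy illustration of a keyword blocklist; the words and headlines are invented
# and no real brand-safety product is this crude, but the failure mode is the same.

BLOCKED_WORDS = {"shooting", "coronavirus", "war", "attack"}

def blocked(headline: str) -> bool:
    """Naive brand-safety check: refuse to place ads if any blocked word appears."""
    words = {w.strip(".,!?:").lower() for w in headline.split()}
    return bool(words & BLOCKED_WORDS)

headlines = [
    "City council approves new coronavirus vaccination clinics",      # quality local reporting
    "Exclusive: the miracle cure they don't want you to know about",  # dubious, but passes
]

for h in headlines:
    print("demonetized" if blocked(h) else "ads allowed", "->", h)
```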

So, what can be done about it? The tech platforms can do quite a bit:

Enforce advertising policies. Platforms should establish, publicize and enforce clear and robust policies on advertising and the monetization of content. They should review existing publisher and advertising tech partnerships on an ongoing basis to assess whether such policies are upheld by partners in the ad tech supply chain. They should also publicly report annually on the effectiveness of policy enforcement and any other actions taken.

Demonstrate advertising transparency. Platforms should clearly mark all adverts, making information on the advertiser, the parameters used for targeting and any use of AI-generated or mediated content transparent to users. Maintain full, accessible, up-to-date and searchable advertising libraries with information on the source or purchaser, how much was spent and the target audience. Give detailed data to advertisers and researchers on exactly where adverts have appeared in any given timescale, and the accuracy and effectiveness of controls and services around advertising placements and brand adjacency. Undertake transparent reporting regarding revenue sources and sharing arrangements with advertisers and content creators. Clearly label all political advertising, including to indicate content that has been AI-generated or mediated, and provide easily accessible information on why recipients are being targeted, who paid for the adverts and how much.

Advertisers can also do their part. They can exert singular influence on the integrity of the information ecosystem by helping to cut off financial incentives for those seeking to profit from disinformation and hate. In doing so, advertisers can better protect their brands and address material risk, boosting their bottom line while conducting business in line with their corporate values.

Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure that ad budgets do not inadvertently fund disinformation or hate or undermine human rights. They can require ad tech companies to publish the criteria that a website or channel must meet before it can monetize.

Advertisers can require data. They can establish a full and detailed overview of advert adjacency on an ongoing basis, requiring granular data showing where adverts have appeared and conducting suitability reviews before advert placement. They should carry out thorough audits of advertising campaigns.
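
As a sketch of what such an audit could look like in practice, the snippet below groups placement-level spend by domain category and flags unknown domains for suitability review. The domain lists, categories and data format are assumptions for illustration; a real audit would work from exports or reporting APIs provided by ad tech partners.

```python
# A minimal sketch of an advert adjacency audit, assuming the advertiser can obtain
# placement-level data (domain, impressions, spend) from its ad tech partners.
# The domain lists, categories and data format are assumptions for illustration.

from collections import defaultdict

TRUSTED = {"trusted-local-news.example", "publichealth-campaign.example"}
FLAGGED = {"pink-slime-site.example", "made-for-advertising.example"}

placements = [  # in practice this would come from a log export or reporting API
    {"domain": "trusted-local-news.example", "impressions": 120_000, "spend": 540.00},
    {"domain": "pink-slime-site.example",    "impressions":  80_000, "spend": 310.00},
    {"domain": "unknown-blog.example",       "impressions":  15_000, "spend":  60.00},
]

def audit(rows):
    """Group spend by category so the advertiser can see where the budget landed."""
    totals = defaultdict(float)
    review_queue = []
    for row in rows:
        if row["domain"] in TRUSTED:
            totals["trusted"] += row["spend"]
        elif row["domain"] in FLAGGED:
            totals["flagged"] += row["spend"]
        else:
            totals["needs review"] += row["spend"]
            review_queue.append(row["domain"])
    return dict(totals), review_queue

totals, review_queue = audit(placements)
print("Spend by category:", totals)
print("Domains needing suitability review:", review_queue)
```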

This will not only help strengthen information integrity, but also boost their bottom line, helping them see a better return on investment for the brands they represent.

The programmatic advertising problem is just one of many issues we must address if we are to build a more humane information ecosystem — one that no longer incentivizes harmful misinformation, disinformation and hate speech, and that promotes human rights for all.

My team and I recently launched the United Nations Global Principles for Information Integrity, a set of recommendations to guide action for a healthier information ecosystem. The Global Principles highlight the key role of all those involved in advertising. It is my great hope that they act as a beacon for change.

Alain Steinberg

Strategic Communications and Branding Expert | Managing Partner at Page in extremis

3 months ago

This is such an eye-opening post! The intricacies of programmatic advertising and its unintended consequences on the spread of misinformation are indeed alarming. Steven Brill’s “The Death of Truth” sounds like a must-read for anyone interested in understanding and combating this issue. It’s crucial for tech platforms and advertisers to take more responsibility and enforce transparency to ensure a healthier information ecosystem. The United Nations Global Principles for Information Integrity seem like a promising step forward. Thanks for sharing these insights!

Leyla Werleigh-Pearson

Freelance International Consultant Humanitarian Aid, Conflict and Development

3 months ago

I agree!
