Can Advertising Destroy Hate?
When it comes to fixing bigotry, harassment and toxicity in media and social media, the buck may just stop with advertisers.
When the UN investigated the possible genocide of Rohingya Muslims in Myanmar in the spring of 2017, the instigating cause they found wasn’t a struggle for power between warlords or ancient tribal factions. It was the second largest advertising platform in the world.
Yanghee Lee, lead UN investigator in Myanmar, explained how “ultra-nationalist Buddhists have their own Facebooks and are really inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities.”

If you’re willing to go down the rabbit hole another few feet, you can easily draw the conclusion that the world’s biggest advertisers financed the killing of thousands in Myanmar. Shocking as that might seem, it’s not that much of a stretch.
Late in 2016, alarmed by the rising tide of hate in the country, I visited Breitbart News for the first time. The articles published there, like “Hoist It High and Proud: The Confederate Flag Proclaims a Glorious Heritage” and “Birth Control Makes Women Unattractive and Crazy,” shocked me. But to someone who had been in the advertising business for 25 years, what was even more upsetting was that ads for major corporations were running right next to them.
As it turned out, programmatic advertising, the complex and opaque system by which ads are bought on the Internet, was the culprit. Advertising giants like Google and Facebook take an advertiser’s money and then spread their ads to sites on their advertising networks. Breitbart was, and still is, one of those sites.
Days later, I opened an anonymous Twitter account with the admittedly odd name of Sleeping Giants, took a screenshot of a company’s ad next to one of those egregiously horrible articles and tweeted it to the company’s corporate Twitter account. To my surprise, they got back to me in short order. Their answer was even more surprising: They had zero idea they were advertising on that site, and they would take measures to remove the ad immediately.
Nearly two and a half years later, Sleeping Giants, now with over 325,000 followers and chapters in 11 countries and territories, has notified thousands of advertisers about their placement on Breitbart. Over 90% have thanked us for letting them know, and over 4,200 of them have chosen to stop advertising on the site. Yet despite clear evidence that Breitbart breaks the Terms of Service of their advertising networks, Google, Facebook and even Amazon continue to serve their customers’ ads to the site without those customers’ knowledge.
This isn’t just an isolated problem. It is endemic to the entire online media and advertising ecosystem.
We are currently in the center of a media explosion. Three hundred hours of video content are uploaded to YouTube every minute. On Twitter, 6,000 tweets are sent every second. There are 1.5 billion websites on the Internet. How could anyone, even these massive social media companies, ever monitor this volume of information?
The answer is quite simple: They can’t. Especially when it doesn’t pay for them to do so. The more outrage, the more clicks. The more clicks, the more money. The more money, the happier the investors. Their business model requires it.
It is this lack of oversight and the laissez-faire attitude towards content moderation that enable the worst elements to spread like wildfire on and, yes, off these social platforms. White supremacists, misogynists, and conspiracy theorists are able to abuse these platforms, and the people who use them, with impunity. So it’s no wonder that ads from 300 companies, including adidas and 20th Century Fox, landed on neo-Nazi and conspiracy videos on YouTube just last year, or that ads from companies like AT&T and Disney appeared on videos riddled with pedophilic comments just a few weeks ago.
For a long time, social networks were able to get away with this. They could continue to host metric tons of content from both legitimate and potentially dubious sources while also collecting billions of dollars in ad money. And, frankly, they still do. But the boilerplate “free speech” argument they have leaned on time and time again for years is starting to crumble under scrutiny.
The never-ending drumbeat of “free speech” arguments is a dishonest one that benefits both the social networks and those who use them to abuse others. For foreign governments who would like to use the platforms to sow discord, white supremacist groups looking to recruit new members and conspiracy theorists looking to push their ideas on a wider audience, it is a way to “work the refs” and stay on the platforms. For the social networks, it is just an excuse to chase more clicks and more engagement. But it obscures the obvious: These are businesses whose only goal is to make money for their shareholders through advertising dollars. Free speech protects us from our government. It doesn’t apply to social networks one iota.
So how does this change? How do you hold social media companies with no competition, no discernible moral or ethical responsibility, and no real impetus to change...to change?
Follow the money.
We are finding out quickly that, as users of social platforms, we are the product. Our data is sold to the highest bidder. Unless we all quit en masse, which is highly unlikely, we don’t have a vote. After all, Kentucky Fried Chicken doesn’t ask for input from its chickens on how to run the business. But in the absence of our voices being heard, there are others who do have a seat at the table: the ones paying the bills.
Advertisers, for a long time, have had two main measurements of media: Reach and Frequency. It’s time for another: Responsibility.
As we’ve found over the last two years of our campaign, brands generally want to be associated with the positive. Because they want to attract as many different consumers as possible, they don’t like to be adjacent to content that denigrates others. Many of these corporations depend on customers and employees from various backgrounds, so they don’t want to support messages that divide people based on race, religion or sexual orientation, either. So what’s good for a brand is also good for society.
Everything that we interact with today, from Instagram posts to apps on our phones to our emails, is paid for with ad dollars. Everything. So if you’re looking for anyone to stem the tide of bigotry and harassment in our media ecosystem, it might just be the advertisers who hold the key.
Advertisers heading into the turbulent waters of social media should ask themselves the same questions they would ask before spending their dollars on any other media source:
Would we advertise on a television show that recruits white supremacists?
Is it a good idea to sponsor a radio program that encourages harassment of individuals and broadcasts their home address?
How about an anti-woman podcast?
Should we sponsor a TV series by a foreign adversary?
Recent research by the CMO Council suggests that 48% of consumers believe that an ad adjacent to negative content constitutes an endorsement.
Research on consumer behavior related to brand adjacency provides more than solid ground for advertisers to demand wholesale changes to social platforms when no one else can.
Among these demands should be, for the first time, clear, explicit and unequivocal Terms of Service on hate, harassment, the exposure of personal information and disinformation, along with simple-to-follow enforcement policies. The current rules leave too much grey area for interpretation, causing consternation for anyone who is banned without sufficient explanation.
Just two weeks ago, in fact, many of New Zealand's major advertisers pledged to leave Facebook after a gunman slaughtered 51 Muslims in the middle of prayer at two mosques in Christchurch and live-streamed the horrific attack on the platform. And who can blame them? Because of the carelessness and opacity with which Facebook treats its advertisers, any of those brands could have appeared next to that video. Just this week, Facebook announced limits to its live-streaming function, ensuring that only certain accounts will be able to access it. This can and should be a model for how advertisers hold social platforms responsible in the future.
Advertisers should also finally hold platforms accountable for troll farms and “bots”, which not only create a toxic, divisive environment but also artificially inflate the audience numbers that set advertising rates.
Without question, advertisers should also insist on a multiple of the current number of moderators on these platforms, as it takes days, sometimes weeks, for those who have reported violations of Community Standards to see any action taken.
This is not to say it will be an easy task. As social media executives like Mark Zuckerberg and Jack Dorsey have proven in evasive-at-best testimony to Congress and in the press, they have little desire to take a holistic approach to the tremendous problems on their platforms. Add to that the fact that there are fewer places today to reach consumers, and it will take a concerted effort to turn the ship.
But make no mistake, change is coming. If the week-to-week revelations about breaches of trust by social media companies continue, intervention will come, whether through government regulation or through competition. It would behoove these companies to begin taking responsibility for the abuse of their platforms on their own.
Until then, no one is in a better position to force these changes than those that keep the lights on: The advertisers. We are all counting on it.
Just ask the Rohingya in Myanmar.