It's been 24 years since Internet companies were declared off the hook for the behavior of their users. That may change, and soon.
(Cross-posted from Signal360)
In a sweeping talk at the Association of National Advertisers conference last month, P&G Chief Brand Officer (and ANA Chair) Marc Pritchard laid out a five-step plan to address systemic problems in the marketing and media industries. Each step addresses serious challenges and opportunities — in diversity, inequality, and creative and business practices. But perhaps no step is more challenging — and crucial — than Pritchard’s Step Four: Eliminating all harmful content online.
“There is still too much harmful, hateful, denigrating, and discriminatory content and commentary in too many digital sites, channels, and feeds,” Pritchard said. “There is no place for this type of content.”
While nearly everyone agrees with the idea of eliminating harmful content, key actors across the digital media industry seem paralyzed when it comes to how best to take action on the problem. What’s really going on? To understand, we must dive into the early formation of the Internet industry in the United States, and the role the First Amendment plays — to this day — in shaping an increasingly contentious debate on how to regulate digital speech.
But, First, a Bit of History
When the Internet was in its early stages as a commercial medium more than 25 years ago, a moral panic erupted in the United States following the publication of a Time magazine cover story. Titled “Cyberporn” and featuring a terrified child staring aghast into the blue light of a computer monitor, the story claimed — falsely, as it turned out — that the majority of images on the then-novel medium consisted of pornography.
Internet service providers were to be treated like the phone company … not held responsible for the speech of their customers.
Congress quickly took up the cause of cleaning up the Internet and passed the Communications Decency Act of 1996, which banned the transmission of “obscene or indecent” content in a manner that might be seen by children. Inspired by the decency standards applied to the television broadcasting industry nearly fifty years before, the legislation also included an important liability protection for the hosting or transmission of such content: Internet service providers were to be treated like the phone company, and would not be held responsible for the speech of their customers.
This landmark legislation left two crucial legacies for today’s Internet. First, it sparked massive protests over the definition of “obscene and indecent” material, culminating in a 1997 Supreme Court decision overturning the bulk of the Communications Decency Act on First Amendment grounds. Second, “Section 230” — as the liability protections were commonly known — would be canonized, attaining a nearly sacrosanct place in Internet law.
In that two-year period from 1995 to 1997, the groundwork was laid for the rise of the modern Internet — a public square with minimal constraints on speech, supported by a robust commercial business model driven, in large part, by advertising. Section 230 ensured that providers of Internet services would not be held accountable for whatever impact speech might have on business and society. Today’s Internet giants — including Google, YouTube, Facebook, and Twitter — flourished as a result.
Whose Speech, Which Freedoms?
A quarter century later, however, business leaders, policy makers, and even the leaders of Internet platforms are reconsidering the role of speech in civil society. In his speech to the ANA, P&G’s Pritchard reminded his audience that the FCC eventually came to terms with the nascent broadcasting industry, setting decency and other key standards early in the medium’s development. Not so for the Internet: “It’s astounding that thirty years into the Internet age with $300 billion of media spending monetized,” Pritchard said, “we are still operating with very few boundaries other than those that are self-imposed or that marketers try to enforce.”
The platforms have been locked in an increasingly contentious standoff over self-regulation.
If the industry doesn’t solve the problem, the government may soon try its hand. Politicians on both sides of the aisle are already calling for reforms to Section 230. President Donald Trump has signaled his intention to reform Section 230 by signing an executive order asking federal agencies to “reinterpret” the law, and Democratic candidate Joe Biden has gone one step further, calling multiple times for the full revocation of the statute. While the two candidates have quite distinct political goals in attacking Section 230 — Trump is angry that Twitter and Facebook have labeled his posts as untruthful, and Biden seeks to bring misinformation to heel across the Internet — the resulting debate will certainly find a prominent place in either candidate’s administration next year.
Stuck in the middle of the debate are Internet giants like Google and Facebook and their major advertising clients, like P&G. For nearly a decade, the platforms have been locked in an increasingly contentious standoff over self-regulation, culminating in ongoing work through the Global Alliance for Responsible Media (GARM). GARM next meets in mid-October to address these issues. It will certainly have to grapple with the state of play in the US, but that “G” does stand for “Global” — and plenty of other countries will be weighing in on this issue in the coming year.
While many point out that a standoff over regulation benefits the platforms’ business, Pritchard and his colleagues have every right to demand brand-safe media. And the Internet platforms feel justified in declaring themselves neutral when it comes to enforcing what constitutes appropriate speech — they’d prefer that the government lay down the law, not Mark Zuckerberg. On that point, it’s hard not to find agreement.
A Way Forward?
Many marketing leaders are increasingly tired of waiting for the government or industry to act, and have begun to express their displeasure by joining month-long boycotts or abandoning social media altogether. But the marketing industry is now dependent on the power of the Internet platforms to drive their business results — and to not leverage data-driven, identity-powered marketing tools is to cede competitive advantage in the marketplace.
It’s clearly possible for digital platforms to build systems and processes which create safe, well-lit places for brands.
Instead, P&G’s Pritchard has pointed to how his company negotiated with its partner YouTube a few years back, establishing safe-listed channels validated by third-party audits. That process proved that it’s clearly possible for digital platforms to build systems and processes which create safe, well-lit places for brands — regardless of whether or not the government decrees that they do so.
Competition might compel those same platforms to action as well. Newer players like Snap, TikTok, and others are already touting their relatively brand-safe environments, and we can expect more of the same in the years ahead, including programmatic offerings from federations of smaller digital publishers, most of which are “brand safe” from the start. As Pritchard points out in his call to action, there is much work left to do. But when it comes to cleaning up harmful content in the media supply chain, 2021 could well be one for the history books.