The Battle Over Social Content
Aimee Meester
Chief Marketing Officer | 40 Under 40 Recipient | Forbes Business Council | Integrated Marketing Strategist | Client Experience Leader | Business Growth Architect | Speaker & Writer
In 1996, when most people didn’t fully grasp what the Internet was, Congress passed a law called the Communications Decency Act. Section 230 of that law includes the sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That sentence has recently come under fire from a myriad of political and social angles. Here’s why.
What Section 230 Means
This all started in the 1990s, when Stratton Oakmont (the brokerage firm whose co-founder Leonardo DiCaprio played in “The Wolf of Wall Street”) sued the online service Prodigy for defamation. An anonymous user had posted on Prodigy’s message boards accusing Stratton Oakmont of financial malfeasance, and the brokerage firm held Prodigy responsible for the claims.
The New York Supreme Court ruled that Prodigy was responsible for the posts published on its forums because it had moderated some posts and imposed rules, implying a conscious decision to leave the anonymous user’s post up. The ruling caught the eye of lawmakers, and the law passed the following year.
That short clause has been hailed as one of the cornerstones of today’s internet. The reason multi-billion-dollar platforms like Twitter, Facebook, and Google have been able to host content from billions of people across the globe is that they’re not considered the “publishers” of the information they host. They’re allowed to set rules about their content, moderate posts, and even ban some users and groups outright, but they’re not held legally accountable when their users post controversial material online.
Section 230 Under Siege
The law has been under attack from both sides of the political spectrum, but for very different reasons. For years, conservatives have accused major tech platforms of censoring or suppressing their viewpoints and maintain that the law should be repealed as a result. They argue that the spirit of the law is to create a neutral platform for free speech and that by disproportionately de-platforming conservative views, the tech giants are violating that spirit.
On the other side of the aisle, liberals have argued that these sites are allowing racist, misogynist, homophobic, and otherwise harassing content to be posted without moderation.
Where Marketers Come In
Tech executives have been called before Congress to answer for the content they allow to exist on their platforms, but the law is arguably not the primary incentive pushing these tech titans to adjust their policies.
In 2020, Facebook is projected to bring in just shy of $81 billion in ad revenue. Google is well behind at about $40 billion, and Twitter brings in about $2 billion of its own. These three platforms, along with Instagram, Snapchat, and a few dozen others of various sizes, account for the majority of the $124 billion spent on online advertising every year; banner ads and other placements are a tiny portion of the total.
In 2020, with the extreme social unrest and political upheaval that the U.S. experienced, some advertisers decided to vote with their budgets. Pressed by the Anti-Defamation League, the National Association for the Advancement of Colored People, Color of Change, and other groups, hundreds of companies, including Unilever and Best Buy, pulled their advertising from Facebook over the platform’s perceived failure to properly police hate speech and misinformation.
More big companies followed: Starbucks, Diageo, Levi Strauss, Honda, and Patagonia all pulled their ads in the summer of 2020. While these boycotts generate a lot of attention, it’s hard to tell whether they’re actually making a difference. Facebook’s stock dropped in March but has since rebounded and is up 33 percent on the year. The fact is that while only a few companies shell out millions on Facebook ads, millions of smaller companies rely on Facebook for its reach and targeting power.
Still, the tides seem to be shifting. On October 21, TikTok said it would be stepping up its efforts to root out hateful content, including neo-Nazi and white supremacist ideologies, male supremacy and Identitarianism, and content that’s hurtful to the LGBTQ+ community. Facebook banned the far-right conspiracy movement QAnon across its platforms a month before the presidential election. And Twitter banned all political advertising a year ago, prompting Facebook employees to entreat Zuckerberg to do the same (a request he denied).
So the question remains: who should be responsible for the content that gains traction on social media? Should content be dictated by law or by the free market? And should the executives of major tech sites be held responsible for the way their users utilize these forums?
Platform Neutrality Is Vital
Social media platforms aren’t traditional publishers, and it doesn’t make sense to treat them that way. While it is entirely feasible for a publisher like the New York Times to moderate and investigate stories and messages from its roster of journalists, it’s unrealistic for Facebook to moderate the output of more than 2.74 billion monthly active users worldwide.
The New York Post’s October 2020 story about the alleged contents of Hunter Biden’s laptop is a prime example of how social media companies lack the ability to moderate content effectively. Twitter’s reversal of its decision to block the story, and its subsequent policy adjustments, demonstrate that without adequate information or time to investigate, the platform depends on AI or individual moderators to make critical content decisions; both failed in this example. Users are left with questions about politically motivated censorship. Social platforms are simply not equipped to act as arbiters of truth.
Artificial intelligence systems designed to automate at least part of the moderation process are a stopgap solution at best. These systems often create as many problems as they solve, making it difficult, if not impossible, for legitimate small- and medium-sized businesses to function in the social media space. As a marketing agency, we feel this impact directly: at least one of our clients is mistakenly flagged for advertising violations every month, at which point we have to request a manual unlock of the account on their behalf, all because the AI system doesn’t work as intended. We have the resources, knowledge, and contacts to work around this issue, but millions of others don’t.

In the end, maintaining content neutrality standards among the social media platforms is essential, not only to ensure fair usage and avoid the kind of costly censorship errors our clients regularly experience, but also to make sure we aren’t granting social media platforms the authority to determine what information is appropriate for the public to consume. Outside of limiting content for clear violations of law or the free speech limitations established in judicial precedent, the users of these platforms are capable of making that determination for themselves.