Misinformation and Moderation in Social Media: Implications for Publishing Platforms in the Era of Fake News
Fake news is no longer new news—but that doesn’t mean the media landscape around truth has stopped evolving.
In the current environment of misinformation, social media users and marketers have begun to challenge how social platforms handle misinformation and fake news, particularly as it contributes to hate speech. Most notable is the advertising boycott sparked by Facebook's permissive approach to posts from President Donald Trump, run via his campaign manager's page as a means of circumventing the platform's political advertising policy. Brands now face a decision: whether to take a stand against such content through their advertising behavior going forward.
The “Stop Hate for Profit” initiative to boycott advertising on Facebook and other social platforms, launched just weeks ago, is spearheaded by a coalition that includes Color of Change, the Anti-Defamation League, the NAACP and other civil rights and advocacy groups. The coalition charges that Facebook has allowed speech that "incites violence against protesters fighting for racial justice" and calls on corporate America to stand up against Facebook's policies. While some companies have only stated their intent to pause Facebook ads through July, many others have already promised to halt social activity across multiple platforms through the end of the year, or even until meaningful action is taken. With advertising accounting for more than 98% of Facebook's nearly $70B revenue in 2019, and an estimated $70M (and counting) loss in advertising revenue for the month of July, the boycott sparked an immediate reaction.
It remains to be seen what impact this will have on Facebook’s policies. Following meetings this week with members of the coalition, representatives were quoted as saying “[Facebook] showed up expecting an ‘A’ for attendance” and “approached our meeting today like it was nothing more than a PR exercise.” Yet Zuckerberg has said he’s not too worried about the boycott.
The truth is supposed to be absolute—but who is the arbiter on social platforms?
While debate around platform-regulated content moderation is not new, Twitter’s stance on a manipulated video of former Vice President Joe Biden endorsing Trump—and the subsequent expansion of their warning system to cover misinformation about COVID-19 and fraudulent tweets about mail-in ballots—added enough fuel to the fire to spark a reaction from Trump. In response to these actions from Twitter, in late May he issued the Executive Order on Preventing Online Censorship. The Order accuses online platforms of “engaging in selective censorship that is harming our national discourse” and Twitter of political bias in the warning labels—a clear escalation in the confrontation between Twitter and the President.
By contrast, Facebook—where Trump made identical posts, and advertises aggressively—has focused on a policy of noninterference, choosing not to act on the same posts Twitter was flagging. CEO Mark Zuckerberg said, “Facebook shouldn’t be the arbiter of truth of everything that people say online.” In response, hundreds of Facebook employees staged a virtual walkout in protest of the company’s stance. By the time Zuckerberg defended his decision the next day, there were already several public resignations and an open letter published in the New York Times.
Facebook did announce late last month that it would begin labeling and removing posts that incite violence or suppress voting, and many other large social media platforms, such as YouTube and Reddit, have followed suit. Even so, the company has not fully addressed the recommendations from #StopHateForProfit.
The moderation of social content has a domino effect—and we ultimately determine the direction.
The questions raised by the conversation around content moderation span far beyond Facebook and Twitter, or even other social and paid media platforms. All platforms will ultimately have to consider how to regulate what users post. And as their regulations change, will social media become so benign that it loses its appeal? Or will it instead skew toward being the Wild West of content?
Beyond just social and paid media platforms, think of the balance of power traditional media stands to gain or lose as this situation works itself out. Do social media platforms like Facebook and Twitter now become publishers? A source of truth?
Some argue that it may be time for algorithmic transparency; I question whether that would really change things. For now, these platforms are left to self-regulate. So, is it time for more extreme measures, such as moderation powered by artificial intelligence?
But this isn’t about technology. It’s about what we as a society will allow.
One of my favorite quotes is from Martin Luther King Jr., “The time is always right to do what is right.”
So, I guess the real question is—what is right?
Rob Oquendo is the Chief Innovation Officer at Spectrum Science, an integrated marketing, communications and media agency hyper-focused on science.