The Age of Moderation: How Online Tech Platforms are Navigating a Critical Dispensation

The most glaring challenge facing online tech platforms in the current dispensation is not monetization, user data privacy, or user experience. All of these remain important goals for the sustainability of platforms like Facebook, Twitter, YouTube, and others. Online safety, however, has become the defining existential challenge of this era in the tech world, dating back roughly three years: how effectively platforms define safety policies, identify safety issues, and implement content moderation mechanisms that guard against the harms emerging from online community engagement.

For many years, platforms have deployed a range of measures to curb the rise in hate speech, racism, misinformation, child sexual exploitation,* and other pathologies inherent in online communities. These measures include human content moderators (thousands of employees and/or sub-contracted firms sifting through content and acting on violations), automated filtering (largely AI algorithms trained to identify, flag, and/or automatically censor content), and user-generated flags (regular users flagging other users' activity for onward review by content moderators).

For the most part, these measures have been effective gatekeepers against online misuse. But the surge in the volume and intensity of violations spurred by the Covid pandemic, with all the (alleged) misinformation and superstition surrounding the virus and the vaccines, has created a new content moderation landscape and, with it, an existential crisis for the platforms.

How deep is the crisis?

Is online safety really an existential crisis for online tech platforms?

There is no easy answer to this question. Platforms have navigated a slew of damning lawsuits and controversies over the last two decades; think of Facebook's Cambridge Analytica scandal or ByteDance's standoff with the US government over the protection of US user data. But beyond any of these isolated challenges, online safety is a general and fundamental question across all platforms, one that is creating significant pressure from key stakeholders.

The generality* and fundamentality* of content moderation underline the platforms' commons problem. While the very DNA of online infrastructure is to open up mass access for democratic participation, this paves the way for widespread misuse. The moderation challenge is therefore how platforms walk the tightrope between free speech (or participation in general) and harmful content.

Beyond this commons problem, the surge in online misuse at the onset of the pandemic compounded the difficulty for the platforms. Facebook reported a 192% increase in hate speech removals (content flagged as hate speech and censored) from 2019 to 2020, followed by a further 60% increase from 2020 to 2021. Facebook also reported a 189% increase in bullying and harassment removals between 2020 and 2021.


Chart: Facebook removals of hate speech and of bullying and harassment, by quarter. The rise in removals is apparent from Q4 2020, a period marked by the pandemic, lockdowns, and the first release of Covid vaccines. The data provided by Facebook reflects content censored (flagged and removed) on its platform rather than the total number of violations in these two categories. Source: Meta via Statista

This increase in censored content on Facebook reflects a genuine rise in hate speech, bullying, and harassment more than it reflects any notable improvement in moderation mechanisms.

The pandemic aside, political conflict has widely spurred online misinformation, harassment, and violence. On 19 December 2020, Donald Trump rallied his supporters over his disputed presidential election defeat by tweeting:

...Statistically impossible to have lost the 2020 election. Big protest in D.C. on January 6th. Be there, will be wild!

Following this digital town hall, thousands of people gathered to protest the election results and stormed the U.S. Capitol, resulting in five deaths and scores of injuries, including 140 police officers. A formal congressional inquiry later acknowledged the role Trump's tweet, with a reach of over 500 thousand within its first 24 hours, played in the deadly violence. Of course, the election results remained unchanged in the aftermath of this violence, Trump was banned from several platforms, and the dead remained dead.

This and other instances of online misinformation and misuse (pertaining to presidential elections around the world, climate change, wars, conflicts, and other issues) continue to polarise opinion and create real societal problems. The online discourse on the Russia-Ukraine conflict provides a good example. The pressure on platforms to ramp up moderation has never been more intense. Twitter's VP of Trust and Safety Product, Ella Irwin, reportedly claimed:

(Elon) Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety was the company's top priority

It's business unusual for online tech platforms. But what are the big factors shaping their content moderation strategies going forward? I unpack three key considerations below.

The increasing cost of (human) content moderation

Content moderation constitutes a significant overhead for platforms. The global content moderation services market was valued at over US$13 billion in 2022 and is projected to reach US$26.3 billion by 2031 (CAGR of 12.2%).


Chart: Global content moderation solutions market forecast. The market will continue to grow, driven by the inexorable growth in user-generated content and increasing regulatory pressure. Source: Allied Market Research

Big tech companies subcontract moderation firms like Genpact, CPL, and others to handle the messiest component of moderation: human content review. Through these firms, Facebook and YouTube report having up to 15,000 and 20,000 people respectively working in their global moderation teams. The cost of these operations runs into billions and will only increase, driven by the inexorable growth in user-generated content (UGC) and mounting regulatory pressure to improve moderation.

While it will not be possible to eliminate human moderation entirely, platforms must chart a roadmap toward scaling down these operations. The most potent route to doing so is enhancing automated content moderation.

There is a health and wellness dimension to this strategy as well. Content moderation takes a toll on the people who manually sift through and review thousands of pieces of disturbing content, and it has been associated with PTSD and other mental disorders among moderators. As a result, platforms have had to pay out millions of dollars in settlements to affected workers and treat this as a running obligation (Facebook paid up to US$52 million in a 2020 settlement).

The sustainability of human content moderation is questionable. Rising costs will continue to stifle profitability, its limited scalability will fail to keep up with the surge in UGC, and, most importantly, the health toll on moderators will not always be justifiable.

Growing pressure for more expansive algorithmic moderation

Algorithmic moderation is the future of online safety. This is not just a hunch based on a predisposition to digital solutionism: the use of machine learning algorithms to identify and censor violating content is the most innovative and sustainable response available to platforms in this crisis.

While human moderation cannot be scaled in line with the growth in active users and UGC, algorithms can be applied across billions of user posts. In algorithmic moderation, hateful speech and other 'tagged' violations are detected and automatically censored. Where the algorithm has 'low confidence' about a potential violation, the content is subjected to visibility filtering and flagged for human review. Newer elements of automation involve tagging content with crowd-sourced notes clarifying the facts behind a post, which is especially helpful against misinformation and fake news. AI can also proactively serve users content in line with their preferences, reducing the chances of egregious content appearing in their feeds. A minimal sketch of the confidence-based routing step appears below.
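
To make the 'confidence' step concrete, here is a minimal sketch, in Python, of how a post might be routed based on a classifier's violation score. The thresholds, labels, and the moderate() function are illustrative assumptions made for this article, not any platform's actual pipeline.

```python
# A minimal sketch of confidence-threshold routing in algorithmic moderation.
# The classifier scores, thresholds, and action labels are illustrative
# assumptions, not any platform's real system.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str        # "remove", "limit_visibility", or "allow"
    needs_human: bool  # True when the post is queued for a human moderator

def moderate(violation_score: float,
             remove_threshold: float = 0.95,
             review_threshold: float = 0.60) -> ModerationDecision:
    """Route a post based on the model's confidence that it violates policy."""
    if violation_score >= remove_threshold:
        # High confidence: censor automatically, no human review up front.
        return ModerationDecision(action="remove", needs_human=False)
    if violation_score >= review_threshold:
        # 'Low confidence' band: reduce reach and flag for human review.
        return ModerationDecision(action="limit_visibility", needs_human=True)
    # Below the review band: leave the post untouched.
    return ModerationDecision(action="allow", needs_human=False)

# Example: a post scored at 0.72 is down-ranked and queued for a moderator.
print(moderate(0.72))
```

The design point is that automation handles the clear-cut cases at scale, while the ambiguous middle band is throttled and escalated, which is where human moderators continue to add value.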

While there is no such thing as a perfect algorithm, the focus for platforms will be to grow the scope of algorithmic moderation and eventually minimize human involvement. The current downsides of algorithmic moderation include algorithmic bias, especially in AI recommendations: because of network effects, content is not served uniformly across the agglomeration of online communities, and these biases can worsen polarisation and divisiveness. Other downsides include the failure of algorithms to account for cultural nuance (what is acceptable in one culture may not be in another) and intent (hate symbols or words used in positive contexts may be wrongly censored). Facebook has said:

AI has allowed us to proactively detect the vast majority of this content (that violates its policies) so we can remove it before anyone sees it. But it's not perfect.

Regardless of these downsides, big tech companies should continue to invest in algorithmic and other automated solutions to scale content moderation appropriately. Platforms should prioritize operationalizing their digitalization roadmaps while continuously managing the attendant AI risks.

A progressively sophisticated regulatory landscape

The disparate application of online platform regulations across countries already poses a difficulty for big tech companies. In India, for example, the law allows the government to order platforms to take down posts over a wide range of violations and to obtain the identities of the users behind them. This is not permissible in the US, where freedom of speech is enshrined in the constitution. Regulatory authorities around the world are nonetheless increasingly calling for platforms to implement better moderation mechanisms.

While the platforms have been the arbiters of censorship over the years, there is increasing pressure for content moderation rules to be enforced by public authorities or regional blocs. Under the Digital Services Act (DSA), the EU's incoming content moderation law (expected to take effect in August 2023), platforms will be required to 'swiftly' take down illegal content, 'limit' disinformation, and better protect minors from online exploitation. Violations could see platforms fined up to 6% of their annual global revenue.

Following a one-on-one meeting with Elon Musk in December 2022, French President Emmanuel Macron tweeted:

Transparent user policies, significant reinforcement of content moderation and protection of freedom of speech: efforts have to be made by Twitter to comply with European regulations.

The stakes are higher than ever for tech companies to be accountable for what gets onto, and stays on, their platforms. The threatened ban of ByteDance's TikTok in the US (over the protection of US user data from Chinese authorities) is a testament to how committed countries are to enforcing laws on big tech companies.

Part of the solution to these regulatory pressures is for platforms to ensure that they have effective internal, and ideally shared, governance frameworks for managing online safety. Meta and Twitter have independently instituted oversight bodies to evaluate and enforce moderation policies and rules. Going beyond this, shared frameworks would give big tech platforms (despite being competitors in the online market space) a unified approach to, and collective bargaining power over, an industry-wide crisis.

In addition, regionalized content moderation strategies coupled with rigorous stakeholder management will help manage relations and position platforms and governments as working together to improve online safety. Big tech companies cannot afford to play cat-and-mouse with the authorities. These partnerships will require platforms to become transparent about their policies, rules, and algorithms in order to earn trust.

In conclusion

The stakes are high for big tech in managing content moderation, yet this is not a gamble: platforms must make big investments to ensure that online communities, whatever their agglomerations, are safe from the pathologies of online engagement.

The current regime of human-in-the-loop solutions is not sustainable, AI algorithms are not perfect, and regulations are tightening. Yet platforms are not altogether helpless in this dispensation. The integrated approach to moderation has already scored some big wins, for example in eliminating terrorist propaganda from organizations like Al-Qaeda and ISIS. This legitimizes big tech's technical expertise and the role the platforms play in protecting online communities.

The next critical stage is for big tech platforms to prove the promise of technological solutionism while delicately carrying the authorities and other stakeholders along.


Notes*

  1. An excellent award-winning academic read on 'Online Child Sexual Exploitation' can be found here, co-authored by my university professors, Dr. Dionysios Demetis and Dr. Jan Kietzmann.
  2. Generality - refers to content moderation as a foundational pillar for all platforms.
  3. Fundamentality - refers to content moderation being fundamental in ensuring the productivity of any community, including online ones.
  4. 'The Virtues of Moderation' by Dr. J. Grimmelmann is one of my favourite academic reads on the subject. You can find it here.


Please consider subscribing to my newsletter, 'Digital Bottomline', to get notified as soon as new articles come out. Through this platform, I aim to provide you with weekly insights on topical issues in the digital economy. Also consider following me here.
