Reddit’s zero-tolerance policy shows how to deal with hate speech
Enrique Dans
Senior Advisor for Innovation and Digital Transformation at IE University. Changing education to change the world...
On June 29, Reddit launched its new content policy, which has resulted in the closure of almost 7,000 forums (known as subreddits), some as famous as The_Donald, which offered a space for some 800,000 Donald Trump fans to create, share and spread conspiracy theories, along with racist, misogynist, Islamophobic and anti-Semitic content.
Almost two months later, the company has published statistics showing a significant reduction, 18%, in the number of comments classified as hate speech, removed either by human moderators or by AutoModerator, its automated moderation tool. The removed subreddits were visited by about 365,000 users every day, and fell into three categories: forums with names or descriptions that directly alluded to hate speech, those with a large amount of such content, and those that shared such content.
The results of Reddit’s tougher policy are extremely encouraging: while many users have tried to keep hate speech going by creating more sophisticated content, it’s clear that providing moderators with clear guidelines, and removing forums whose moderators are not active, works, contributing measurably to the health of the conversation.
The fact that such a change is working in an environment like Reddit, which is heavily used and has a completely open username policy, whereby users can call themselves whatever they want and create as many accounts as they want, shows that fighting hate speech is not necessarily about removing anonymity. The obsession some people have with verifying accounts to ensure they belong to a particular person is often a way of constraining, conditioning and impoverishing the conversation: in many cases, people simply cannot express their opinions, whether because of social pressure or the likely repercussions of revealing their identity.
Ending hate speech means designing clear mechanisms that allow anyone to report hate speech and have the accounts associated with it removed and canceled, systematically and repeatedly, regardless of how often they seek to reappear in a new guise. It is a question of clarity, and above all, of not prioritizing, as some social networks clearly do, the traffic or income derived from hate speech. It is simply a matter of excluding from the conversation those who are not capable of behaving appropriately.
Reddit has been one of the most toxic pages on the web for a long time, but the company is now clearly on the right track. Let’s hope it sets an example.
(In Spanish, here)