Ending Platform Hate: It's Not Too Late
This week, as Twitch joins other social media platforms in a descent into lawlessness and faces community pressure to change its ways, I propose that all social media platforms adopt a common set of governing principles they must follow if they continue to offer free content to all members.
Until now, the platforms have eschewed enforcing rules in order to increase traffic, and thereby advertising revenue. Without rules, abhorrent behavior, discrimination, hacks, political espionage, and hate speech (along with outright fraud) flourish. The following principles, which apply to every type of social media platform, cover the basic rights of participants to engage in free speech and of the platform to arbitrate hate speech while upholding some standards of decorum. Versions of these simple rules are known and used by every working online community manager and moderator.
They are:
1. Constructive dialogue is welcome here.
2. Bullying, trolling, hate speech, outright threats, veiled threats, and any other attempt to endanger someone else’s safety will not be tolerated.
3. Be vigilant and mindful in your participation here. Proceed at your own risk, think before you post, and do not post attacks or provoke them.
4. This platform reserves the right to take action (suppression, suspension, expulsion, or, in extreme cases, referral to law enforcement) against any participant who violates Rule #2, at our discretion and in accordance with our Terms of Service. We also reserve the right to change our Terms at any time to meet the changing needs of this platform.
5. If you see content that violates our rules, please go here <link> to report it. Our moderation team will review your report within XX business days.
Past controversies at Twitch, Facebook, Twitter, YouTube, and Instagram, along with episodes such as Gamergate, prove that in the absence of adequate rules and enforcement, people get hurt. The above principles not only allow for free speech (“dialogue is welcome here”) but also place equal accountability on participants to take part responsibly (“be vigilant and mindful”). They also allow the platforms themselves to define the nuances of hate speech, attacks, and provocation in their Terms of Service, which all members typically agree to before joining.
Facebook just settled a landmark case, paying contract moderators $52 million for the post-traumatic stress disorder they suffered from viewing and moderating disturbing content. How might the above rules have protected the company from explicit content, and the moderators from the ensuing pain?
The nuances of my proposal lie in Rules #4 and #5. Who adjudicates content, and how, whether by human reviewers or artificial intelligence, will be critical. The backbone of Rule #4 must be built with diversity and sensitivity, along with clearly defined enforcement rules, edge cases, and exceptions. The more transparent that backbone is, the more trustworthy it will be.
Rule #5 depends entirely on a platform’s willingness to run a live complaints box and to provide a clear service-level agreement to all members as they join. Platforms should use a simple algorithm to staff this “inbox” based on the volume and severity of reports: the more there are, the more resources must be allocated to adjudicate them.
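The staffing rule above can be sketched in a few lines of code. This is a hypothetical illustration, not any platform's actual policy: the function name, the severity weights, and the reports-per-moderator capacity are all assumptions invented for the example.

```python
import math

def moderators_needed(reports, reports_per_moderator=50):
    """Estimate moderator headcount from (count, severity) pairs.

    severity is a weight: routine reports count as 1.0, severe reports
    more, so severe content consumes more review capacity. The default
    capacity of 50 weighted reports per moderator is an illustrative
    assumption, not a real benchmark.
    """
    weighted_volume = sum(count * severity for count, severity in reports)
    # Round up so a partial workload still gets a moderator, and always
    # keep at least one person on duty.
    return max(1, math.ceil(weighted_volume / reports_per_moderator))

# Example: 100 routine reports plus 10 severe ones (weighted 5x)
# yields a weighted volume of 150, i.e. 3 moderators at 50 per head.
print(moderators_needed([(100, 1.0), (10, 5.0)]))  # → 3
```

In practice the weights and capacity would be tuned from real moderation data, but the shape of the rule matches the article's point: staffing scales with both how many reports arrive and how severe they are.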
It's not too late to save the platforms that have brought hundreds of millions of us closer to the distant and local worlds around us. By adopting a universal set of governing principles, the platforms will regain the trust, and the usage, of customers and advertisers alike. Platforms: please come together to create a combined and uniform mission against hate.
Shira Levine is a strategic marketing consultant living in Melbourne, Australia. For more information, contact her on LinkedIn or at Fanchismo.com.