Curbing misinformation
It is well recognised that social media is widely abused: election manipulation, the spread of misinformation, disinformation and conspiracy theories, libel, and the harassment of individuals. For instance, there have been cases where individuals have libelled political figures without any factual basis. Reputation blackmail, where false claims are made against individuals or companies who are then coerced into paying for their removal, has also become a concerning issue. The primary recourse for victims of such abuse is to sue the perpetrators, which is expensive and often impractical because fake identities are so easy to create on social media platforms.
To address this problem, a shift in the classification of social media companies is necessary. Currently, these companies claim to be "platforms" and argue that they are not responsible for the content shared on their platforms. They rely on legislation designed for telecommunications to support this claim. However, it is important to recognize the difference between social media companies and traditional telecommunication services. Social media platforms not only facilitate communication among small groups but also enable individuals to reach an unlimited number of people through their posts. In this regard, social media companies function as publishers and should be subject to the same legal obligations as publishers.
Publishers are held accountable for the content they publish, and if they disseminate defamatory or misleading information, they can be sued. For instance, when Fox News made baseless claims about rigged voting machines in the 2020 election, they faced lawsuits from the voting machine companies, resulting in significant financial consequences. Regulating social media companies as publishers would incentivize them to be more diligent in monitoring and filtering out libelous, misleading, and erroneous information. This would have a significant impact on curbing the spread of misinformation and abuse on social media platforms.
Objections have been raised to this solution. One argument is that social media companies also serve as communication platforms, allowing individuals to share information within small groups. To address this concern, messages confined to groups below a certain size could be exempted. A natural threshold is the "Dunbar Number": anthropological research by Robin Dunbar suggests that humans can comfortably maintain stable social relationships with roughly 150 people. Any content circulated within groups smaller than the Dunbar Number could therefore be exempt from regulation.
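As a thought experiment, the proposed exemption reduces to a simple rule. The function name, and the choice to compare audience size against a fixed threshold, are illustrative assumptions, not a description of any real platform's policy:

```python
# Hypothetical sketch of the small-group exemption. The threshold and the
# decision rule are assumptions for illustration only.

DUNBAR_NUMBER = 150  # approximate limit on stable human social relationships

def exempt_from_publisher_liability(audience_size: int,
                                    threshold: int = DUNBAR_NUMBER) -> bool:
    """Content reaching fewer people than the threshold is treated as
    private communication rather than publication."""
    return audience_size < threshold

# A 40-person group chat would be exempt; a post pushed to
# 10,000 followers would be treated as published content.
print(exempt_from_publisher_liability(40))      # True
print(exempt_from_publisher_liability(10_000))  # False
```

In practice the hard part would be defining "audience size" (members of a group versus eventual reach after resharing), which the sketch deliberately leaves open.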
Another objection is that such measures may cause social media companies to err on the side of caution, suppressing the wealth of legitimate and valuable information available on the Internet. There are myriad examples of such information: to pick just two, where would we be without YouTube "how to" videos, and how would people cope without support groups for difficult medical conditions?
There are, however, a number of ways in which social media companies could address this concern. They could use crowd-sourced moderation (as is done by Wikipedia) supported by algorithms (e.g. Large Language Models). They could also require identity validation when accounts are created, such as providing a passport, driver's license, or phone number. This identity information would not be made public, so a degree of user anonymity would be preserved, while still enabling disclosure of identity to the relevant authorities in cases involving libelous or malicious content. Additionally, social media companies could adjust their recommendation algorithms to slow the spread of posts, providing more time to identify and block invalid content.
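The "slow the spread" idea can be sketched as a reach cap that lifts only once moderation has cleared a post. The class, field names, and cap values below are illustrative assumptions, not any platform's actual ranking logic:

```python
# Hypothetical sketch: throttle a post's reach until moderation
# (crowd-sourced or model-assisted) has cleared it. All names and
# thresholds are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    author_verified: bool      # passed identity validation at sign-up
    moderation_cleared: bool   # approved by crowd/LLM review
    requested_reach: int       # audience the ranking algorithm would give it

REVIEW_CAP = 500  # assumed reach cap while review is pending

def allowed_reach(post: Post) -> int:
    """Cleared posts propagate normally; unreviewed posts are throttled,
    and more aggressively when the author is unverified."""
    if post.moderation_cleared:
        return post.requested_reach
    cap = REVIEW_CAP if post.author_verified else REVIEW_CAP // 10
    return min(post.requested_reach, cap)

print(allowed_reach(Post(True, True, 10_000)))   # 10000: cleared
print(allowed_reach(Post(True, False, 10_000)))  # 500: pending review
print(allowed_reach(Post(False, False, 10_000))) # 50: pending, unverified
```

The point of the sketch is that throttling does not require censoring content outright: it merely delays amplification until the checks described above have had a chance to run.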
There are concerns that regulating social media companies as publishers would infringe on freedom of speech, particularly under the First Amendment of the United States Constitution. However, this argument overlooks the fact that the proposal does not prevent views from being expressed; it holds publishers accountable for the content they choose to disseminate.
In conclusion, to curb social media abuse, it is crucial to reclassify social media companies as publishers rather than mere platforms. This would impose legal responsibilities on these companies, encouraging them to monitor and filter out harmful and misleading content proactively. Exemptions based on group size, together with identity validation processes that preserve user anonymity, can address the main objections. By combining these measures with moderation technologies, social media platforms can foster a safer online environment without infringing on freedom of speech or suppressing valuable content. The assumption that regulating social media companies as publishers would hinder legitimate and valuable uses lacks evidence and ignores the possibility of implementing effective moderation strategies.