Self-Regulation of Social Media Companies and Hate Speech Laws
Murtaza Shaikh, PhD
Excerpt from forthcoming book (June 2020): "Incitement to Religious Hatred and Islamophobic Hate Speech under International Human Rights Law and UK Law" (Brill Nijhoff, Leiden)
Dr. Murtaza H. Shaikh
The online sphere, and social media platforms in particular, form the most prevalent space for unrestricted speech and, with it, for online hate speech. The sheer volume and severity of hate speech exhibited could be a result of a lack of regulation. Alternatively, it could be a telling reflection of genuine sentiments and grievances within society at any given time. Of the private companies that operate social media platforms, the most prominent are Twitter, Facebook and YouTube (owned by Google). Instagram is owned by Facebook and implements the same guidelines. The Home Affairs Select Committee (HASC) conducted an Inquiry into ‘Hate Crime and its Violent Consequences’ and took evidence from a broad range of actors on several types of hate crime, including Islamophobic ones.[1] Rather than report on the entirety of the evidence, it chose to focus narrowly on the phenomenon of online hate speech vis-à-vis social media companies.[2] The analysis below focuses on the evidence provided at the Inquiry and some recent developments.
One of the main factors in the rise of online hate speech, as opposed to interpersonal hate speech, has been the remote and virtual nature of social media expression. The exponential growth of this form of communication has made it far easier to connect with more people with whom there is little or no interpersonal connection, let alone friendship. Supplemental to these factors is the level to which users can remain anonymous, leading to more reckless behaviour due to a lack of fear of accountability. In this regard, the platform that affords the greatest level of anonymity is Twitter, which requires limited personal information in order to register and participate on the forum. Only an unverifiable username and a mobile phone number are needed to begin posting tweets. This allows for aliases and multiple accounts to be held by one user. The actual identity of the user may not even be known to Twitter, resulting in the prevalence of hateful and violent threats by antagonists without significant fear of accountability or even identification.[3] A report by the think tank Demos monitored and documented trends in Islamophobic tweets between March and July 2016, recording 215,236 such tweets in July 2016 alone, equating to roughly 289 per hour. There were notable peaks corresponding to acts of terrorism abroad; the highest peak was recorded following the Nice attack, and there was a gradual increase every month.[4] Some of the terms used in these tweets were: ‘sand flea’, ‘camel fucker’, ‘clitless’, ‘carpet pilot’, ‘diaper head’, ‘dune coon’, ‘dune nigger’, ‘sand monkey’, ‘slurpee nigger’, ‘Muslim paedos’, ‘muzrats’, ‘pisslam’, and ‘rapefugee’.[5]
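The hourly rate follows directly from the monthly total reported by Demos (a simple check, assuming the figure covers all 744 hours of July 2016):

215,236 tweets ÷ (31 days × 24 hours) = 215,236 ÷ 744 ≈ 289 tweets per hour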
These examples label Muslims as a whole with such descriptors based on their religious identity or on an appearance that identifies them as potentially Muslim. The exceptions are ‘pisslam’ and ‘rapefugee’: one is aimed purely at the religion of Islam, while the other expresses hatred and stereotyping towards refugees, who are often portrayed as exclusively Muslim. Even then, it is hatred and prejudice that does not lend itself to straightforward categorisation within existing discrimination categories. It is certainly a form of xenophobia, but whether it crosses over into racial discrimination is an open question. Perhaps the strongest argument for treating it as such would be indirect discrimination on the basis of national identity. It is noteworthy that, in permitting the proliferation of such insults aimed at Muslims, Twitter is well within the bounds of the law on incitement to religious hatred, and its guidelines accordingly allow such hateful speech. As the existing law only prohibits threatening words or behaviour intended to stir up hatred, any religious hatred that constitutes incitement to discrimination or hostility remains legal. This would not be the case with some terms which meld conventional racism normally aimed at black people with Islamophobic ones, such as ‘dune coon’, ‘dune nigger’, ‘sand monkey’ and ‘slurpee nigger’. These would likely qualify as incitements to racial hatred, as would other terms that conflate Islam with Arab ethnicity and culture. ‘Diaper head’ refers to the turban, which is only worn by a minority of Muslims in some countries. ‘Camel fucker’ and ‘sand nigger’ have little to do with Islam and more with the stereotypical association of Arabs with deserts and camels.
Twitter claims to have guidelines that regulate and restrict ‘hateful conduct’. This is misleading: once the guidelines are examined, they define such conduct as limited purely to the promotion of violence. This is far narrower than what is implied by the label ‘hateful conduct’, which could manifest as religious, racial, misogynistic, homophobic or anti-Semitic intolerance or bigotry. Furthermore, even where there is a threat or promotion of violence against any of the protected characteristics and groups, it must be aimed at an individual, and not generally at the group, for it to violate Twitter’s own guidelines, titled ‘The Twitter Rules’:
“You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”[6]
Such a wide scope for religiously hateful and violent speech has allowed the use of hashtags such as #killallmuslims, #DeportallMuslims and #BanIslam. This was the case at the time the HASC held an oral evidence session on social media guidelines and standards. At that session, the Twitter representative, Nick Pickles, affirmed this policy of permitting hateful sentiments aimed at religious groups as a whole unless violence was threatened and a specific individual targeted:
“There is a distinction between taking an image and targeting somebody who perhaps identifies with the protected category in that image, and posting it on its own. Many of the accounts you raised with us highlighted people who were posting images to other people who perhaps belonged to those protected categories. In this one case, the tweet that was reported was not directly sent to anybody else, and while it was highly offensive it did not breach our rules around hateful conduct.”[7]
Since then, the guidelines have been developed and appear to attempt to increase the protection offered:
“We prohibit content that wishes, hopes, promotes, or expresses a desire for death, serious and lasting bodily harm, or serious disease against an entire protected category and/or individuals who may be members of that category”.[8]
A hashtag such as #killallmuslims should now be disallowed under this aspect of the guidelines. However, while groups as a whole are now included within ‘hateful conduct’, the type of speech that remains proscribed is still that focused on violence and physical harm. This leaves open the possibility of hate speech targeting religious groups on any other basis, as long as it does not reach the level of threatening harm or violence. In this regard, a new section added to the Twitter Rules is the most progressive development and goes as far as stating:
“We prohibit targeting individuals with content intended to incite fear or spread fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities, e.g., “all [religious groups] are terrorists”.[9]
While this may not extend the scope of hateful conduct to cover any incitement to religious hatred aimed at Muslim identity, it does nonetheless specifically address hate speech that promotes fear on the basis of stereotypes and generalisations. The fact that the example employed is one of the most common Islamophobic tropes is illustrative of this point. Whether such new rules will be implemented robustly, and how effective the processes for reporting or detecting and then removing hateful conduct will be, remains to be seen.
YouTube, which is owned by Google, also defines prohibited expressions in its guidelines as "content that promotes violence or hatred". The only elaboration provided is: "There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally acceptable to criticise a nation state, but not acceptable to post malicious, hateful comments about a group of people solely based on their race."[10] This appears to be a response to existing or prospective accusations of allowing the platform to be used for anti-Semitic material. However, no further elaboration is given of YouTube's definition of hate speech, or of whether, unlike Twitter, more than just violent threats aimed at individuals are prohibited. In oral evidence to the HASC, Peter Baron of Google specified, like Twitter, that their main concern was threats of violence aimed at individuals. This has resulted in allowing anti-Semitic videos by the white supremacist David Duke. Baron was challenged by the Chair, Yvette Cooper, as to why a video titled ‘Jews admit organising white genocide’ did not violate YouTube’s guidelines:
“How on earth is the phrase, ‘Jews admit organising white genocide’, as well as being clearly false, not a statement that is a malicious or hateful comment about a group of people solely based on race, religion or the other protected characteristics that your own guidelines and community standards say are unacceptable?”.[11]
Baron explained that Google’s test for hate speech is "whether there is an incitement to violence against a particular identified group",[12] owing to there being "no clear definition of hate speech in British law".[13] This was despite his admission that, in his opinion, the video was "anti-Semitic, deeply offensive and shocking".[14] David Duke continues to have a robust presence on YouTube and a prominent following, with close to 88,000 subscribers.[15]
Unlike Twitter and YouTube, Facebook is the only company that draws a distinction between permitting hatred of ideas and restricting hatred aimed at identity. It states: "People can use Facebook to challenge ideas, institutions, and practices", but not to post "hate speech...based on their: race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, or gender identity, or serious disabilities or diseases."[16] Facebook also concedes that there is a need to update its guidelines, as well as to develop speedy and effective enforcement mechanisms for disallowed content. It further acknowledges the need for its staff to undergo training.[17] Most recently, Facebook has further overhauled its guidelines and now defines hate speech as follows:
“We define hate speech as a direct attack on people based on what we call protected characteristics - race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. We separate attacks into three tiers of severity, as described below.”[18]
As such, Tier 1 relates to violent or dehumanising speech, Tier 2 to statements of inferiority and Tier 3 to calls for exclusion or segregation, with examples given for each tier. This detailed and elaborate framework is the most advanced and best suited to addressing incitements to religious hatred and Islamophobia. The distinction between critiquing religious and other identities is clearly made. The types of hate speech and their content are not limited purely to inciting violence or targeting individuals, but are inclusive of generalised negative tropes against Muslims and other groups with protected characteristics. This is stronger than the protection offered by Twitter, which focuses on the idea of fear-inducing stereotypes such as criminality or terrorism. Even with such progressive and industry-leading self-regulatory guidelines, Facebook has extended beyond just regulating content and banned ‘dangerous individuals’ or ‘hate figures’ who "promote or engage in violence and hate, regardless of ideology."[19] These figureheads and Facebook-based broadcasters of hatred included Alex Jones, a right-wing conspiracy theorist; Milo Yiannopoulos, previously of the far-right website Breitbart; Louis Farrakhan, the leader of the Nation of Islam, for anti-Semitism; Paul Nehlen, a white supremacist; and Laura Loomer, an anti-Islam activist.[20] Of these, all apart from Farrakhan have been prominent sources of Islamophobic content.
A connected and important aspect of social media regulation of hate speech was the capacity and speed with which enforcement mechanisms could operate for posts that violated the companies’ guidelines. When the HASC confronted all three social media giants with the continuing presence of material that violated their own guidelines, and the lengthy delays in removing content that was complained of, the responses illustrated that none of the companies had proactive means of searching for prohibited content. They instead relied solely on users raising complaints about content. Therefore, if content was not flagged, it could remain on the platform indefinitely; and if a complaint was raised, the content would likewise remain on the platform until the reviewing team had time to process it. The legal question this raises is: if these platforms are allowed to carry expressions which fall foul of the law, then who bears legal responsibility for them? Is it the original poster, or the platform that permits the content or fails to remove it in a speedy manner? Would the platform have to institute filters to prevent offending material finding its way onto the platform for any period of time, even if momentary? If not, then for how long would its presence be permissible?
These are difficult and highly complex legal questions in an area which is, for the most part, unpoliced and unregulated. However, all three social media companies were adamant that filtering and pre-checking were not possible due to the sheer volume of posts on their platforms. The best they could do prospectively was to develop advanced algorithms that could scan platform content, rather than relying solely on user complaints. It was for these reasons that the HASC made the far-reaching observation and recommendation that a completely new legal framework was required to address such a contemporary and virtual medium of public debate:
“Most legal provisions in this field predate the era of mass social media use and some predate the internet itself. The Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date.”[21]
In its final report, the HASC limited its findings to purely online hate speech and criticised the social media companies in question for not doing enough, as demonstrated above. However, the HASC avoided grappling with the greater issue of whether social media company guidelines comply with existing laws, especially those relating to incitement to religious hatred. This was starkly evident in Peter Baron’s observation that the law on hate speech lacks clarity. Hence, without the law being strengthened, social media companies may be encouraged to improve self-regulation standards, but they certainly cannot be compelled to do so as a matter of law. In this regard, they cannot be expected to comply with international human rights law standards, as found in ICCPR Art. 20(2), if the State’s law itself falls considerably short of them. Twitter and Google’s YouTube are arguably well within the law, as their rules limiting violent threats can be seen as complying with the law on incitement to religious hatred, which only governs threatening words or behaviour intended to stir up religious hatred. Since the Inquiry, Facebook and Twitter have both chosen to go beyond this to cover stereotyping through negative tropes. This has been necessitated by a sense of accountability to platform users, rather than to the law.
[1] Home Affairs Select Committee, Oral Evidence: Hate Crime and its violent consequences, HC 609, 13 December 2016. Witnesses: Bharath Ganesh & Fiyaz Mughal (Tell MAMA), Miqdaad Versi (Muslim Council of Britain), Dr Chris Allen (University of Birmingham), Dr Imran Awan (Birmingham City University), and Murtaza Shaikh (Co-Director, Averroes). <https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/home-affairs-committee/hate-crime-and-its-violent-consequences/oral/44438.pdf> accessed 23 May 2019.
[2] Home Affairs Select Committee, Hate crime: abuse, hate and extremism online (HC 609, May 2017).
[3] See Hardaker, C. and McGlashan, M., ‘Real men don’t hate women: Twitter rape threats and group identity’ (2016) Journal of Pragmatics; Chaudhry, I., ‘#Hashtagging hate: Using Twitter to track racism online’ (2015) 20(2) First Monday; Jakubowicz, A., ‘Alt_Right White Lite: trolling, hate speech and cyber racism on social media’ (2017) 9(3) Cosmopolitan Civil Societies: An Interdisciplinary Journal; and Gagliardone, I., Gal, D., Alves, T. and Martinez, G., Countering Online Hate Speech (UNESCO Publishing 2015).
[4] Demos Twitter Report.
[5] A fuller list can be found in the annex to the Demos Twitter Report.
[6] ‘The Twitter Rules’ (Twitter) <https://support.twitter.com/articles/18311> ‘Hateful conduct policy’ (Twitter) <https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy> both accessed 23 May 2019.
[7] Home Affairs Select Committee, Oral Evidence: Hate Crime and its violent consequences (HC 609, 14 March 2017), Q438.
[8] ‘Hateful conduct policy’ (Twitter).
[9] Ibid.
[10] ‘Hate speech policy’ (Google YouTube) <https://support.google.com/youtube/answer/2801939?hl=en-GB> accessed 23 May 2019.
[11] HASC Social Media Oral Evidence, Q416.
[12] Ibid, Q416.
[13] Ibid, Q413.
[14] Ibid, Q407.
[15] David Duke YouTube Home Page <https://www.youtube.com/user/drdduke/videos> accessed 23 May 2019.
[16] ‘Community Standards’ (Facebook) <https://www.facebook.com/communitystandards> accessed 23 May 2019.
[17] ‘Controversial, Harmful and Hateful Speech on Facebook’ (Facebook, 28 May 2013) <https://www.facebook.com/notes/facebook-safety/controversial-harmful-and-hateful-speech-on-facebook/574430655911054/> accessed 23 May 2019.
[18] ‘Hate Speech’ (Facebook) <https://www.facebook.com/communitystandards/hate_speech> accessed 23 May 2019.
[19] Facebook Statement, cited in David Lee, ‘Facebook bans “dangerous individuals”’, BBC News (3 May 2019). <https://www.bbc.co.uk/news/technology-48142098> accessed 23 May 2019.
[20] Ibid.
[21] HASC Hate Crime Report, p. 21.