How should social media respond to misinformation?

As a society, we have a complicated relationship with social media.

When we hear about online bullying, misinformation, and hate speech, we rightly condemn the ugly shadow it casts on society.

But when Russia bans Facebook in an attempt to control the narrative around Ukraine, it feels as if a fundamental human right is being taken away.

Our confusion is due to a conflict of values: freedom of expression vs freedom from harm.

And it is one of the most important debates of our time. But it is also a debate that has become overly simplified, polarised and tribal.

What are the two sides of this debate?

On one side are the free-speech absolutists, who want no rules. They have fuelled the rise of new social media platforms, such as Gab and Donald Trump’s “Truth Social”, which place no limits on what you post.

These platforms have quickly descended into white nationalism, anti-Semitism, and QAnon conspiracy theories.

The other side is looking for greater state regulation of social media. This has resulted in new initiatives such as Google Fact Check, Good Information Inc and the Trusted News Summit, which seek to verify what is true.

This side often underestimates the risks of granting a state the power to police the internet. Examples of how this can go wrong are social media crackdowns by political regimes looking to control the free flow of information.

So can the conflict between freedom of expression and freedom from harm be resolved? Let’s find out.

Facebook under fire

Nearly a quarter of the world’s population is on Facebook, making it the most popular social media platform in the world.

But the credibility of its stated mission, to “build community and bring the world closer together”, has recently come under fire.

Whistleblowers have accused Facebook of knowingly choosing profit over society; they argue that, in promoting misinformation, Facebook is diminishing the integrity of the democratic process.

It has also been argued that social media misinformation has weakened society’s ability to respond effectively to the pandemic.

Mark Zuckerberg responded with the following:

“The idea that we allow misinformation to fester on our platform, or that we somehow benefit from this content, is wrong.”


Facebook’s defence is that it has more fact-checking partners than any other tech platform. Its efforts are certainly impressive: it removes over 5 million pieces of hateful content every month.

Facebook will also point you towards its community standards, which list four types of content it may take action on:

  1. Violent and criminal behaviour
  2. Safety concerns (such as bullying, child abuse, exploitation)
  3. Objectionable content, such as porn and hate speech
  4. Inauthentic content (such as misinformation)

Of these, there is little controversy over points 1 to 3. There is broad agreement that Facebook should act to prevent harm from violence, bullying and hate speech.

It is point 4, misinformation, which is controversial.

The rise of misinformation

It sometimes feels like we are primed to simultaneously believe everything and nothing.

And this feeling is not new.

In 1835, the New York Sun newspaper claimed there was an alien civilisation on the moon. It sold so well that it established the New York Sun as a leading publication.

However, targeted misinformation as a form of psychological control has only increased with the rise of digital communication.

Google Ngram viewer, which measures the frequency with which words appear in books, shows a steep increase in our usage of the word “misinformation”.

[Image: Google Ngram chart showing the rising frequency of “misinformation”]

To understand this, we should understand how misinformation spreads.

Facebook’s algorithms are optimised for engagement – this often promotes content that elicits a strong emotive response.

What is important to the algorithm is the engagement, not the veracity of the information being shared.
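To make that concrete, here is a minimal sketch of an engagement-optimised ranker. Everything in it – the Post fields, the weights, the names – is a hypothetical illustration, not Facebook’s actual system. The point is simply that accuracy never enters the objective:

```python
from dataclasses import dataclass

# Hypothetical post model: field names and weights are illustrative only.
@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    is_accurate: bool  # known to fact-checkers, invisible to the ranker

def engagement_score(post: Post) -> float:
    # Higher-effort reactions weighted more heavily (assumed weights).
    return post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note what is absent: is_accurate plays no part in the ranking.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Careful policy analysis", likes=120, comments=10, shares=5, is_accurate=True),
    Post("Outrage-bait false claim", likes=300, comments=250, shares=400, is_accurate=False),
])
print([p.text for p in feed])  # the false but engaging post ranks first
```

Swap in whatever weights you like; as long as the objective is engagement alone, emotive falsehoods will tend to outrank sober corrections.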

Interestingly, misinformation does not travel far when there are few echo chambers. Content travels only as far as the first user who disagrees with it, fact-checks it, and reveals it as misinformation.

However, online groups are increasingly segregated by ideological beliefs. In these bubbles, misinformation can travel far through the network without being challenged.
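A toy simulation – entirely my own construction, not drawn from any cited study – can illustrate why segregation matters. Agreeing users reshare a false post to their neighbours; disagreeing users fact-check it and kill that branch of the cascade. The homophily parameter controls how ideologically segregated the network is:

```python
import random
from collections import deque

def build_network(n: int, homophily: float, seed: int = 0):
    """Toy social graph: each user holds belief 0 or 1; high homophily
    makes users reject links to the other camp, creating echo chambers."""
    rng = random.Random(seed)
    belief = {u: rng.randint(0, 1) for u in range(n)}
    edges = {u: set() for u in range(n)}
    for u in range(n):
        while len(edges[u]) < 2:  # each user initiates a couple of links
            v = rng.randrange(n)
            if v != u and (belief[v] == belief[u] or rng.random() > homophily):
                edges[u].add(v)
                edges[v].add(u)
    return belief, edges

def spread(belief, edges, source):
    """False post aligned with belief 1: believers reshare it,
    disagreeing users fact-check it and stop that branch."""
    seen, queue, reach = {source}, deque([source]), 0
    while queue:
        u = queue.popleft()
        if belief[u] != 1:
            continue  # fact-checked here; no reshare
        reach += 1
        for v in edges[u] - seen:
            seen.add(v)
            queue.append(v)
    return reach

for h in (0.0, 0.5, 0.95):
    belief, edges = build_network(2000, homophily=h)
    src = next(u for u in belief if belief[u] == 1)
    print(f"homophily={h}: false post reached {spread(belief, edges, src)} users")
```

The exact numbers are meaningless; the qualitative pattern is the point. The more segregated the network, the less often the post meets a dissenting fact-checker, and the further it travels.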

Below is a visual representation of the echo chambers from the UK’s EU referendum debate:

[Image: network visualisation of the echo chambers in the UK’s EU referendum debate]

Confirmation bias is our tendency to seek out and believe information that aligns with our existing views. Psychologists have long known that once someone’s mind is made up, it is difficult to change.

Misinformation, therefore, aims to exploit algorithms, echo chambers and confirmation bias to amplify and strengthen existing views.

This can lead to extremism.

This is particularly problematic when it spills out into the real world.

Examples of the real-world impact of misinformation:

  1. The 2021 US Capitol attack, with roots in the QAnon conspiracy theory
  2. Disrupted elections in countries including Ghana, Nigeria, Togo, Tunisia and the Netherlands
  3. Low uptake of Covid-19 vaccines, linked to a variety of misinformation sources
  4. The deaths at Charlottesville, fuelled by right-wing extremist forums
  5. The ‘Pizzagate’ shooting, driven by a precursor of the QAnon conspiracy theory

Furthermore, misinformation has been found to undermine our trust in institutions and spread false science, which impacts public health and disrupts the democratic process.

So, what action should be taken?

To finish reading the full article, head over to our blog below:

https://www.hallaminternet.com/how-should-social-media-respond-to-misinformation/

