Throwing sand in the gears of misinformation amplifiers: the India experience
Tina D Purnat
Social, Commercial and Information Determinants of Health | Digital Public Health | Healthy Information Environment
You may have heard about recent announcements from Google and Meta about new misinformation and fact-checking coalitions and projects. It’s not uncommon in the private sector, particularly in the technology industry, to announce voluntary standards in an effort to stave off future government attempts to regulate their business in ways that may bite into profits.
We have seen this in industries as varied as self-driving cars, health-related applications that use AI, gig-economy driving and delivery apps, and home rentals. The technology industry has done the same for misinformation for many years, and the push is now accelerating in countries that lack the stringent policy and regulatory environments of the places where these companies are incorporated, such as Europe, Canada, Australia and the US.
Misinformation can cause harm, but the majority of the planet’s population lives in countries where people are not protected online by strict regulation. This leads to a Wild West in which individuals are subject to the whims of the technology platforms they use. They are also disproportionately affected by the policy changes that platforms make [see my recent blog on this].
LMICs are disproportionately affected by digitally propagated misinformation, which often originates in high-income countries. In health alone, since the mid-2010s we have seen massive failures of immunization campaigns in the Philippines and Pakistan, among others, specifically linked to misinformation circulating on social media. And since the early 2010s, manipulation of elections and political discourse has been reported in countries across Latin America and Africa, to name just a few.
How regulators and platforms have responded
Governments have struggled to respond to these infodemics, sometimes reaching for extremely blunt instruments, such as ordering takedowns of misinformation or persecuting people who spread what was labeled as misinformation (inadvertently or otherwise). What’s clear is that no government or single stakeholder can address misinformation effectively on its own, especially in complex domains that operate with scientific knowledge, such as health and the environment. Technology companies have taken note.
Recently, Meta, Google and other tech companies created a Misinformation Combat Alliance in India, with Indian fact-checking organizations as part of the announced partnership. The alliance’s website is short on details, apart from an ambition to reach every internet user in India and make them digitally literate, so that they pause before sharing misinformation.
This industry-led misinformation alliance packages research, literacy and fact-checking initiatives that were already on the table and focuses them on a market of significant size and importance to these companies. I think these actions are necessary but not sufficient to put a dent in the challenges of the information ecosystem, and we need to be careful about what kind of responsibility we place on the platforms in the discourse about stemming the harm from circulating misinformation.
Fact checking in focus
This also comes at a time when the Indian government is considering legislation to regulate digital news content deemed to be false (see this article from Jan 2023). The government has proposed to set up a government-led fact-check unit that would set the standard of what counts as true, which the platforms and other stakeholders would then have to uphold. The political opposition is already challenging this as legislation that could be misused for government-led censorship.
Fact-checking claims (as you can see from the code of principles of the International Fact-Checking Network) requires independence and a transparent process, and one would not expect the activity to be driven by the same organization that originates information in the information environment.
In general, there’s often an assumption that fact-checks are sufficient to counter misinformation online, but if fact-checks are provided by governments or organizations that are not trusted in that particular context, it defeats the purpose and value of the fact-check.
Regulation should not focus on content alone and arbitrating what is true; it should also consider how user interactions and information flows are designed in online settings, because that design can accelerate or stymie the spread of misinformation. All such regulation must be crafted carefully to avoid unintended consequences: UNESCO has repeatedly warned in its reports and guidelines that this kind of legislation often chills freedom of expression.
Putting regulation of internet platforms into context
The complexity of dealing with misinformation has played out in a tense discussion between the government and the internet platforms. For example, in 2021 WhatsApp sued the Indian government in a bid to challenge new IT rules that require messaging apps to trace the “first originator” of a message.
India had already experienced, in 2018, major challenges to measles vaccination campaigns and physical violence against government census workers because of misinformation circulating on (now Meta-owned) WhatsApp. Under pressure from the Indian government, WhatsApp changed the features of its app to allow forwarding a message to only 5 contacts at a time, what we know in UX as “adding friction” (see the sketch below, and read this excellent piece in Wired on this topic).
This in turn had huge global implications, slowing the velocity with which messages spread on the platform worldwide. Techniques of information manipulators have since evolved, but this UX decision is an example of a positive change that can come out of a discussion between a government and a platform.
It’s also an example of how regulatory action or actions by an internet platform in one country can have global implications for freedom of expression or information spread across the world.
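To make “adding friction” concrete, here is a minimal sketch of what a client-side forward cap can look like. This is an illustration in Python with hypothetical names and limits, not WhatsApp’s actual code: the app simply refuses to fan a message out to more than a handful of chats in one action, forcing repeated deliberate choices instead of a one-tap broadcast.

```python
# A minimal sketch of a client-side forwarding cap ("adding friction").
# Names and limits here are hypothetical illustrations, not WhatsApp's code.

from dataclasses import dataclass

FORWARD_CAP = 5  # max chats per forward action, echoing the 2018 limit


@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many hops this copy has already made


class ForwardLimitError(Exception):
    """Raised when a forward would exceed the per-action cap."""


def forward(message: Message, target_chats: list[str]) -> Message:
    """Deliver a forwarded copy, refusing fan-outs above FORWARD_CAP."""
    if len(target_chats) > FORWARD_CAP:
        raise ForwardLimitError(
            f"can forward to at most {FORWARD_CAP} chats at a time "
            f"({len(target_chats)} requested)"
        )
    for chat in target_chats:
        print(f"delivered to {chat}")  # delivery itself is stubbed out here
    return Message(message.text, message.forward_count + 1)


if __name__ == "__main__":
    msg = Message("unverified health claim")
    forward(msg, [f"chat{i}" for i in range(5)])       # allowed
    try:
        forward(msg, [f"chat{i}" for i in range(50)])  # blocked
    except ForwardLimitError as err:
        print("blocked:", err)
```

Note what the cap does not do: it makes no judgment about content. It only slows the mechanics of one-tap mass broadcast, which is why it generalizes across languages and topics.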
(Imagine what would have happened if WhatsApp still allowed people to share a piece of (mis)information with hundreds of contacts at the touch of a button. How would that have played out during COVID-19?)
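Some back-of-the-envelope arithmetic shows why the cap matters. This is a deliberately idealized branching model in which every recipient forwards once at the maximum fan-out, and 256 is simply my stand-in for “hundreds of contacts”; real diffusion is far messier, but the orders of magnitude are the point.

```python
# Idealized branching-process arithmetic: potential reach after a few
# forwarding hops if every recipient forwards once at the maximum fan-out.
# Real diffusion is far messier; this only illustrates orders of magnitude.

def potential_reach(fan_out: int, hops: int) -> int:
    """Total recipients across hops 1..hops for a fixed fan-out."""
    return sum(fan_out ** d for d in range(1, hops + 1))

for fan_out in (5, 256):  # capped forwarding vs. a one-tap mass broadcast
    print(f"fan-out {fan_out:>3}: {potential_reach(fan_out, 4):,} "
          "potential recipients after 4 hops")

# fan-out   5: 780 potential recipients after 4 hops
# fan-out 256: 4,311,810,304 potential recipients after 4 hops
```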
We need a better global discussion of the matter
We would find better solutions if we brought together the essential ingredients of fact-checking, digital literacy and platform regulation at a global level, while also considering how ethical design, user experience and transparency play out in this context.