Polarization: Finding Solutions
In the digital era, we prefer to avoid uncomfortable real-life conversations. Whether it is a conversation to improve our family relationships or one that bridges the divides of politics and religion, we tend to shun face-to-face encounters, especially when we disagree strongly.
Social Media, in particular, is eroding our capacity for in-person communication, for achieving mutual understanding, and for building trust.
For example, it increases polarization. "The business models are specifically designed to keep us addicted and aggressive, as well as polarizing and misinforming us," says Tristan Harris, former Design Ethicist at Google.
Like Harris, many platform developers who have since dissociated themselves from Big Tech are warning about the dangers of leaving online platforms unregulated.
Platforms like Facebook, Instagram, Twitter, and TikTok are part of the Attention Economy: they earn money based on how much time users spend on them.
Algorithms decide what appears in our social media feeds, carefully placing each 'post' we come across based on our digital footprint. They predict what will capture our attention and keep us "hooked" online, with the objective of generating revenue for Big Tech platforms.
Each 'post' we find online is carefully placed to keep us "hooked" in one of three ways:
A) Showing us viral content, usually short videos, to keep us entertained and distracted.
B) Appealing positively to our convictions through Confirmation Bias, keeping us within our own isolated and curated versions of reality (Filter Bubbles).
C) Appealing negatively to us by targeting our fears, anger, and resentment to trigger impulsive reactions and keep us engaged online.
By tracing our data, Algorithms learn each user's views on a given topic and deliberately show us 'posts' that keep us inside our Filter Bubbles. As a consequence, we are less likely to empathise with other users, much less to hear 'their side' of the argument, because we are reading, listening to, and watching completely different, personalized information sources. The sketch below illustrates this engagement-maximizing logic.
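To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of an engagement-maximizing feed ranker. Every name, weight, and data point in it is hypothetical; real platform ranking systems are proprietary and vastly more complex. The point is the incentive structure: posts are ordered by predicted engagement, not by accuracy or balance.

```python
# Illustrative sketch only. All names, weights, and data are
# hypothetical; this is not any platform's actual algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topic: str
    stance: float   # -1.0 (against) to 1.0 (for) on the post's topic
    outrage: float  # 0.0 to 1.0: how inflammatory the content is
    virality: float # 0.0 to 1.0: how widely shared it already is

def engagement_score(post: Post, user_stances: dict) -> float:
    """Predict how long a user will stay 'hooked' on a post.

    Mirrors mechanisms A-C above: rewards viral content, agreement
    with the user's own views (Confirmation Bias), and outrage.
    """
    user_stance = user_stances.get(post.topic, 0.0)
    agreement = 1.0 - abs(post.stance - user_stance) / 2.0
    # Hypothetical weights: a real system would learn these from
    # behavioral data rather than fixing them by hand.
    return 0.3 * post.virality + 0.4 * agreement + 0.3 * post.outrage

def rank_feed(posts, user_stances):
    """Order the feed by predicted engagement, not accuracy or balance."""
    return sorted(posts, key=lambda p: engagement_score(p, user_stances),
                  reverse=True)

if __name__ == "__main__":
    # A user whose digital footprint shows a strong pro-"policy X" stance.
    user = {"policy_x": 0.9}
    feed = [
        Post("Balanced explainer on policy X", "policy_x", 0.0, 0.1, 0.2),
        Post("Policy X is great, share if you agree!", "policy_x", 0.9, 0.3, 0.6),
        Post("Opponents of policy X are ruining everything", "policy_x", 0.9, 0.9, 0.5),
    ]
    for p in rank_feed(feed, user):
        print(f"{engagement_score(p, user):.2f}  {p.text}")
```

Running this toy ranker places the outrage-laden, stance-confirming posts above the balanced explainer, which is precisely the Filter Bubble dynamic described above.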
Could the solution lie in being exposed to opposing views online? Not at all.
Usually, when we see a post we strongly disagree with, we 'impulse-reply' to it. We make the mistake of reacting while overtaken by our most visceral emotions, and, consequently, what started as a 'comment' becomes a direct aggression.
Although this has become the norm, social media is not the right place to discharge our emotions. The best place to channel personal frustrations is in the presence, and with the aid, of a professional psychologist.
Big Tech designers target our psychological vulnerabilities with impunity.
Like unethical personalized propaganda, online hate speech often goes unpunished, not only because polarizing engagement generates more revenue, but also because there is no legislation to regulate online discourse and punish abuse, whether gender-based, ethnic, political, or religious.
What can be done about this?
Everyone is vulnerable to opinion manipulation and disinformation. This is why governments, private actors, and civil society must partner to:
1) Create and implement, by law, Digital Media Literacy courses and workshops at schools, universities, and workplaces. These should combine mandatory theoretical training with practical laboratories on how to identify and evaluate sources.
2) Create and implement new legislation to regulate social media's business model, following, for instance, the European Commission's Code of Practice on Disinformation.
3) Replicate existing Deliberative Democracy methods to address disinformation, manipulation, alienation, and polarization locally and nationally.
4) Create and implement, by law, 'Civic Spaces' for citizens to congregate, get educated, deliberate, find common ground, and rebuild trust and social capital.
5) Test and evaluate initiatives like Citizenship Schools. If their workshops prove effective, they can be combined with deliberative methods and replicated locally.
Remember: Big Tech does not care about the psychological, economic, or social damage it causes to citizens, democratic institutions, and other private enterprises, as long as it keeps earning its share and securing its monopoly. Therefore, decisive measures must be taken against it by all affected stakeholders.
Bibliography
1)"Digital Political Manipulation, Representative Democracy in Crisis, and Democratic Theory, " Juan Diego Correa. Honours Thesis (2023) (Unpublished).
Videos
1) Tristan Harris. "How a Handful of Companies Control Billions of Minds Every Day." TED Talk, 2017.
// Warning: the following video presents an unethical and criminal practice. //
It features Alexander Nix, former CEO of Cambridge Analytica, a data firm charged and condemned for interfering in the 2016 U.S. elections, Brexit, and over 68 elections worldwide by psychologically manipulating voters through Facebook.
Articles
8) Sandra C. Matz, Michal Kosinski, G. Nave, and David Stillwell. "Psychological Targeting as an Effective Approach to Digital Mass Persuasion." PNAS 114, no. 48 (2017): 12714-12719.
12) "Code of Practice on Disinformation." Shaping Europe's Digital Future, European Commission, 2022.
16) The Insolvency Service. "7-Year Disqualification for Cambridge Analytica Boss." GOV.UK, September 24, 2020.
19) Carole Cadwalladr. "Fresh Cambridge Analytica Leak 'Shows Global Manipulation Is Out of Control'." The Guardian, January 4, 2020.