Social Media Learning Crisis
We are currently experiencing a cultural shift in the US.
The main narrative is shifting towards prioritizing the opinion of the majority over the insights of a knowledgeable few, regardless of the topic—from the trivial to the complex.
Overall faith in science and institutions is at stake. Facts are treated as opinions, and opinions as facts, with individuals claiming that one's "truth" is just as valid as anyone else's under the false guise of free speech, a concept that only privileges those with power, who face no real consequences for their words.
The latest announcements from Mark Zuckerberg on new moderation and fact-checking policies at Meta are a striking example of this evolution. While these policies likely aim to avoid the scrutiny and animosity of the new administration, they also reflect the new mainstream discourse.
In my opinion, social media are currently at risk of profoundly damaging the fabric of learning for our youth. To understand why, we need to explore how these platforms are designed to amplify misinformation.
Amplification of the Dunning-Kruger effect on social media
Social media significantly amplifies the Dunning-Kruger effect, a cognitive bias where individuals overestimate their knowledge or abilities in a particular area, even though they are not especially knowledgeable or skilled in that area.
Algorithms on platforms like X and Facebook provide, for each topic, an overwhelming amount of tailored information. Suddenly, hundreds of seemingly confident people share similar ideas presented in bite-sized posts and catchy phrases. Users are encouraged to join in, spreading the message they read with force while remaining ignorant of the potential complexity and nuances beneath the surface.
When a post contradicts our newly formed opinions, confirmation bias strengthens our stance and shuts down opportunities for reflection or critical thinking. This is often reinforced by dismissive or peremptory comments denying any legitimacy to opposing views.
Before long, our attention shifts to another trending topic, and the following cycle repeats, fueled by algorithms that exploit human emotions and behaviors to keep users engaged:
- Discover a new hot topic – Driven by curiosity and FOMO.
- Get a superficial understanding – Supported by emotional involvement and cognitive biases.
- Confidently share your "knowledge" – Motivated by social recognition, belonging, and attention-seeking.
- Move to the next trending topic – Restart the cycle.
Let’s be clear, I fall into this trap all the time myself! Our brains aren’t wired to handle the relentless flow of contradictory information and the social pressure to have opinions on topics we barely understand. We feel compelled to take a side and seek validation from others.
The Dunning-Kruger effect has its critics, but whether the early overconfidence is genuine or strategic, giving every voice equal weight on social platforms inevitably makes it harder for true experts to stand out in a cacophony dominated by the loudest.
How does it impact learning?
Young individuals are particularly vulnerable to getting trapped in this social media loop. As they build their identities and search for meaning and recognition, they have less control over their impulses and emotions. Algorithms amplify the skewed information they encounter, creating echo chambers reinforced by human and bot-generated content.
A YPulse report finds that 63% of 13-17-year-olds agree that "social media is the best place to learn about a topic because there are so many different perspectives on it." While this sounds empowering, it's deeply concerning. Social media, and their owners, give the illusion of fostering healthy debate, but in reality, these platforms are plagued by manipulative discourse, misleading rhetoric, and fabricated "facts," all amplified by algorithms designed to provoke strong emotional responses.
How often do you stumble upon a nuanced, balanced take on a hot topic in your feed?
This dissonance—between what young people encounter on social media and what they’re taught in schools—deepens distrust in educational institutions and undermines the work of teachers and educators. It’s exacerbated by figures like Elon Musk and Mark Zuckerberg promoting the idea of "trust the people, not the experts."
To receive this newsletter and Wide Walls podcast directly in your mailbox, subscribe on https://widewalls.substack.com/.
Could AI provide solutions?
Though generative AI may generate more doubt and confusion by enabling the creation of realistic fake content, I believe it also has the potential to mitigate some of the detrimental effects of social media.
First of all, generative AI tools are often adept at distinguishing facts or scientific consensus from opinions and unsubstantiated claims, and they can identify manipulative rhetoric.
Renato Russo, whom I interviewed in the first episode of the Wide Walls Podcast, has centered his PhD research at Columbia University on combating misinformation online. His latest study, Twisted Knowledge Construction on X/Twitter: An Analysis of Constructivist Sensemaking on Social Media Leading to Political Radicalization, highlights the need for media literacy that directs learners toward critical evaluation rather than confirmatory evidence on social media.
As part of his research, Renato Russo developed a browser tool that scans online pages for rhetorical fallacies, explaining to readers how specific language might mislead or distort. The goal isn’t to declare information as right or wrong but to empower learners to critically assess sources and arguments themselves.
We also see examples of safe social media platforms for children. Zigazoo, for instance, has become the world's largest social network for kids by offering a safe yet exciting alternative. Similarly, tools like Scratch and DIY.org provide opportunities for children to interact in open, positive, and supportive environments.
At a broader level, stronger legislation and policies are needed to safeguard youth online. Organizations like everyone.AI and KidsAI advocate for children’s rights, guiding companies and institutions toward ethical AI practices. Governments must also invest in digital literacy programs—not just for young learners but for everyone. You can read my article Gen Z Is Not Tech Savvy for more on this topic.
Conclusion
More than ever, we need safe online spaces where young people can apply critical thinking and develop an informed understanding of the world around them, free from manipulation and bigotry.
I deeply admire those in the learning community who fight tirelessly to equip learners with tools to navigate the fog intentionally created to control them. Their efforts give me hope that we can raise a generation capable of tackling society’s challenges collaboratively—through research, reasoning, and respect for each other.