Can AI cause Tech Trauma? – Part One
Meenakshi (Meena) Das
CEO at NamasteData.org | Advancing Human-Centric Data & AI Equity
Welcome to data uncollected, a newsletter designed to help nonprofits listen, think, reflect, and talk about the data we missed and are yet to collect. In this newsletter, we will talk about everything raw data is capable of – from simple strategies for building equity into research+analytics processes to how we can build a better community through purpose-driven analysis.
I have found myself more angry, more untrusting of the world lately.
Sure, the ongoing wars, elections, hurricanes, inflation, and the general injustices against humanity have an effect, but this seems deeper.
The AI-generated deepfakes that incite riots, the widespread false news, the dangerously polarized journalism, tech's continued disregard for public safety in digital life, inauthenticity in favor of capitalism, and new Gen-AI models seemingly taking over the world every day… yes, I think all of that is what's making me feel this nonstop tiredness, mistrust, and anger.
So, today, I want to talk to you about something I haven't yet found the language for.
I want to talk to you about something called tech trauma.
Over the years, technology has transformed nearly every aspect of our lives, from how we communicate and work to how we socialize, shop, learn, and earn. Artificial Intelligence (AI) has amplified this shift, offering unprecedented efficiency, personalization, and innovation. Note that flavors of AI existed well before Gen-AI tools like ChatGPT. But as technology grows more sophisticated, I wonder if you, and many of us, are experiencing a hidden, growing crisis called tech trauma.
This subtle form of distress arises from mistrust, inauthenticity, and a deepening unease about how AI and other technologies affect our lives.
Let us explore what tech trauma is.
I describe this "tech trauma" as the emotional and psychological strain caused by the unstructured (and sometimes unethical) use of black-boxed AI, and by the mistrust and uncertainty that often accompany it. Unlike traditional trauma, which stems from discrete, often physical, experiences, tech trauma is more insidious and can accumulate over time. It manifests as anxiety, overwhelm, disconnection, and a loss of control in a world increasingly dominated by algorithms and machine-driven systems.
At its core, tech trauma results from the way technology—say, AI—creates a sense of inauthenticity in our relationships, decision-making, and even our understanding of ourselves. This strain can come in many forms: mistrust of opaque, algorithm-driven decisions; a sense of inauthenticity in AI-mediated interactions; and a feeling of powerlessness over systems that shape our lives.
The problem is, if we leave this unaddressed, tech trauma can contribute to long-term emotional harm. For some, this tech trauma is subtle—manifesting as low-grade anxiety or a reluctance to engage with new technologies. For others, it can lead to a complete withdrawal from tech-driven environments or a mistrust of institutions that adopt these tools.
Let's look a bit more deeply at why this is happening.
One of the most pressing issues fueling tech trauma is the growing mistrust around AI and related technologies. Many AI systems operate behind a veil of complexity that people outside data science cannot easily penetrate. When an algorithm decides what news stories you see, whether you qualify for a service, or which ads target you, the decision-making process is often invisible. People are left wondering: How was this decision made? Why was I excluded? Or why was I included here?
This lack of transparency, compounded by reports of AI bias, breeds distrust. Additionally, even well-meaning AI systems can perpetuate harmful stereotypes and reinforce social inequalities when their algorithms rely on flawed or biased data. For example, facial recognition software has been shown to misidentify people of color at higher rates, raising concerns about racial bias in who is granted or denied services.
Naturally, one can feel powerless in the face of technologies they don't understand, which seem to have an outsized impact on their lives.
Another critical component of tech trauma is the feeling of inauthenticity brought on by our interactions with AI. In recent years, AI has become deeply embedded in everyday life, from social media algorithms curating our news feeds to virtual assistants like Siri and Alexa anticipating our needs. While these tools offer convenience, they often come at the cost of real human interactions.
Take online social platforms, for instance. AI algorithms decide what content we see, who we engage with, and how our online communities are shaped. This hyper-personalization can create an echo chamber where we are constantly fed information that aligns with our beliefs, reinforcing biases and limiting exposure to diverse perspectives. Over time, without intentionality, these filtered experiences risk eroding our sense of authenticity as we lose touch with the broader, likely more complex realities of the world.
Recommendation engines suggest what to watch, read, and buy, often leaving us feeling like passive participants in our own lives. As AI takes on more roles traditionally filled by humans, from customer service to caregiving, we may question whether our interactions are real or artificial.
This is dangerous: this growing sense of inauthenticity can leave people feeling alienated, disconnected, and emotionally adrift in a world increasingly mediated by algorithms. The more our interactions are filtered through AI, the harder it becomes to distinguish between genuine human connection and automated responses, leading to a sense of emotional dissonance.
The question is – can we try to heal from this oncoming tech trauma?
With all the optimism I know, I believe we can. We just have to.
But let us explore that in the next edition.
***********************
So, what do I want from you today (my readers)?