AI: Cultural and Psychological Impacts on the Human Experience
Jason Grimm, USN
Cloud and Edge/IoT Strategic Technical Advisor & Servant Leader in Architecture, Sales, Business Development, Partner Management and Marketing
Question
If modern humans live four times longer than our ancestors and are drastically more comfortable, connected and entertained than we have ever been in our 300,000 years on the planet, then why are we experiencing an epidemic of anxiety, depression and other mental health challenges in this 21st century?
Summary
This article focuses on the impact that technology has on the human experience, with a close look at the latest transformative innovation, Artificial Intelligence, which is having the most recent and dramatic impact on that experience.
Evidence
I understand that correlation does not equal causation. The fact that the rising arc of AI adoption, particularly in gaming, social media and mobile device usage, closely matches the rising arc of mental health challenges does not necessarily prove that the two corresponding charts have a direct relationship.
Or does it? My initial assumption was that it was going to be difficult to draw this line from correlation to causation, but it did not take much time to find plenty of public studies from serious and reputable research organizations, e.g., the World Health Organization (WHO) and the National Institute of Mental Health (NIMH), that are already doing exactly this. The WHO even goes as far as to define conditions, such as “Gaming Disorder”, and has had them added to the ICD (International Classification of Diseases).
When I started doing research for this article, I was surprised to find that a wealth of well-documented data on the correlation between mental health challenges and the effects of mobile device, social media and gaming use is already available. I’m not breaking new ground here; I’m only curating the content, drawing some conclusions and making the data a little more digestible.
Much of the data I found would seem to indicate that 18–25-year-old females are the most affected, at-risk portion of the population.
I don't know what to make of that, or what to do about it, except to note that it exists and that it is not a minor deviation.
Regardless of the research and data, mine or the scientists’, it is a well-known and widely accepted fact that a) human isolation and depression/anxiety do have a causal relationship, and b) increased mobile device, social media and gaming usage (effectively always bolstered by AI) provides for, encourages or in some cases requires human isolation.
It is no surprise then that the data also indicates that lonely and isolated people are more prone to excessive use of (are addicted to) mobile devices, social media and gaming services. Ironically, and sadly, lonely and isolated humans tend to leverage these technologies to feel more connected, engaged and validated, but the more they use this technology, the less connected to other humans (in the analog, visceral sense of the word) they feel. It's a predictable and vicious cycle.
The massive social media ($290B) and gaming ($221B) industries highly leverage (depend on) the use of AI technology. Their use can become an addiction in the technical and neurochemical sense of the word. “Addiction” sounds like a dramatic description of the condition, but dopamine – a catecholamine neurotransmitter that stimulates the “reward center” in your brain – is what is released when you use your device, scroll social media and play games.
Dopamine is released into your system when you accomplish a task, and from an ancient, biological perspective it encouraged and rewarded us with a sense of accomplishment when we ate, built a shelter, hunted for food, escaped from or conquered danger, etc. These rewards (good feelings of accomplishment) continuously encouraged us to repeat the behaviors that were beneficial to the primary goal of keeping us alive. That’s a good system (paraphrasing one of my mentors, Simon Sinek).
Credit: Why Leaders Eat Last
In the paleolithic era, however, you only occasionally hunted, fought, fled from or conquered danger, built shelters, etc. After the event you were rewarded with dopamine, to encourage you to do the activity again when it was necessary to stay alive, but in the meantime, you went into a period of much needed rest and recuperation. In modern times, via mobile device usage, social media and gaming, we can have 100s of hits of dopamine per hour, all day and all night if we like (which we often do), all by ourselves, with no human interaction, no literal story or risk and all from the comfort of our own homes.
By the way, guess what else releases dopamine into your system – cocaine, heroin, sex, gambling, shopping and, yes, getting public recognition (likes) and completing tasks (games). The dopamine reward loop is highly, highly addictive and should be treated as such.
Conclusion
There is a straightforward line of reasoning to follow here:
Industries like gaming and social media heavily leverage technologies, like AI, to capitalize on this basic human reward system. It's why the term "gamification" is such a common phrase; it's a known and effective method of keeping us engaged. The real conundrum with all of this, though, is that the primitive mechanisms that rewarded us for completing tasks required actual physical work, collaboration, sacrifice, a journey, existential risk, no small amount of suffering and danger, and often the close, cooperative and physical engagement of the community.
In modern times, we have “gamed” these mechanisms for profit and the fleeting (and addictive) quick hits of dopamine derived from temporary and virtual accomplishments have left us longing for meaning due to the tangible lack of story, struggle, sacrifice and human connection. These feelings of isolation and disconnection from our communities are significant contributors to the rising epidemic of anxiety, depression and mental health challenges.
There are, and have been, other compounding contributors as well, e.g., COVID-19 and geopolitical, economic and other stressors that amplify this problem. This article does not cover these tangents, but they are worth mentioning just the same, as they add fuel to the fire.
Side Note
I couldn’t resist adding this out of place, sad and morbidly ridiculous anecdote that I found in my research for this article. It speaks to how ill-prepared we humans are for understanding and addressing the challenges of how these technologies are impacting our mental health as a society.
In 2023, the National Eating Disorders Association (NEDA) discontinued its AI chatbot helpline program because it was giving bad advice to people seeking help with their mental health condition. The chatbot was encouraging those with eating disorders who sought assistance through the website to eat less and focus more on their weight, which evidence shows only adds to the challenges associated with eating disorders. If there were ever a time when human connection is critical, it’s when we are seeking help for mental health challenges that we are struggling with. Adding an AI chatbot to the crisis hotline, not surprisingly, had the opposite of the intended effect.
“Let’s put the fire out by adding gasoline,” said nobody ever.
The Antidote
What I have come to understand through my own life, and more specifically through the role that technology has played in that journey, is that a meaningful life is not defined by the absence of suffering or the abundance of pleasure or leisure. A meaningful life is derived from and experienced through the healthy and growth-oriented conquering and integration of said suffering.
Note that I do not say a “happy” life. Happiness is treasured, but fleeting, while meaning and connection are what sustain you. Meaning gives you a sense of purpose, grounding and a connection to yourself, to your community, to your people and to your story, and, as far as I can tell and in my personal experience, it is the healthiest, most predictable and consistent antidote to anxiety and depression.
Being at the top of your World of Warcraft league or getting likes on your Facebook or Instagram posts will give you brief hits of dopamine, but in the grand scheme of your life’s journey they offer zero sustainable meaning.
Carl Jung encourages us to integrate the shadow, Joseph Campbell advocates for us to go on the “Hero’s Journey”, the ancient Greek philosophers implore us to “know thyself”, and Eastern philosophies and religions encourage (require) us to recognize, accept and value suffering. To belabor the point, and for a more modern reference, consider that nearly all of our modern, popular media (books, movies, plays, shows, etc.) are just updated retellings of the hero’s journey, with plenty of focus on steps four through seven of those journeys, i.e., the trials, tribulations and suffering.
Credit: https://www.thesap.org.uk/articles-on-jungian-psychology-2/about-analysis-and-therapy/the-shadow/
I am not advocating for suffering. Like most people, I am not a fan of suffering, and I take every opportunity I can to reduce my own suffering as well as the suffering of not just the loved ones around me, but anyone I encounter.
I’m merely pointing out that it appears that a meaningful life and suffering seem to be inextricably linked.
What does this have to do with AI specifically and technological advancements in general? Here lies the rub, we humans have a call to adventure ingrained within us. We have a deep yearning to be tested, to be triumphant in that quest and to come through the other side of that suffering with a renewed resolve, a new set of skills and an emboldened confidence earned by these trials (stories). These journeys are critical to our sense of self, our interpretation of the world and our personal experience of what we often consider a meaningful life.
Yes, we share good news and good stories with each other too, but it’s the hardships that define and inspire us.
If you want a tangible example of the antithesis of this, consider why we scoff at the social media stories of trust fund kids publicly bemoaning how difficult their lives are, or at people being triggered by negative comments on their social media posts. I’m not criticizing these people; they are faced with the same meaning-seeking dilemma as the rest of us. I am merely pointing out that we, as the complex and social beings that we are, have a deep desire and respect for what we perceive as authentic struggle and triumph.
In an era of rapid and life changing technological innovation, where we are striving to make life as easy, convenient and pleasurable as possible, we should be very cognizant that these luxuries often come at a cost to not just our own sense of self and purpose, but to our mental health and development as well.
My personal definition of a meaningful life is that everything you do matters. Does it matter how many likes this article gets? No, it doesn’t. Does it matter that I’ve had my own hero’s journey and suffering that I’ve found meaning and purpose in? Sure, yes, to me personally it does. Does it matter that my story and this information may help even a single person understand that AI, and technology in general, are tools that can help us be more comfortable and entertained, but not places to expect to find meaning or happiness, or to stave off anxiety, depression and other mental health challenges? Yes, it does.
Science Fiction is now Science Reality: A Brief, But Necessary Bit of History
In 1968, Stanley Kubrick co-wrote (with Arthur C. Clarke) and directed “2001: A Space Odyssey” and, also in 1968, Philip K. Dick wrote “Do Androids Dream of Electric Sheep?”, which went on to become Ridley Scott’s movie, “Blade Runner”. Both creative works examine the role and potential conundrums of AI. It took ~60 years, but science fiction is now science reality; we have artificial intelligence, and we have flying cars. Kubrick and Dick were visionaries bordering on clairvoyant when they cautioned society with these tales of the dangers of AI, in a time when the world ran on mainframes and there was no PC, no Internet, no mobile phone and no AI.
What humans could only dream of 60 years ago is now a manifest reality.
60 years is a tiny blip on the human evolutionary scale (0.02% of the time that humans have been on the planet, to be exact).
The recent, exponential advances in AI technology, and its impact on human life and society at large, is on par with the discovery of fire and the shifting from singular/nomadic living to tribal life and farming.
Fire, tribal life and farming, however, also led to population density which laid the path to war, famine and disease. The risks and benefits associated with dramatic advances in technology, culture and society are historically a double-edged sword and AI is not unique in this sense.
Moore’s Law
Moore’s Law, published in 1965 by Gordon Moore, who went on to co-found Intel, put forth the empirical observation that technology will see a doubling of capacity/capabilities, or a halving of cost, every two years. This observation has held true over the last 60 years and has only recently shown signs of slowing down due to the physical limits of integrated circuits. However, this limitation is quickly being overcome through the introduction of GPUs, FinFET / 3D / nanosheet transistors and other technology advancements in the last 10 years, which promise to meet or likely exceed Moore’s Law. In short, there is no reason to assume that Moore’s Law will cease, and there is extensive innovation and evidence to predict that it will actually accelerate.
In short, if you’re intrigued, impressed or concerned about the capabilities of AI today, don’t worry, it’s going to be twice as powerful and/or half the cost (driving additional adoption) in two years, and every subsequent two years thereafter.
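To make the compounding concrete, here is a minimal Python sketch, my own illustration rather than anything from Moore's paper, of what a clean two-year doubling cadence implies over time (the baseline and doubling period are assumptions you can adjust):

```python
# Minimal sketch of Moore's Law-style compounding: capability doubles roughly
# every two years. Illustrative only; real hardware progress is lumpier than this.
def projected_capability(years: float, baseline: float = 1.0, doubling_period: float = 2.0) -> float:
    """Return the projected capability multiple after `years`, assuming a clean doubling cadence."""
    return baseline * 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (2, 10, 20, 60):
        print(f"After {years:2d} years: ~{projected_capability(years):,.0f}x the baseline capability")
```

Run it and the 60-year row shows why the author calls the last six decades transformative: a steady two-year doubling compounds to roughly a billion-fold increase.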
For the uninitiated: What, How and Why is AI?
What Is AI?
There are a couple of definitions of AI that I think do a passable job of summarizing what AI is:
NASA’s Definition
“Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets … that solves tasks requiring “human-like” perception, cognition, planning, learning, communication, or physical action”
Wikipedia’s Definition
“… intelligence exhibited by machines … that develops and studies methods … that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.”
How Does AI Work?
Ok, so that’s “what” AI is. How AI works requires a couple of white papers to delve into sufficiently, but I’ll simplify it with an example here.
You may have heard of industry terms like “data science” and “predictive analytics”. These practices have been around for 50+ years. Consider a classic and well understood practical predictive analytics application still in use today – forecasting the weather.
Let’s say that you have 20 years of historical weather data loaded into a database. This data has all the relevant attributes that we collectively refer to as “weather” included, e.g., humidity, wind speed, temperature, precipitation, elevation, location, etc. By evaluating what we know (historical data) and the relationships between contributing factors that affect the observable past (humidity, temperature, etc.) we can make predictions about what the weather will be under similar, or variations of similar, conditions in the present or in the future. This is how we predict, forecast and “model” the weather.
Data is still the raw material that we use to “train” models with AI. The models themselves, however, are the algorithms that learn patterns and make predictions (inference), and it is through this processing and analysis that meaningful insights are extracted. The term “insights” is vague, but think of things like: is that a ball or a child in the road, is that Spanish or English being spoken or written, what is the question and what is the appropriate answer.
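For the curious, here is a minimal, hypothetical Python sketch of that "train on historical data, then infer" loop. It uses scikit-learn with synthetic stand-in data and made-up feature names; it is not a real weather model, just an illustration of the mechanics described above:

```python
# Sketch of "train, then infer" using a toy weather-style dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for years of historical observations: humidity (%), wind speed (km/h),
# pressure (hPa), elevation (m). The target is tomorrow's temperature (deg C).
n_samples = 5000
X = np.column_stack([
    rng.uniform(10, 100, n_samples),    # humidity
    rng.uniform(0, 80, n_samples),      # wind speed
    rng.uniform(980, 1040, n_samples),  # pressure
    rng.uniform(0, 2000, n_samples),    # elevation
])
# Synthetic relationship standing in for the real physics hidden in historical data.
y = (30 - 0.1 * X[:, 0] - 0.05 * X[:, 1] + 0.02 * (X[:, 2] - 1000)
     - 0.005 * X[:, 3] + rng.normal(0, 1, n_samples))

# "Training" = letting the model learn the patterns relating conditions to outcomes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# "Inference" = asking the trained model to predict under new, similar conditions.
tomorrow = np.array([[65.0, 12.0, 1015.0, 150.0]])  # humidity, wind, pressure, elevation
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
print(f"Predicted temperature for tomorrow's conditions: {model.predict(tomorrow)[0]:.1f} C")
```

The same pattern, learn from the observable past, then predict under similar conditions, is what scales up (with vastly more data, compute and far more sophisticated models) into the AI systems discussed in this article.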
Side note – guess what you’re doing when you try to access or login to a site or service and you get those “identify all of the buses in these photos” captcha pop ups? That’s right, you got it in one – you’re a human training AI models for free.
CAPTCHA, short for "Completely Automated Public Turing test to tell Computers and Humans Apart", has been around since the early 2000s. "Turing" may ring a bell as well. Alan Turing laid the theoretical groundwork for the modern computer and, during World War II, led the effort to crack Germany's Enigma cipher; his "Turing Test" for judging whether a machine can pass as human inspired the test used to catch rogue AI masquerading as humans in the first Blade Runner movie. CAPTCHA was originally developed to thwart web bots, denial-of-service attacks and spamming, but is now leveraged to train AI models at scale, to the tune of some 200 million CAPTCHA puzzles solved per day.
Why Does Anybody Care?
It's underwhelming to say that AI is “just a really good prediction engine”, but in essence that’s true. Here is what has changed, though, and this is why I mentioned “Moore’s Law”. The dramatic and exponential increases in compute, networking, storage, etc. capabilities, along with the increasing sophistication of the software (algorithms) used to query and analyze larger and more complex datasets, have yielded a level of human mimicry (prediction) so thorough and advanced as to become, in some cases, nearly (and arguably actually) indistinguishable from human interaction and experience.
This bears repeating – “AI is (going to be) nearly indistinguishable from the human experience”. From everything I’ve seen in my technology career, the exponential rate at which this science (and the infrastructure required to support it) is developing, along with the astounding and unprecedented commercial investments being made (powered and compelled by the returns on those investments, e.g., increased profit, innovation of new products, reduction of human labor costs, etc.), suggests we are in a five-year window at the end of which we may be able to drop the word “nearly” when we compare AI to actual human emulation, and yes, maybe even “connection”.
What Does This Mean for The Human Experience?
I am the youngest of four children with an eight-year gap to my siblings, so I had an experience not unlike that of an only child. When I was 12 years old, I moved from the city to the country into a large, exclusive and secluded community which had, literally, one other child in it that lived miles away, whom I rarely interacted with. I found technology at a young age, and it was my constant companion, so I understand some things about relying on technology as a source of engagement, connection, development and distraction.
As an adult, I have had a large family for a long time, and my children are close in age. I have watched them grow up over the years and it has been (almost always) a delight to watch the constant interaction and negotiation that is necessary between siblings. The play time, contention, sharing, resolution, mentoring, etc. that is required to create a reasonably stable and nurturing environment has resulted in (mostly) whole and well-adjusted humans. Watching my 13-year-old son help his nine-year-old sister practice riding her bike, and comfort her when she is frustrated or falls, is a joy to me and I imagine to them as well.
I have friends who are parents to single children. I’m not advocating for large families or criticizing small families. Large families come with their own challenges and society is trending towards small families for practical, cultural, social, financial, and sometimes biological reasons. There are certainly benefits to single child homes and families in terms of attention, resources and opportunities. Statistically, though, there are challenges too.
Only children are 42% more likely to have anxiety and 38% more likely to have depression compared to children with siblings.
I only mention it here as an example: single children in a home are not unlike single adults, of whom there are an increasing number, so the comparison is relevant.
I spend a lot of time with these families, and I often hear the child say “Alexa, tell me a joke” or “Alexa, tell me a story”. I think it was these observations that initially interested me in this topic.
You see, AI is subservient to our requests (so far). If you ask AI to tell you a story or a joke, it will. If you ask AI to drive you to a restaurant or to the movies, it will. There’s no contention, no debate, no negotiation on what to watch, what to play, where or what or when to eat and little give and take.
Also, you often have these experiences by yourself. In these cases, the user (the human) has their needs met – a song or a movie is played, a joke is heard, the restaurant or movie is driven to – the task is complete, but where is the human experience? Where is the memory? Where is the human bond, experience and relationship that is challenged, created or strengthened?
It’s in vulnerable and authentic contention that our confidence and communication skills are demanded and built. It is in that space and those experiences that we are tempered by our peers, that our skill of negotiation is honed and often hard-won, and that our empathy, authenticity and human connection are founded.
If you want two more quick examples, consider how many of us modern humans eat and date. An ever-increasing share of our groceries and prepared food is delivered to us (via mobile apps), and an even faster-growing share of our search for romantic partners happens through mobile apps. Going to grocery stores and restaurants are seemingly trivial events; however, they do provide some amount of socialization and human connection. This experience and connection is lost when an anonymous human delivers your groceries and meals to your door, often with no interaction. With the dating apps, we can swipe through hundreds of introductions and invitations in an hour with effectively zero risk or consequence of rejection, as opposed to the “olden times” when we had to actually own that moment of in-person vulnerability (fear / courage) associated with asking someone out on a date.
I'm not a luddite, and I'm not shunning the conveniences that technology provides. I'm merely pointing out the law of unintended consequences and that sometimes convenience comes at a cost. I'm encouraging us to be aware of these tradeoffs and to be intentional about developing methods and habits to navigate this new world that we now live in.
Back to the hero’s journey: meaning in life is often derived from where and when we are most uncomfortable, vulnerable, afraid and challenged. I really do love, and I really am very excited by, technology, AI included, and its current and future potential to enhance our lives and the human experience, but we need to be very careful about the temptation to substitute technology-based simulacra for actual human learning, connection and experience.
My personal and observed experience is that, so far at least, there is no substitute for the difficult journey or for having to own that moment of vulnerability in a negotiation. The lessons, personal development, connection, confidence and relationships that are formed through these experiences are critical and will become the bedrock of your sense of self, your sense of belonging and your measure of meaning.
Leverage technology to make your life more comfortable or convenient, sure, but don’t forget to go out into the world and contend with it because it is that contention, connection and experience that matters; it is those experiences (even / particularly the painful ones) that add meaning and value to your life and your story.
Comments
Computer Science & Artificial Intelligence
6 days ago: Your article on the cultural impacts of AI sounds intriguing. Can you share some specific cultural shifts that AI is influencing?
S2Arts Lab, Licensed Psilocybin Facilitator, Saba Cooperative DAO, and Myco-Method
1 week ago: In developing an AI-powered app ecosystem, I’ve realized that many cultural and psychological risks—bias, manipulation, and over-reliance—can be addressed by designing AI to respond uniquely based on things like a user’s cultural identity, belief system, and even genetic expression and biofeedback. Personalization helps mitigate bias and anthropomorphism, while AI should also encourage real-world connections rather than replacing them. Transparency in AI’s decision-making, user-adjustable response parameters, and AI literacy programs can further safeguard against misuse. Limiting AI’s emotional simulation may also help manage expectations and prevent dependency. Open to thoughts on additional safeguards!
Strategic Program Director | Change Leader Driving Technical Excellence, Diversity, and AI Innovation | ERG Leader
3 weeks ago: What a great read, Jason Grimm. I appreciate your summary of the research. I have read so much about the correlation between technology use and anxiety/depression. When my three kids were young I made a conscious effort to limit their screen time. And they knew not to even ask on a school night to play video games. However, there is a six-year age gap between my youngest and oldest. As my older children got more access it was harder for me to limit my youngest (I also got tired, not gonna lie!). So he has absolutely had more access and it shows. We discuss it often together. I will ask him how he feels when he is on a screen for hours! When I ask him, it gets him to feel what’s happening and usually he will decide to take a break. However, without a loving nudge I don’t think he would be as aware. I am excited and nervous about where we are headed!
Microsoft Solutions Architect: Modern Workplace SME, Data Governance, Microsoft 365 Copilot
3 weeks ago: Lots to think about here, Jason - thanks. I just started "How To Read Now" by Elaine Castillo, which serves up essays on how to "read the world" in the era of AI. Here is a podcast quote where she was addressing "brain rot": "The thing that scares me the most around AI and how we use it is that we ourselves are prepared to abdicate our own humanity and our capacity to read, to contemplate, to wrestle with difficult concepts, to imagine our lives and the lives of others, to read the terms and conditions of our own lives, essentially, because it is work to think critically, to practice media literacy, and to do the kinds of things that ultimately build a soul." I think that's what you mean by "growth-oriented conquering and integration of suffering".
Global Security Investigations Leader at Dell | Cyber Fraud Prevention Expert
3 weeks ago: Well done and informative article!