Artificial Intelligence and the Future of Human Relationships
GoHuman Consultoria
We humanize organizations and leadership to generate positive impact.
Read the Portuguese version of this article here.
The overuse of artificial intelligence can impact human relations in the corporate environment. Is it possible to adopt this technology responsibly, keeping empathy at the center of interactions?
For a long time, frightening science fiction stories about artificial intelligence (AI) revolting against humans, taking our place, and dominating the world have occupied our imaginations.
Recently, however, these fictional narratives have gained weight as research indicates that a large share of jobs worldwide (40%) could be impacted by AI. Even so, the chance of life truly imitating art is slim.
Throughout history, fears that new technologies would cause mass unemployment have proved unfounded. Technological advances have always created new types of jobs, even as they eliminated others. Similarly, AI is likely to significantly impact many jobs, and some may even disappear, but it will also create new roles and opportunities.
But the central issue goes beyond jobs alone. We are facing a much deeper question: how will artificial intelligence affect our ways of working, the interactions in organizations, and ultimately, the quality of human relationships?
Aware that these questions are crucial to building human-centered cultures aligned with today’s world, we have studied this topic in depth to help our clients and partners navigate these challenges effectively.
AI, Productivity, and Mental Health
The promise of AI is irresistible: more productivity, record-speed data analysis, and freedom from repetitive tasks. It’s no wonder organizations and professionals are captivated by how these tools allow them to work more efficiently than ever.
According to McKinsey's "The state of AI in 2024" report, 65% of companies worldwide are already using AI regularly. Furthermore, according to Statista, the global AI market is expected to grow by 28.46% annually until 2030, resulting in a market volume of US$826.70 billion by the end of the decade. In other words, AI is now a central force in the corporate world.
However, in their drive for efficiency, organizations must be careful not to overlook their greatest asset, whose roles are becoming fragmented by increasingly automated processes. We're talking about humans, of course.
Studies published by the Harvard Business Review show that an excessive focus on technology can have undesirable consequences, such as reducing job satisfaction, motivation and well-being.
A survey by management researchers David De Cremer and Joel Koopman, for example, analyzed the effects of using AI in the workplace, focusing on its impact on employee well-being. The analysis concluded that while AI can drive efficiency, it can also lead to social isolation and mental health challenges.
The Limitations of AI and Artificial Intimacy
Do you remember what it felt like to use ChatGPT for the first time? Reactions such as surprise, amazement, and even fear are common. After all, the platform was one of the first to show, in an accessible way, the capability of machines to interact like humans—and, even more startlingly, with unmatched speed.
However, while AI can recognize basic human emotions by analyzing facial expressions—identifying signs of happiness or sadness, for instance—it cannot truly understand emotions the way we do. This is why inherently human skills, like empathy and compassion, remain beyond a machine’s reach. Because of this limitation, AI can create what is known as “artificial intimacy.”
This concept, introduced in the Aspen Institute's Artificial Intimacy report, refers to the ability of artificial intelligence systems to simulate emotional responses and create interactions that mimic human relationships. This is because various aspects of AI, such as a neutral voice and programmed empathic responses, can give the impression of human behavior, simulating a connection.
Psychotherapist Esther Perel explains that artificial intimacy may look and feel like closeness but lacks the depth and complexity of real human relationships.
"Artificial intimacy is mediated by technologies such as artificial intelligence and social networks, which offer a sense of closeness and connection, but in reality fail to reproduce the authenticity, vulnerability and reciprocity that are essential for true intimacy," she states.
Perel also points out that this type of artificial intimacy is marked by 'interrupted connections' and a 'half-distracted' presence, in which people are physically present but emotionally or psychologically absent.
She warns that the normalization of these interactions is lowering our expectations of what it really means to be connected to another person, which can have negative implications for the quality of human relationships.
"Our current focus on making everything smooth and polished, avoiding major obstacles and minor discomforts, has undoubtedly made our lives easier. However, these attempts to eliminate pain, inconvenience and discomfort are leaving us unprepared to deal with the complexity of human nature, love and life," she adds.
The Risks of Bias and Ethical Implications of AI
Another essential consideration when examining how AI affects human relationships is the fact that these tools are programmed by people. In other words, AI can reproduce and even amplify the biases and prejudices that already exist in society.
The consequence is that the reliance on artificial intelligence tools to make decisions and guide strategies can result in unfair choices, fraught with bias (in hiring processes or promotion reviews, for example).
Additionally, there is a concern that, if not carefully calibrated, AI systems could introduce new forms of discrimination on a large scale, especially against historically marginalized groups. This includes the irresponsible use of this technology in the workplace, which could result in the invasion of privacy and potentially excessive surveillance.
All of this has the potential to significantly impact human relations at work, generating distrust, anxiety and stress. Moreover, when automated systems perpetuate prejudice, they undermine diversity and inclusion efforts, hurting employee morale and well-being.
Finally, over-reliance on AI can lead to alienation, making employees feel that their unique, human skills are undervalued. Alessandra Cavalcante, co-founder of GoHuman, warns that focusing too heavily on technology at the expense of human relationships may also harm critical thinking.
"When we start accepting answers without questioning, interacting with artificial intelligence tools without hesitating, we run the risk of losing our analytical capacity and compromising our creative thinking, key skills pointed out by the World Economic Forum (WEF) in the Future of Jobs report. I believe that in addition to the ethical discussions inherent in the use of the tool, we need to think about how to continue stimulating critical thinking, the habit of questioning and learning," she reflects.
Rachel Goldgrob, co-founder of GoHuman, agrees with Cavalcante, noting that technology can end up being a barrier to authentic exchanges and interactions at work.
"As Yuval Harari says, humans are storytellers, and History is built through this exchange. When we lose that, we lose community learning. AI brings some challenges to this process, part of the development of human civilization. For example, it allows technology to create stories or ideas on its own. So we need to think about how we can make technology an ally, at the service of humans, and not the other way around, where we become hostages to it," she points out.
How can Artificial Intelligence be used Responsibly in Organizations?
Citing the Aspen Institute's report on artificial intimacy, Digital Transformation expert Andrea Iorio points out that when implementing artificial intelligence systems, organizations should not treat this technology as a "Big Brother" —an omnipresent authority manipulating behavior through surveillance and hyper-personalization. Instead, he suggests a “Big Mother” approach, where AI acts as a nurturing support, attuned to individual needs.
"Think about it: a mother knows your weaknesses, but helps you to overcome them; teaches you to make decisions more clearly; protects you from malicious people and never reveals your secrets; helps you to be the best human being you can be," he explains.
The idea is that AI should be an ally that complements human skills, especially soft skills, promoting an environment in which technology empowers people rather than alienating them.
How Can Relational Intelligence Make the Use of AI Less Harmful in Organizations?
Relational intelligence is the ability to understand, manage, and nurture relationships in a healthy way: recognizing and respecting individual differences, understanding power dynamics, and cultivating mutually beneficial connections.
By promoting a culture that values and encourages relational intelligence, the company ensures that the use of technology strengthens human relationships rather than weakening them, keeping ethics, empathy and well-being at the center of interactions.
Companies that value relational intelligence will be more aware of the social and emotional impacts of their AI decisions, ensuring that systems are non-discriminatory and respect the dignity of individuals.
Cavalcante and Goldgrob comment that they use this concept of relational intelligence in their consultancies to encourage exchanges between colleagues.
In a context where companies are struggling to balance the integration of AI with the facilitation of human relationships and technology-mediated interactions, they reinforce that it is important to establish clear agreements on the importance of authentic conversations.
"With the advance of technology, we run the risk of losing authenticity in relationships. One example is when you receive feedback that was generated by AI, it takes the authenticity out of the conversation and weakens the relationship. So technology can be used to make work easier, but we can't let it eliminate authentic conversations. We need to keep encouraging spaces for dialog and real interaction between people," says Goldgrob.
Cavalcante also comments that relational intelligence is a fundamental factor for human development, as it encourages exchange, debate, and empathy: things that AI cannot yet replicate.
"In ChatGPT, for example, if you say something is wrong, it simply replies 'Thanks, I'm learning'. This doesn't generate a discussion, a debate of ideas. The divergence of thought and the ability to debate and build something from different visions are essential to human development. Intellectual conflict offers us a broader view, considering different perspectives and the possibility of achieving new results. This exchange, which we can call the expansion of knowledge, is something we need to preserve, because machines haven't learned how to do it yet," she concludes.
GoHuman promotes discussions like this in executive teams and boards. Would you like to learn more about our work? Get in touch and find out how we can help.
Thank you for the invitation, Ana Reno. I am a little concerned about the work impacts. Humans adapt to change and find new ways to create value; we are hardwired to be purposeful, so we find a way. AI's power in the wrong hands is scary, but I leave that to the experts, as I can't tell truth from conspiracy theory. What I do know about is teams, leadership, and learning. Some illustrative thoughts:

- Amazing achievements happen in the world when diverse people collaborate, discuss, share, conflict, support, and power through together. Many people seem to think that, with AI, they won't need other humans. I worry about both the creative and the relational/mental implications of that.
- Most leaders already struggle to lead teams, create positive space, and realise the bonus of the collective over the individual in our hybrid working world. If they don't have the skills for now, how will they lead in the AI world? I worry about that.
- So much of what we learn that makes a difference doesn't happen in books and classrooms. It comes from human interaction, observation, and experience, especially for young people. They are also the group most likely to look for AI answers (and not question them). I worry about our brains and curiosity getting lazy.

Thoughts?