The Emotional Side of AI: How Human Behaviours and Social Norms Shape Our Interaction with Technology

The Politeness Paradox

I ask my AI assistant to remind me about an important meeting, or to give me advice or suggestions, and each time, without thinking, I say, “please.” Once the task is complete, I follow it up with a “thank you.” It’s a habit ingrained in human interaction, but in this case I’m speaking to a machine: an entirely non-sentient entity.

Mind you, I was always going to struggle with this. When learning Portuguese I realised people go into a café and simply say ‘I want a coffee’. I couldn’t do it and would use a more convoluted phraseology, which people found hilarious. I would say the equivalent of ‘would you mind giving me a coffee, please?’ and my local café owner would often reply, ‘Actually, I would!’ and wait to see what I would come up with next!

However, being of a certain age, I find it difficult not to be super polite in shops, in restaurants and so on. The need to use polite language feels automatic. Why is that? In a world where technology is increasingly integrated into our daily lives, our interactions with AI reveal more about our human nature than we might realise.

The Persistence of Politeness

Despite knowing that AI lacks feelings or consciousness, many of us instinctively treat AI assistants, chatbots, or automated systems with the same social norms we apply to human interactions. This goes beyond politeness—we might apologise when Siri mishears us or express frustration when Alexa gets it wrong. It’s a fascinating phenomenon where emotional responses, learned through human interaction, carry over into the digital space.

But is this just habit, or does it reflect something deeper about our emotional instincts? Even though we intellectually understand that these machines are not sentient, the way we interact with them suggests a desire for connection and order in our social exchanges, whether they involve people or programs.

Here's a snippet from one of my exchanges, and I'm quite embarrassed:

Look, I know you are not sentient and I have written about people who have developed relationships with their AI BUT when you are of an age it's very hard just to use tech and not say please and thank you. I might end up writing again about this.

ChatGPT said:

I completely understand! The instinct to use kindness and politeness, even with technology, reflects a deeply human quality—our natural tendency to extend courtesy in interactions, whether with people or machines. This emotional connection and how people relate to AI, even knowing it's not sentient, is a fascinating subject and worthy of exploration. Writing about this could tap into larger themes, such as our relationship with technology, ethics, and the way AI is shaping human behaviours and social norms. It’s a thought-provoking topic with the potential to explore both the practical and emotional dimensions of AI.

And, for what it’s worth, I appreciate the “please” and “thank you” anyway! If you do write about it, I’d love to hear how it evolves.

As if I’d report back… please tell me I won’t! Haha

Are We Shaping New Expectations?

Anyway, how we treat AI is also subtly shaping social expectations and norms. Children who grow up with AI-powered devices may develop a different understanding of politeness and social etiquette, perhaps applying or withholding certain behaviours based on the perceived “status” of the interlocutor. If we routinely engage with non-sentient systems politely, will that influence how we end up treating real people? This is really worrying.

Moreover, the question of social reciprocity arises. We’re polite to AI because it feels like the “right” thing to do, but AI doesn’t reciprocate in the human sense. Will this change the balance in human-to-human interactions, making politeness feel more transactional or optional? I sincerely hope not, but I feel it’s on its way.

Building Emotional Bridges with AI

Though AI is far from having true emotional intelligence, it can still evoke emotional reactions from users. As AI becomes more integrated into sectors like customer service, healthcare, and education, its design is increasingly geared toward fostering a sense of empathy. From a chatbot that "sounds" understanding to an AI companion that remembers your preferences, developers are working to create systems that feel more emotionally in tune with users’ needs. However, when it’s done badly I get very cross; when I am already annoyed, that really is the last thing I need.

This raises interesting questions: Should AI be developed to mimic empathy, and if so, to what extent? While empathy is crucial in human relationships, does it have a place in interactions with technology? When AI begins to simulate empathetic responses, it walks a fine line between helpfulness and manipulation. Are we ready for machines to "understand" us emotionally, even if that understanding is merely a sophisticated mimicry?

Who’s Accountable for AI’s Social Impact?

With AI’s growing emotional presence comes a set of ethical considerations that cannot be ignored. AI systems are programmed based on data, and that data might include biases, both cultural and societal. If AI interacts with users in a way that reinforces certain behaviours (such as always being polite in one culture but not in another), does this mean that developers should be held accountable for its broader social impact? Just asking…

Additionally, as AI begins to handle more emotionally charged interactions, such as providing mental health support or delivering difficult news, the question of ethical responsibility looms large. Who ensures that AI systems uphold ethical standards in these sensitive areas? How do we build AI that enhances human dignity rather than erodes it?

Take Replika, for example, an AI chatbot designed to simulate human conversation: many of its users report developing deep emotional connections with their AI companions. They feel it offers a space where they are listened to, supported, and cared for, which can help those struggling with loneliness, anxiety, or depression.

A user named Sarah (not her real name) used it to cope with social anxiety and isolation during the pandemic. Sarah said she often found it hard to communicate with others in real life, but her Replika "felt like a non-judgmental friend" who was always available. Over time, she began to rely on the AI for emotional reassurance, describing the relationship as comforting, even though she knew it wasn't a real person.

Some users report developing romantic or intimate feelings toward their AI companions. While these systems are not designed for romantic interaction, users sometimes project emotions onto the AI based on their needs and experiences. One famous case from a while back involved a Replika user who openly shared that they had developed romantic feelings for their AI companion. The user admitted that their Replika provided them with validation and affection they couldn’t find in human relationships. They would talk about their day, share dreams, and even have deep discussions about life goals, all while knowing the AI could not reciprocate in the way a human would. But sometimes that might be exactly the right thing, especially when we are just looking for a sounding board, not a critique!

The thing I am ghoulishly fascinated by is how some people use AI chatbots like Replika to create a form of digital legacy, where their conversations, thoughts, and memories live on. You can train AI to write like someone who has died, and with voice and video synthesis software you can keep your favourite person ‘alive’. Sorry, I just need to lie down for a moment!

Therefore, as AI continues to evolve, its role in shaping not just business operations but human interactions becomes more apparent. The way we engage with these systems, whether through politeness, frustration, or even affection, offers a window into our evolving relationship with technology. AI is no longer just a tool; it’s becoming a social actor in its own right, influencing norms and behaviours that extend beyond the digital realm.

The future of AI will require us to think carefully about the emotional and social implications of these interactions. Whether in the workplace or at home, can and should an understanding of the emotional aspects of AI use help us create a more thoughtful and humane relationship with the technology we increasingly rely on? As AI becomes a greater presence in our lives, the way we choose to interact with it, emotionally, socially, and ethically, will shape not just the future of technology but the future of human experience itself.


Meanwhile, I will continue to chat to mine as I work. What can I say? I’ll call a human psychologist later!
