Could A Chatbot Ever Be Your Friend?

[Image: Looking up at the sky through the dome of the Pantheon in Rome]

When I was a student at McMaster-Syracuse MCM a few years ago, my thesis was focused on human/AI agent relationships and how they might affect communications and trust.

If you want, you can read a blog post summary of my research or download the full "My Best Friend is a Chatbot" capstone from the Institute for Public Relations.

I was thinking about that paper recently for a couple of reasons.

First, starting this weekend, I'm teaching a Digital Strategy and Emerging Trends course at the MCM's first in-person residency since 2019. Very exciting!

And second, because of all the hullabaloo around the possibility that a Google conversational AI might be conscious and understand people and the world around it. (Spoiler alert: it's not.)

If you aren't up to speed on what happened, here's a quick recap.

Blake Lemoine, a Google engineer, was testing the company's LaMDA AI model and came to believe it was sentient. He tried to convince his bosses his claims were true. They disagreed with his assertion, so he went public with a transcript of his chat with the AI on Medium and a feature story in the Washington Post describing his experiences and why he believed LaMDA was a living entity ... er, machine. He has since been put on administrative leave.

Lemoine's reasoning was vociferously shot down by numerous AI experts, including Louis Rosenberg, Emily Bender and Gary Marcus. The gist of their critiques is that current deep learning algorithms rely on large language models (LLMs) to function. LLMs parse vast amounts of text-based data, including Wikipedia, websites, social media posts, digital books and research. That means they're trained on both good and harmful content.

So while LLMs are getting better at putting together words and phrases, they're really just super-high-performing pattern spotters: a narrow artificial intelligence that could be likened to a talented Vegas impressionist. Close your eyes and you can hear the person they're imitating. Open them and you see it's all a show.
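To make that "pattern spotter" idea concrete, here's a toy word-prediction model in Python. It's my own illustration, orders of magnitude simpler than a real LLM: it just memorizes which word follows which in its training text and replays those patterns. The output can look fluent, but there is no understanding anywhere in the code.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record every observed word -> next-word pair in the training text."""
    model = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, start, length=6, seed=0):
    """Emit words by repeatedly sampling a successor seen in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break  # no pattern to follow; stop
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word the toy model produces is copied from a pattern it saw during training; scale that idea up by billions of parameters and you get something that sounds remarkably human while still, at bottom, doing statistics on text.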

Because Google's LaMDA was trained on both text and conversations, it's better than most AI models at giving answers people might expect. It's certainly a cut or 10 above most chatbots we've interacted with online that can barely respond and never remember any past encounters (data) we may have had with them.

But I have to admit when I read through the transcription Lemoine shared, it did sound pretty real. And it sent a shiver down my spine.

Are We That Gullible?

In a word, yes.

We're conditioned to believe in fictional scenarios, at least for short periods of time. Consider movies, TV, novels, plays ... The actors/characters pretend to be someone they're not and we get swept away by the story that's so vivid it often seems real.

So when an AI agent spews back the things we want to hear, we find ourselves in a situation where it's easy to be taken in. At least a little.

LaMDA shouldn't be considered conscious or general AI. It's closer to a magician's sleight of hand.

But while we don't need to be afraid of AI Terminators taking over just yet, we are at a fork in the road and need to be wary of a different twist.

That we'll be fooled into complacency by thinking an AI is something it's not. By always trusting it and what it says over humans.

By considering it your friend.

Back to My Chatbot BFF

In my master's research, I asked participants (computer scientists, researchers, journalists, digital communicators, entrepreneurs) to describe the ideal human-AI relationship.

Without prompting, each of them mentioned the film Her, starring Joaquin Phoenix and Scarlett Johansson as the voice of the operating system. In the movie, Phoenix falls for the OS, who seems caring and lifelike.

One respondent said she believed we're already in a relationship with our smartphones, and those devices could be the gateway to deeper human-AI encounters and trust.

Shades of LaMDA, perhaps?

Fast Forward to the Metaverse

All this is to say while AI is far from perfect, it's getting a lot better at giving us a reasonable response. Sometimes.

And if you consider the metaverse, it's not hard to imagine working alongside AI agents that look like us (deep fake faces), sound like us (synthetic voices) and can speak logically (LaMDA and NLG) and appear to understand what we say (relational AI).

Maybe one of them will be our boss!

And we need to understand the relationship we will have with those agents, and remember that no matter how reliable, supportive and friendly they are, they're still artificial.

And they're capturing data from us constantly, learning and adapting to what we say. So the interactions we'll have with them will be far from balanced.

There are no easy answers to questions around how to best navigate in this world.

But the table stakes must begin with transparency. That is, that machines and the companies behind them disclose what they are, who they represent and the data they collect.

We also require better and easier to use privacy settings we can turn on and off depending on our preferences and needs, and protection from bias and harm.

We'll need to improve our media literacy skills to make sure we're not taken in by disinformation and lies spread at machine speed.

And we'll have to develop even more awareness about who or what is genuine and deserves our trust. That means putting our egos aside and being open to human criticism as opposed to unbridled AI support.

One way to do that is to pay attention to what's on the horizon and the issues it may bring, and not get taken in by all the hype.

Looking to the Future

Future gazing is the topic of this week's Digital Marketing Trends video. I unpack some of the findings from the Future Today Institute's 15th annual Tech Trends Report.

From low-code or no-code AI, where almost anyone can build a model or chatbot, to workplace policies around avatars and how you represent yourself, to machine-based sentiment analysis, there's a lot to think about for the future of your workplace and customer interactions.

Check it out and let me know what you think.

Connect With Martin

Well, that's about all the time I have to chat in issue #57.

What do you think about human chatbot relationships? When would you trust a machine over a person (or would you)? Do you ever think you could consider an AI agent a 'friend'? Please share your thoughts in the comments below.

Be sure to reach out if you have questions about any of the videos in Digital Marketing Trends, or my other LinkedIn Learning courses.

And if you want to find me, follow me on LinkedIn or Twitter.

Or visit my website and send a message or a question.

And speaking of ideas, if you've got any you'd like me to cover in my next newsletter, be sure to send them my way.

Thank you again for reading and subscribing!

I wanted to let you know we're taking a short summer break and will be back on July 16.

We can continue the (human) conversation then!

Recommended reading: Nobel Prize-winning author Kazuo Ishiguro's latest work of genius, Klara and the Sun! Klara is a robot chosen to be an artificial friend to a sick girl. She develops feelings (but only positive ones) and empathetic behavior. Fascinating and eerie...

Mandla Mthombeni(MBA,PDBM)

Technology Lead @ WesBank | MVC, ASP.NET Razor

2 yr

This is a great

Thanks for sharing another good article, Martin. If I change your question "Could a chatbot ever be your friend?" to "Why can't chatbots think?", the easiest answer is: because they are not think bots. They are chat bots. We don't really know what "thinking" is, or at least how it arises from mechanical processes, and some even doubt if it arises from mechanical processes at all. We don't yet understand consciousness well enough to know whether bots will ever "think" in the way you mean "friend". Although maybe you meant "think" in a different sense. How could we possibly know that? Today most believe they have the intelligence of a rather dull newt!

Sanjiv Pandey

Brand Marketing & Hospitality Consultant(ex McDonald's ex Subway). Education Coach cum guide

2 yr

I find chatbots pretty handy; they cut down on unnecessary conversation. The only irksome part is when they come across as attempting to be mysterious, coy, or plain ridiculous. It's happened a few times. Makes one wonder if there are any guidelines for developers?

Tracy Gardner

Adventure Guide at Maverik, Inc.

2 yr

Actually more accurate 1,500 in the world 1 in USA But over 2 God bless you Ephesians Chapter Is one of my favorite chapters
