10 learnings on the safe use of AI

VUX World hosted a panel on ethics during the European Chatbot Summit in Edinburgh in 2023.

Kane Simms hosted the guest speakers Oksana Dambrauskaite, E-commerce Operations Leader at Decathlon UK, and Somnath Biswas, Head of Product – Conversations, Totaljobs Group.

So what came up in that discussion? Here are 10 hot takes!

1 – Nobody got fired, yet

It’s no secret that AI and humans have different skills. The best way to use conversational AI is to focus it on the tasks your human team does repetitively and that can be automated. This frees your human team to focus on the things humans do well, such as making deductions from a small dataset, empathising, and serving customers with complex needs.

It’s all about finding the right balance to get the best of both.

As Oksana said, “(AI) allows humans to do something different, with more added value. Something more interesting. Our employees now actually like it. That was one of the biggest concerns that we had.”

---

Tickets for #Unparsed, the world's first #conversationdesignconference, are on sale now!

It's taking place in London this July 24th and 25th, brought to you by VUX World and labworks.io and featuring some of the most renowned experts in the #conversationalai space.

Visit the Unparsed website for more information.

See you there!

---

2 – Watch out for biases with Generative AI

Among the swarms of posts on social media about ChatGPT, there have been people pointing out the ways this new tool has inherent biases. For example, when asked to attach an emoji to job titles, it suggested white males for all the C-suite roles, and white females for those working in HR and marketing.

Here’s the dilemma: while you may save time and cost with generative AI, you’re potentially making the user experience worse or damaging your brand. You could be offending users. Your brand could appear stuck in the past. There are risks attached.

As Oksana says, “there’s a bias that is already built in the model, in terms of the data that exists.”

Large Language Models (LLMs) such as ChatGPT are only as good as the data they were trained on. That data is historical – how can we have conversations with AI that reflect modern attitudes when it only expresses crowdsourced views from the past?
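One practical way to surface this kind of bias before launch is to audit a sample of generated outputs for demographic skew. Here's a minimal sketch, assuming you already have a batch of model outputs to check; the pronoun groups and sample texts are illustrative, not a complete bias audit:

```python
import re
from collections import Counter

# Illustrative pronoun groups used to flag gender skew in generated text.
PRONOUNS = {
    "masculine": {"he", "him", "his"},
    "feminine": {"she", "her", "hers"},
}

def pronoun_counts(texts):
    """Count masculine vs feminine pronouns across a batch of model outputs."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            for group, words in PRONOUNS.items():
                if word in words:
                    counts[group] += 1
    return counts

# Hypothetical model outputs for the prompt "Describe a typical CEO."
samples = [
    "He leads the board and his decisions shape strategy.",
    "She sets the vision; her team executes it.",
    "He reviews the quarterly results with his CFO.",
]
print(pronoun_counts(samples))
```

A lopsided count across many samples is a signal to investigate, not proof of bias on its own, but it's a cheap first check to run before anything ships.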

3 – Be aware of the laws

Two regulatory changes were mentioned during the panel – the European Union’s AI Act, and a new ruling in New York that affects algorithmic recruitment.

What can you take from this? We need to be accountable for our use of AI. We’re going to need to be able to audit our activities so that, when scrutinised, we can explain what happened and why.

What’s the issue? Well, we now have evidence that AI is getting very skilful at generating a response. It’s far less capable of saying why it said whatever it said. When the brand is ultimately responsible for whatever is said to customers (whether it’s a human-human conversation or human-bot), it needs to be able to rewind through a conversation to see why certain things were said.

Which leads us to number 4…

4 – Explore ‘chain of thought’ prompts

One way you can attempt to understand why AI came to certain conclusions is to use chain-of-thought prompting.

Consider this: GPT-3 has 175 billion parameters, and GPT-4 is widely believed to be far larger, though OpenAI hasn’t disclosed the figure. How convinced are we that even the creators of those models know what’s happening inside them?

When we’re relying on these models for advice, we need to be confident about what they say, especially in high-consequence use cases. When issues arise, we need to be able to drill down and understand where things went wrong so we can fix them.

So with chain-of-thought prompting, instead of asking for the final result straight away, you ask to see the reasoning as the model progresses from the initial request to the final result.
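As a concrete illustration, a chain-of-thought prompt simply asks the model to show its working before the answer. Here's a minimal sketch of constructing such a prompt; the wording and helper names are illustrative, and this only builds the text you would send to a model, it doesn't call one:

```python
def direct_prompt(question: str) -> str:
    """Ask for the final answer only."""
    return f"{question}\nAnswer with the final result only."

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to show its reasoning step by step before answering,
    so the steps can be audited later."""
    return (
        f"{question}\n"
        "Let's think step by step. Show each reasoning step on its own line, "
        "then state the final answer prefixed with 'Answer:'."
    )

question = "A refund of 20 GBP is split across 4 items. How much per item?"
print(chain_of_thought_prompt(question))
```

The value is in the audit trail: when an answer is wrong, the logged reasoning steps give you something concrete to review, rather than a bare final result.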

5 – Don’t be creepy

It’s possible to track a user’s progress on a website so that, if summoned, the bot knows what to talk about before the user even asks. That can reduce the user’s effort so their experience feels better, but for some it may feel creepy.

Oksana mentioned that Decathlon have a feature where live agents take control of a user’s screen, to help them with any issue they have on the Decathlon website. As Oksana says, “people used to freak out so much! Even though we were asking them ‘can we take control of your screen because you’re struggling to enter the postcode’, for example. Oh, it was creating so much panic!”

According to Somnath, “there’s a fine balance between being a stalker and personalising it.” People want a better experience but not to feel they’re being spied upon. We have the means to improve experiences – to help people do what they want with as little friction as possible. If our methods feel creepy then we’ve failed.

Decathlon learned fast, and changed their approach. As Oksana says, “we are really careful in how we explain to customers what we can actually do. So in this kind of scenario, we would very carefully introduce the topic to them. So first, we would ask, ‘we see you’re struggling – do you need any help?’, they would say, ‘yes, of course we need help.’ And then, ‘if you don’t mind, we will take control of your screen, we cannot access any of your devices, it is all done through connection virtually. So it is absolutely safe, your data is safe.’” She added, “I think it’s important that agents are able to explain it in a very user friendly way, human to human, so customers understand that you are there.”

6 – Be secure

Every single party that processes customer data must be compliant with the rules; otherwise you’re compromising that data’s security. And that’s that.

Decathlon will drop suppliers if they’re not up to the mark, and you probably should too. Standards must be very high when it comes to customer data.

7 – Don’t share customer data

According to Somnath, keeping customer data secure starts with ensuring that any third party involved won’t train their models on it. Then you can anonymise Personally Identifiable Information (PII) so that customer data is hidden from anyone who handles it.
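Here's a minimal sketch of that anonymisation step, using regular expressions to mask common PII patterns before data leaves your systems. The patterns and placeholder tokens are illustrative; production systems typically rely on dedicated PII-detection tooling rather than a couple of regexes:

```python
import re

# Illustrative patterns; real deployments cover many more PII types
# (names, addresses, account numbers, and so on).
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "<PHONE>"),
]

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

msg = "Contact jane.doe@example.com or +44 20 7946 0958 about order 1234."
print(redact(msg))
```

Redacting at the boundary, before the text reaches a vendor or a model, means downstream handlers only ever see the placeholder tokens.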

8 – Have control over your technology

This does present a challenge though – we occasionally need to use customer data in conversations, for example when authenticating users.

LLMs could potentially be utilised there – for example, if one were trained on a few user IDs, it could generate a multitude of fake ones. But could this be done safely?

According to Oksana, “It’s a tricky question, because it’s very new technology. And obviously, it has a lot of capacity, but to use it to the full capacity, the same as everything, it needs data. The question is where the data is going afterwards.”
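It's worth noting that generating synthetic identifiers, so real ones never leave your systems, doesn't strictly require an LLM. Here's a minimal sketch using only the standard library; the "USR-" ID format is a made-up example, not a real Decathlon or Totaljobs scheme:

```python
import random
import string

def synthetic_user_ids(n: int, seed: int = 42) -> list[str]:
    """Generate fake user IDs in a hypothetical 'USR-' + 8 alphanumerics
    format, for testing flows without exposing real customer identifiers."""
    rng = random.Random(seed)  # seeded so test fixtures are reproducible
    alphabet = string.ascii_uppercase + string.digits
    return [
        "USR-" + "".join(rng.choice(alphabet) for _ in range(8))
        for _ in range(n)
    ]

print(synthetic_user_ids(3))
```

Because the IDs are generated rather than sampled from production data, there is no real customer record behind any of them, which sidesteps Oksana's question about where the data goes afterwards.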

9 – Use enterprise-grade infrastructure

One very interesting point that came up is that ChatGPT is available both from OpenAI (the creators of the model) and Microsoft (one of OpenAI’s primary investors).

What this means is that while there may be uncertainty about where OpenAI stores data, Microsoft Azure can store data within the EU. That should help with GDPR compliance.

And also, to put it simply, many brands already trust Microsoft. According to Somnath, “with the contracts you have with Microsoft, there are guardrails put in place in terms of data privacy.”

10 – Keep your finger on the pulse

Can you believe it’s only been 8 months since ChatGPT was released to the public? Since November 2022 it’s cropped up everywhere. Everyone seems to be using it. People who said ‘I’d never trust a voice assistant because they’re always listening’ seemed to suddenly forget their inhibitions when they got chatty with ChatGPT. Interviews are being faked. Even governments have started banning it because it’s so unclear what threat this new tech represents to people’s privacy and data.

This relationship is going to evolve. Every interaction we have with LLM-based applications is building or eroding trust.

According to Oksana, the day will come when we trust this new technology. The same happened with Google Pay and Apple Pay; “It is pretty much clear for everyone that they can trust Google, they know that they can trust Apple. The same will surely happen at some point with technology like GPT3, GPT4 and GPT 25 probably in the future. It just takes a bit of time and it will become more structured. I really believe in that.”

The conversational AI industry is a major element in that relationship – we’re helping people to form their early relationships with technologies such as LLMs. There’s a responsibility on our shoulders to ensure it’s done safely, to the benefit of all.

Thanks to Oksana and Somnath for sharing their thoughts during the ethics panel!

---

This post was written by?Benjamin McCulloch, Conversation Designer and expert in audio production at VUX World. He has a decade of experience crafting natural sounding dialogue: recording, editing and directing voice talent in the studio. Some of his work includes dialogue editing for Philips’ ‘Breathless Choir’ series of commercials, a Cannes Pharma Grand-Prix winner; leading teams in localizing voices for Fortune 100 clients like Microsoft, as well as sound design and music composition for video games and film.

---

About Kane Simms

Kane Simms is the front door to the world of AI-powered customer experience, helping business leaders and teams understand why voice, conversational AI and NLP technologies are revolutionising customer experience and business transformation.

He's a Harvard Business Review-published thought-leader, a top 'voice AI influencer' (Voicebot and SoundHound), who helps executives formulate the future of customer experience strategies, and guides teams in designing, building and implementing revolutionary products and services built on emerging AI and NLP technologies.

