Industry Leaders Share Their Vision for AI

During Unparsed 2024 we asked 10 CAI professionals to tell us what they’re working on. They come from a range of companies, from startups with a single person covering their CAI projects to international banks, retailers and healthcare companies with complex and vast teams.

We asked them what’s working and what isn’t. We wanted to find the lay of the land – to understand what’s really going on beyond the hype and headlines. There were many great insights that we want to share with you.

Read on to see what’s what, what’s hot and what’s not in CAI right now!




The usage of CAI is extremely broad

This is what we hoped to see! The conversations assistants are having with each brand’s customers are diverse. Every organisation has different needs, and each serves customers who, most importantly, have unique needs of their own. It seems that companies have zeroed in on answering those customer needs rather than prioritising the business’s needs first.

Some of the companies we spoke to are still getting started. We saw they had their eyes wide open about the need to define what their solution was going to solve before they built anything. This is something we recommend all businesses do, so it was great to see it happening!

We were particularly impressed by Laya Healthcare’s mature approach. They started with internal use cases: call summarisation in the contact centre, and a knowledge base to train new agents quickly (which is evolving into a copilot for all agents). These use cases are a great place to start as they’re low risk, yet are undoubtedly a boon to the organisation. What was so smart about Laya Healthcare’s approach is that they’re strategically enlarging their data lake, building their infrastructure, and training their team so that they have these resources up their sleeves when they eventually do create a customer-facing solution. It’s a fantastic approach.

We heard about a lot of different AI assistants too: voicebots to help with electricity meter readings, chatbots to answer FAQs and reduce the load on contact centre agents, and more. Each of these has different needs to fulfil, and each person we spoke to knew that they had to create a bespoke solution to meet those needs. Bravo!

The challenges teams face are even broader!

Oh boy! When we asked about challenges, we got a variety of responses!

While everyone was aware of the need to create a solution that solves customer needs, the practice of discovering those needs was itself considered a challenge. What do people really need, rather than what they say they need? Chris Miles, Group Product Lead – Chat, Bot & AI, Lebara summed up the discovery process best: “the more you put in, the more you get out.”

Teams are finding challenges in localisation, customer authentication on voice channels, overcoming “bot amnesia” (the inability of assistants to remember context between conversations), and giving products the time and attention they need when they’re in tiny teams.
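To make “bot amnesia” concrete: one common workaround is to persist a small store of facts per customer between conversations, so the assistant can recall them instead of re-asking. Here’s a minimal sketch of that idea; the file name, user ID and stored fields are all illustrative, not any particular platform’s API.

```python
# Minimal sketch: persisting conversation context between sessions,
# so the assistant can recall earlier facts. All names are illustrative.
import json
from pathlib import Path

STORE = Path("memory_store.json")

def load_memory(user_id: str) -> dict:
    """Return previously saved context for a user, or an empty dict."""
    if STORE.exists():
        return json.loads(STORE.read_text()).get(user_id, {})
    return {}

def save_memory(user_id: str, context: dict) -> None:
    """Merge and persist a user's context across conversations."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data.setdefault(user_id, {}).update(context)
    STORE.write_text(json.dumps(data))

# First conversation: the customer mentions their tariff.
save_memory("user-42", {"tariff": "unlimited-5G"})

# A later conversation can pick the fact back up instead of re-asking.
remembered = load_memory("user-42")
```

Real deployments would use a proper database and respect retention and privacy rules, but the mechanism is the same: write facts out at the end of one conversation, read them back at the start of the next.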

There were some gripes about platforms – each has its own unique limitations that are hard to unearth before you’ve committed to one platform and started to build with it. As Elle Jefferys from Three UK said, “you don’t know enough until you start using a platform. And you don’t quite know what limitations you are going to face. I would suggest researching what different platforms can offer. Get the free version and try it yourself, and figure out roughly what you would need, and think about how it will perform down the line in your roadmap.”

All teams have their needs for capturing, cataloguing, accessing and understanding data too. For Yell, this challenge is amplified. They need to ensure that business records are kept up to date constantly (while in the background, brand names change, businesses move or close, people change jobs, and so on). It’s a huge task, but those records must be kept up to date so that you and I can find a plumber to fix broken pipes at 3am.

Crucially, AI knowledge among stakeholders was frequently mentioned as a blocker. Specifically, there’s a lack of understanding among teams about what can actually be done with AI, beyond the hype. Ben Wylie, Global Digital & Global IVR Programme Owner, Worldpay summed it up succinctly: “they often have too much caution about AI, or they see it as a silver bullet. The reality is somewhere in the middle.”

The ChatGPT effect

You could say a lot of the excitement around AI began with ChatGPT. Its launch caused widespread cultural change. Interviewees noticed the tone towards AI in their organisations changing dramatically after that moment, as many people started to see the value in generative AI. Businesses have noticed that conversations with machines can dramatically enhance their work – if they use them well.

That’s the thing. Many of the people we spoke to said that stakeholders aren’t very confident about how to use it well yet. When we asked about the level of AI interest at their organisation, it’s unsurprising that they all reacted positively (they were attending a CAI conference, after all).

However, the level of interest ranged from one extreme to the other. There were those who said their company was just testing the waters. There were some who said that while there was interest, their hardest challenge is pitching new AI projects to their team-mates. There were some who said they need to ground the AI dreams of their team-mates in reality. A few said that AI was at the core of the business, and frequently used.

Analyse this

Every interviewee saw the value of analytics for automated conversations, as it allowed them to understand where conversations were failing customers and find improvements.

The extent of each company’s analysis varied, though, and issues were mentioned. One company switched off live chat when their agents didn’t have the capacity to respond, which meant they weren’t collecting enough data to analyse whether their support was actually working.

Most focused on CSAT, and some augmented that data with extra insights gleaned from transcript reviews, or checking whether customers contacted again after interacting with their CAI solution (which would suggest that AI hadn’t resolved their need), or whether the user actually read the response a chatbot gave, or where they clicked on the website.
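That re-contact signal is simple to compute if you have a log of customer contacts. As a rough sketch (the data, channel names and seven-day window below are our own illustrative assumptions, not any interviewee’s setup), you can flag each bot conversation that was followed by another contact from the same customer within a window:

```python
# Minimal sketch: flagging customers who contacted again within 7 days
# of a bot conversation — a rough proxy for an unresolved need.
from datetime import datetime, timedelta

contacts = [  # (customer_id, channel, timestamp) — illustrative data
    ("c1", "bot",   datetime(2024, 7, 1, 10)),
    ("c1", "agent", datetime(2024, 7, 3, 9)),   # re-contacted: not resolved
    ("c2", "bot",   datetime(2024, 7, 1, 11)),  # no follow-up: likely resolved
]

def recontact_rate(contacts, window=timedelta(days=7)):
    """Share of bot conversations followed by another contact in `window`."""
    bot_chats = [(cid, ts) for cid, ch, ts in contacts if ch == "bot"]
    recontacted = 0
    for cid, ts in bot_chats:
        follow_ups = [c for c in contacts
                      if c[0] == cid and ts < c[2] <= ts + window]
        if follow_ups:
            recontacted += 1
    return recontacted / len(bot_chats)

rate = recontact_rate(contacts)  # 0.5 here: one of two bot chats re-contacted
```

It’s a blunt metric – a follow-up contact might be about something unrelated – which is why the teams we spoke to paired it with CSAT and transcript reviews.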

It was great to see how Lebara deals with analytics. They recently rolled out their mobile plans in 5 different locales. Chris focuses on one locale at a time, working with his in-country teams to try to push the needle there. By focusing on each locale in turn, Chris has oversight into what’s working in different countries and can apply these learnings across the board if they’re relevant. It’s a solid strategy for improving performance without trying to raise the tide in all countries at once, which could be like trying to herd cats in a thunderstorm! It’s important not to rush to act on your analytics. Take the time to understand before you act.

Chris’ approach is grounded, and should lead to genuine improvements. Again, this highlights the different approaches being taken by each brand, and how they’re each finding their own ways to solve their challenges.

The ground shifting under our feet

As for where they planned to go next, a few companies were still at the strategy stage, seeking to define their long-term vision before they built solutions.

LLMs were mentioned by most interviewees. But before we get to language models, it’s worth pointing out that some were confident they didn’t need to drop their old system and replace it with a generative assistant. As Fabio Sarti, Sr. AI/ML Product Manager, Scotiabank pointed out, they already have an NLU-based system that is working and stable. Why replace it when it isn’t broken? He said they saw the value of LLMs, so would look to find genuinely useful applications for them to augment the successful NLU-based assistant they already have.

With LLMs, there was a lot of interest in RAG applications. Many of the interviewees expected that they would continue to research and develop LLM-based solutions, but they were cautious about issues such as hallucinations. Kudos to Katja Laptieva, Voice-UI & Conversational Design Expert, E.ON Germany, who said she looks forward to a day when LLMs become ‘boringly truthful and predictable’. We’ve had a lot of LLM hype. We’re excited because they can be applied usefully in various ways (such as intent recognition or response generation). The thing is that we practitioners need resources which give us consistency and accuracy all the time. Inconsistencies and unexpected detours should be a design feature that we can choose to turn on or off as desired, rather than an unfortunate fly in the ointment.
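For readers less familiar with RAG, the pattern is simple: retrieve passages relevant to the question, then instruct the model to answer only from those passages, which is exactly what makes it attractive to teams worried about hallucinations. Here’s a toy sketch of the shape of it – the documents, the word-overlap retriever and the `call_llm` stand-in are all our own illustrative assumptions, not any vendor’s SDK (real systems use embedding-based search):

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages, then
# ground the model's answer in them. The retriever here is a toy
# word-overlap scorer; `call_llm` is a hypothetical stand-in for any
# generation API.
documents = [
    "Roaming is free in the EU on all our pay-monthly plans.",
    "You can top up your prepaid balance in the app or by SMS.",
    "Our contact centre is open 8am to 8pm, Monday to Saturday.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many query words they share."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved context to curb hallucination."""
    context = "\n".join(passages)
    return (f"Answer using ONLY this context:\n{context}\n"
            f"Question: {query}\nIf the answer isn't in the context, say so.")

passages = retrieve("Is roaming free in the EU?", documents)
prompt = build_prompt("Is roaming free in the EU?", passages)
# answer = call_llm(prompt)  # hypothetical generation call
```

Grounding doesn’t eliminate hallucinations, but it narrows the space the model can wander into – a step towards Katja’s ‘boringly truthful’ ideal.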

All of them are continuing to develop their solutions, with some looking to scale up to more channels.

Nuggets of brilliance

We spent hours chatting with these gifted practitioners from our field, and were blown away by their focus and creative spark. We’re in a challenging space. They’ve shown the wealth of strategies we can apply, though. You don’t always need to improvise – there are tried and tested techniques. Here are some more of the great insights we heard during our chats.

Giorgos Tserdanelis, User Experience Design Lead, JPMorgan Chase & Co was keen to point out that there’s good reason to keep old solutions alive if they still provide value. While we could have a more engaging and dynamic conversation with an NLU- or LLM-based assistant, if you have an IVR system that works, it can be a worthwhile channel to keep online alongside your AI assistant. If enough customers still want to use it, then you should keep it.

Philip Corcoran, AI Solution Architect, Laya Healthcare had a great idea for showing stakeholders the functionality of CAI. His team built a simple webpage where teammates could enter an utterance and see how their bespoke LLM matched it with an intent. This can help in many ways: to show the team how their system works, to prove its stability, and even to help with debugging if issues are found. There’s no reason why this approach can’t be applied to an NLU-based assistant too, for a rough-and-ready window into the assistant’s inner workings.
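To illustrate the mechanism such a demo page exposes (not Laya Healthcare’s actual implementation – their matcher is a bespoke LLM), here’s a deliberately tiny intent matcher using word overlap; the intents and example phrases are made up:

```python
# Minimal sketch of the "show stakeholders how matching works" idea:
# score an utterance against example phrases per intent and show the
# winner. A real system would use an NLU engine or an LLM; this toy
# word-overlap scorer just makes the mechanism visible.
INTENTS = {
    "check_balance": ["what is my balance", "how much credit do I have"],
    "report_fault":  ["my internet is down", "the line is not working"],
}

def match_intent(utterance: str) -> tuple[str, int]:
    """Return (best_intent, score) by counting words shared with examples."""
    words = set(utterance.lower().split())
    best, best_score = "fallback", 0
    for intent, examples in INTENTS.items():
        score = max(len(words & set(e.lower().split())) for e in examples)
        if score > best_score:
            best, best_score = intent, score
    return best, best_score

intent, score = match_intent("how much credit is left?")
```

Wrap something like this in a one-page web form and teammates can type utterances and watch the scores – which is exactly the kind of transparency that builds stakeholder confidence.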

Ben Wylie, Global Digital & Global IVR Programme Owner, Worldpay was inspired by some of the talks on linguistics at Unparsed. He loved the insights into how ‘unsaid words also have meaning’ in conversations.

We’ll leave the last word to Natalie Foo, Experience Designer, John Lewis, because she had us in stitches when she said, “I’ve learned that you can make LLMs less crazy!” It’s a brutally frank realisation gleaned from the epic 2024 edition of Unparsed. LLMs that are less crazy may be the one thing our industry has been hoping for most – we’re glad the fantastic speakers, sponsors and attendees at Unparsed are showing how we can get there!

Special thanks to:

Natalie Foo, John Lewis

Ben Wylie, Worldpay

Philip Corcoran, Laya Healthcare

Giorgos Tserdanelis, JPMorgan Chase & Co

Aaron Brace, Yell

Katja Laptieva, E.ON Germany

Elle Jefferys, Three UK

Fabio Sarti, Scotiabank

Matthew Lee, Vivo Barefoot

Chris Miles, Lebara
