2 of the best approaches to AI automation for contact centres: Deep dive
Kane Simms
Conversational AI and CX Transformation | Strategic Consultancy | Podcast | Thought-leadership | Events
When beginning to consider how and where you can use AI within your contact centre, you’ll first need to define your strategy. Part of that is figuring out which of the 8 top contact centre AI use cases to start with. Then, you need to decide whether to go ‘high and wide’ or ‘narrow and deep’ with your use cases. This article will define what those two tactics are, explain how to decide on the right approach, and give step-by-step instructions for implementing each one.
It will cover:
- What a high and wide vs narrow and deep approach to AI automation is
- Whether you should start with a high and wide or narrow and deep approach
- How to start high and wide, then go narrow and deep
- How to start narrow and deep, then go high and wide
- Whether any of this changes with large language models
What is a high and wide vs narrow and deep approach to AI automation?
A high and wide approach is where you have an AI solution that has a wide understanding of customer needs. It's the first thing your customer will interact with when they call or reach out on any channel. It understands the breadth of things that your business does, and can understand which of those things your customer needs, based on their initial or first few utterances. Think of this as the front door to your business.
The job of a high and wide AI assistant is to understand what your user needs, then route the user to the right place to have their issue resolved, such as a self-service journey, or to the right agent/skill set that can help them.
A high and wide AI assistant won’t fully automate transactions; it will signpost and route users to the channels or people that can help.
Narrow and deep is where you have an AI solution that covers a specific use case from end-to-end. Think of this as transactional automation where your AI agent enables users to complete tasks and get things done. Checking the status of an order, changing their address, making a claim, freezing a bank card, booking an appointment, for example.
Over time, what you’ll find, and what you’ll aspire to, is a high and wide assistant that hands off to narrow and deep assistants. Think of this as a ‘master AI agent’ sitting on top of a series of task-based AI agents. We’ve written in the past about how to decide whether to use this master or multi-AI assistant approach.
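To make that ‘master AI agent on top of task-based AI agents’ idea concrete, here’s a minimal, hypothetical sketch in Python. The intent labels, the classify_intent stub and the handler names are illustrative assumptions rather than a reference to any particular platform: the high and wide layer only understands and routes, while the narrow and deep handlers own their journeys.

```python
# Hypothetical sketch: a 'master' assistant routing to narrow-and-deep task agents.
# The intents, handlers and classify_intent() stub are illustrative assumptions.

from typing import Callable, Dict

def handle_order_status(utterance: str) -> str:
    # Narrow-and-deep agent: would look up the order in the order system.
    return "Order-status journey started."

def handle_card_freeze(utterance: str) -> str:
    # Narrow-and-deep agent: would verify identity, then freeze the card.
    return "Card-freeze journey started."

def handoff_to_human(utterance: str) -> str:
    # No self-service journey exists yet: route to the right agent/skill set.
    return "Routing you to an advisor who can help."

# The high-and-wide assistant only needs a broad map of what the business does.
TASK_AGENTS: Dict[str, Callable[[str], str]] = {
    "order_status": handle_order_status,
    "freeze_card": handle_card_freeze,
}

def classify_intent(utterance: str) -> str:
    # Placeholder for an NLU model or LLM-based classifier.
    text = utterance.lower()
    if "order" in text:
        return "order_status"
    if "freeze" in text or "lost my card" in text:
        return "freeze_card"
    return "unknown"

def master_agent(utterance: str) -> str:
    """High and wide: understand the need, then route; don't automate the task itself."""
    intent = classify_intent(utterance)
    handler = TASK_AGENTS.get(intent, handoff_to_human)
    return handler(utterance)

print(master_agent("Where is my order?"))        # routed to the order-status journey
print(master_agent("I need to change my plan"))  # handed off to a human
```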
Should you start with a high and wide or narrow and deep approach?
The best way to decide your approach is to conduct a broader contact channel analysis. This will make sure that you’re covering all of your potential use cases and following our AI Design Principles by deciding the best approach based on data. Once you understand all of your use cases, you can prioritise the approach that will give you the most value.
Let’s look at some examples.
Having this kind of data will help you make the right decision on where to start. Regardless of where you do start, though, remember that this is a journey, and what we’re discussing here is how to take the right first step for you.
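As a rough illustration of how that kind of data might drive prioritisation, you could rank candidate use cases by a simple expected-value score, for instance contact volume × average handle time × estimated automation potential. The use cases, volumes and rates below are invented purely for the example.

```python
# Illustrative only: ranking candidate use cases by a rough expected-value score.
# Volumes, handle times and automation estimates below are invented for the example.

use_cases = [
    # (name, monthly contacts, avg handle time in minutes, estimated automation rate)
    ("Order status",       12000,  4.0, 0.70),
    ("Change of address",   3000,  6.0, 0.60),
    ("Make a claim",        1500, 12.0, 0.30),
    ("General enquiries",  20000,  3.0, 0.20),
]

def score(volume: int, aht_minutes: float, automation_rate: float) -> float:
    """Rough proxy for value: agent minutes that could be deflected each month."""
    return volume * aht_minutes * automation_rate

ranked = sorted(use_cases, key=lambda u: score(u[1], u[2], u[3]), reverse=True)
for name, volume, aht, rate in ranked:
    print(f"{name:20s} ~{score(volume, aht, rate):>8,.0f} deflectable minutes/month")
```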
Now let’s look at a step-by-step approach to implementing both strategies.
How to start high and wide, then go narrow and deep
If your plan is to start with a high and wide approach, routing contact effectively in order to figure out which areas you can go narrow and deep in, then here are the steps you should follow.
Benefits and drawbacks of high and wide, then narrow and deep approach
The benefits of working in this way are that:
The drawbacks of this framework are:
Who should use this approach?
This approach is ideal for those who:
This approach has been demonstrated by Homeserve and Marks and Spencer among others.
How to start narrow and deep, then go high and wide
If your approach is to begin with self-service use case automation, then, after you have enough AI agents providing enough value, you’ll want to put an agent across the top to route users into these AI conversations instead of to a human. If you’re going narrow and deep first, here are the steps you should take:
Benefits and drawbacks of the narrow and deep, then high and wide approach
The benefits of this approach are:
And the drawbacks of this approach are:
Who should use this approach?
You should use this approach if you already have customer data in your contact centre related to customer demand, i.e. what people are calling about. This means that you can pick out self-service use cases a lot more easily and begin delivering that business value sooner.
You should also use this approach if self-service is more of a priority, for example if you have a particular customer demand issue and routing isn’t as much of a problem.
This approach has been demonstrated by Vodacom and Utilita, among others.
Doesn’t this change with large language models?
You might think that, with the introduction of generative AI and large language models, this approach will change. Surely, the ‘AI’ will just handle all of it?
Erm, not quite.
The argument for large language models, and the reason why many might tell you not to worry about the above and just throw your data at an LLM using Retrieval Augmented Generation (RAG), is the following: in theory, if you have a boatload of data, like PDF documents, Word documents, website content, a knowledge base etc., then surely all the answers to all of your customer queries can be found in there?
With a large language model and a RAG setup, the alleged value is that you don’t need to worry about figuring out what your customers will ask you, as long as you have the data to answer it anyway.
‘Traditional’ NLU meant that you needed to do the opposite: define all of the conversations you want to have up-front, then find the training data to build the model, and then find the content to answer the questions. You’re curating training data and curating content to match.
With an LLM + RAG setup, you’re simply curating content and not worrying about what questions your users have. In theory, as long as you get the content right, all the answers should be in there somewhere.
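For illustration, here’s a minimal sketch of that LLM + RAG pattern, assuming a small in-memory knowledge base. The retrieval here uses simple TF-IDF similarity, and call_llm() is a placeholder standing in for whichever model API you use; a production setup would typically use embeddings and a vector store, but the shape of the pattern is the same: retrieve the relevant content, then let the model draft a grounded answer.

```python
# Minimal RAG sketch: curate content, retrieve the most relevant chunks,
# and let an LLM draft the answer. call_llm() is a placeholder assumption.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KNOWLEDGE_BASE = [
    "Our opening hours are 9am to 5pm, Monday to Friday.",
    "You can freeze your card instantly in the mobile app under 'Card settings'.",
    "Refunds are processed within 5 working days of receiving the returned item.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(KNOWLEDGE_BASE)

def retrieve(question: str, top_k: int = 2):
    """Return the top_k knowledge-base chunks most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [KNOWLEDGE_BASE[i] for i in ranked]

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (a hosted LLM API of your choice).
    return f"[LLM answer grounded in the provided context]\n{prompt[:120]}..."

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the customer's question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How do I freeze my card?"))
```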
While this may be the case, an LLM + RAG setup will get you the FAQ element of your assistant, and possibly signposting, if you implement it right. It won’t enable routing without falling back to rules, and it won’t enable transactions without the same, plus a lot more context management.
Therefore, if you view your automation use case as an end-to-end user journey, and your AI program’s goal as automating journeys, then you’ll realise that large language models play a role in that journey, but they’re not the whole journey. Remember, journeys are everything your user needs to know or do throughout their relationship with your brand.
Large language models are no reason to skip any steps in this guide.
If you skip these steps, what you’re likely to do is build an LLM + RAG bot that answers questions from your knowledge base. Then what? What happens when a user says ‘Yeah, go on then, I’ll book a room’? What happens when you need to develop an SMS-based use case or a call centre capability? You’re going to have a tech stack specifically crafted to do one thing and one thing only. Before you get to your second use case, you’re going to have to rethink your whole approach.
Large language models give us the capability to build our high and wide assistant, the front door, with much greater finesse. Today, they don’t give us the capability to automate those narrow and deep use cases reliably unless they’re combined with traditional machine learning and business rules.
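As a rough sketch of that combination (purely an assumption about how you might wire it), an LLM can do the ‘finesse’ part of understanding a free-text utterance, while deterministic business rules still decide the routing and drive the transactional steps. The extract_intent_with_llm stub, routing table and booking flow below are illustrative names, not a specific product’s API.

```python
# Hypothetical hybrid: an LLM handles understanding, business rules handle
# routing and the transaction itself. extract_intent_with_llm() is a stub.

def extract_intent_with_llm(utterance: str) -> dict:
    # In practice this would prompt an LLM to return a constrained intent
    # plus any entities; here it's stubbed for illustration.
    if "book" in utterance.lower() or "appointment" in utterance.lower():
        return {"intent": "book_appointment", "entities": {"day": "Tuesday"}}
    return {"intent": "unknown", "entities": {}}

# Deterministic business rules: where each intent is allowed to go.
ROUTING_RULES = {
    "book_appointment": "self_service_booking_flow",
    "make_claim": "claims_team_queue",
    "unknown": "general_queue",
}

def booking_flow(entities: dict) -> str:
    # Transactional steps stay rule-driven: slot filling, validation, backend calls.
    day = entities.get("day")
    if not day:
        return "Which day works best for you?"
    return f"Booking confirmed for {day}."  # a real flow would call the booking API here

def handle(utterance: str) -> str:
    nlu = extract_intent_with_llm(utterance)
    destination = ROUTING_RULES[nlu["intent"]]
    if destination == "self_service_booking_flow":
        return booking_flow(nlu["entities"])
    return f"Routing to: {destination}"

print(handle("Yeah, go on then, I'll book an appointment"))
print(handle("I want to complain about my bill"))
```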
Conclusion
There is no right or wrong answer for how to approach implementing AI agents in your contact centre. Companies have had great success using both approaches. You just need to work out what your business priorities are, how your AI strategy links to your wider transformation goals, and which use cases become a priority as a result. From there, you can decide which approach will best help you meet your needs.
For information on how we can help you formulate your AI strategy and identify your roadmap, technology requirements, team, resources and approach, consider reaching out to us for a free consultation.
Also, if you are already underway and would like some insight into your level of conversational AI maturity, take a free maturity assessment.
About Kane Simms
Kane Simms is the front door to the world of AI-powered customer experience, helping business leaders and teams understand why AI technologies are revolutionising the way businesses operate.
He's a Harvard Business Review-published thought-leader and a LinkedIn 'Top Voice' for both Artificial Intelligence and Customer Experience, who helps executives formulate future customer experience and business automation strategies.
His consultancy, VUX World, helps businesses formulate business improvement strategies, through designing, building and implementing revolutionary products and services built on AI technologies.