2 of the best approaches to AI automation for contact centres: Deep dive

When beginning to consider how and where you can use AI within your contact centre, you'll first need to define your strategy. Part of that will be figuring out which of the 8 top contact centre AI use cases to start with. Then, you need to decide whether you go 'high and wide' or 'narrow and deep' with your use cases. This article will define those two tactics, explain how to decide which approach is right for you, and give step-by-step instructions for implementing each.

It will cover:

  1. What is a high and wide vs narrow and deep approach?
  2. Should you start high and wide or narrow and deep?
  3. Steps to follow for a high and wide strategy
  4. Benefits and drawbacks of a high and wide approach
  5. Steps to follow for a narrow and deep strategy
  6. Benefits and drawbacks of a narrow and deep approach
  7. Why large language models and generative AI don’t change this approach

What is a high and wide vs narrow and deep approach to AI automation?

A high and wide approach is where you have an AI solution that has a wide understanding of customer needs. It's the first thing your customer will interact with when they call or reach out on any channel. It understands the breadth of things that your business does, and can understand which of those things your customer needs, based on their initial or first few utterances. Think of this as the front door to your business.

The job of a high and wide AI assistant is to understand what your user needs, then route the user to the right place to have their issue resolved, such as a self-service journey, or to the right agent/skill set that can help them.

A high and wide AI assistant won't automate transactions fully; it will signpost and route users to the channels or people that can help.

Narrow and deep is where you have an AI solution that covers a specific use case from end-to-end. Think of this as transactional automation where your AI agent enables users to complete tasks and get things done. Checking the status of an order, changing their address, making a claim, freezing a bank card, booking an appointment, for example.

Over time, what you'll find, and what you'll aspire to, is a high and wide assistant that hands off to narrow and deep assistants. Think of this as a 'master AI agent' sitting on top of a series of task-based AI agents. We've written in the past about how to decide whether to use this master or multi-AI assistant approach.

Should you start with a high and wide or narrow and deep approach?

The best way to decide your approach is to conduct a broader contact channel analysis. This will make sure that you're covering all of your potential use cases and following our AI Design Principles by deciding the best approach based on data. Once you understand all of your use cases, you can prioritise the approach that will give you the most value.

Let’s look at some examples.

  1. Let’s say you do some digging and find that you have high transfer rates on your IVR. Then, you might consider a high and wide approach to more effectively route callers to the right agent.
  2. Perhaps you look at your use cases and realise that you already have a bunch of self-service journeys that cover these use cases and work pretty well in your app or on your website. Here, you might decide to go high and wide and use AI to direct users to those existing self-service options.
  3. Maybe you have a high volume of calls about a particular topic that you strongly suspect can be automated, and that doing so would bring sufficient value. Perhaps then you go narrow and deep on this use case and seek to automate it from end-to-end.

Having this kind of data will help you make the right decision on where to start. Regardless of where you do start, though, remember that this is a journey, and what we’re discussing here is how to take the right first step for you.

Now let's look at step-by-step instructions for implementing both strategies.

How to start high and wide, then go narrow and deep

If your plan is to start with a high and wide approach, routing contact effectively in order to figure out which areas you can go narrow and deep in, then here are the steps you should follow.

  1. Design. Use your contact channel analysis as outlined in our AI strategy definition article to gather training data that describes, in your customers' own language, what they're trying to achieve. This will give you the foundational data you need to train an NLU model or fine-tune a language model to understand your callers' intent.
  2. Train. Train an initial language model to recognise an intent based on the caller's first utterance. Given that you already have lots of data, I'd recommend beginning with an ML-based NLU system and testing it to make sure you have a decent level of accuracy. If you don't have the data you need to train this model from Step 1, you could consider using a large language model either to generate the data for you, or to stand in as a classifier for your V1 MVP (see the classifier sketch after this list).
  3. Soft launch. Launch this language model into your IVR to a small percentage of callers, and don't have it conduct routing initially. Just have it listen for customer intents, and then escalate to an agent as you normally would. Here, what you are doing is testing whether customer utterances are being accurately classified in a low-risk environment. What makes this low risk is that you're not acting on the data. You're just listening to see whether you're classifying correctly. From here, you can refine your model until it reaches an accuracy level that is acceptable for you.
  4. Launch. Now you can begin rolling it out to a wider number of callers, and implement escalations based on intent or skillset. Here, rather than escalating all customers to the same agent pool, you are using the intent of the customer to route the customer to the appropriate department, team or skill set. This is where you will begin to see your first phase of value, removing internal call transfers and potentially reducing wait times.
  5. Signpost. If you have some effective self-service journeys already in place on other channels, now is where you'll begin to signpost callers to those channels. The most effective way of achieving this is to trigger the sending of an SMS message with a link to the appropriate online resource (see the SMS sketch after this list).
  6. Improve. Monitor the traffic and traction of each of your intents in order to refine and improve accuracy over time, and to prioritise where your opportunities are to introduce your self-service AI agents.
  7. Narrow and deep. Using your most common intents, or the use cases that would add the most value to your business, begin designing those end-to-end automated conversations. You can then deploy these AI agents in your IVR, taking the place of a signpost or escalation.
  8. Scale. As you begin to add more and more self-service AI agents, you'll be escalating to your human agents less and into your AI agents more. You'll also likely improve the resolution rate of the use cases that are currently being signposted by turning them into conversations. One conversation at a time, you'll scale these conversations into the IVR, until you arrive at your high and wide AI agent being the front door to your narrow and deep agents, with your human agents filling in the gaps.
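To make the Train and Soft launch steps concrete, here is a minimal sketch of an ML-based intent classifier running in shadow mode: it predicts an intent for each first utterance and logs it, but every call is still escalated exactly as it is today. The intent labels, example utterances and logging setup are illustrative assumptions, not a reference to any particular platform.

```python
# Minimal sketch: train a first-utterance intent classifier and run it in
# shadow mode (classify and log, but don't change routing yet).
# Intent names and example utterances are illustrative assumptions.
import logging

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data gathered from your contact channel analysis
# (call transcripts, chat logs), in your customers' own language.
utterances = [
    "I want to check where my order is",
    "has my parcel been dispatched yet",
    "I need to change my delivery address",
    "can I update the address on my account",
    "I'd like to cancel my subscription",
    "please stop my monthly plan",
]
intents = [
    "order_status", "order_status",
    "change_address", "change_address",
    "cancel_subscription", "cancel_subscription",
]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(utterances, intents)

logger = logging.getLogger("intent_shadow_mode")

def handle_first_utterance(call_id: str, utterance: str) -> str:
    """Classify the caller's first utterance, log it, then escalate as normal."""
    predicted_intent = model.predict([utterance])[0]
    confidence = model.predict_proba([utterance]).max()
    # Shadow mode: we only record what we *would* have done.
    logger.info("call=%s intent=%s confidence=%.2f utterance=%r",
                call_id, predicted_intent, confidence, utterance)
    return "escalate_to_default_agent_pool"  # routing is unchanged for now
```

Once the logged predictions look accurate enough for your risk appetite, the return value is the only thing that needs to change: route on the predicted intent instead of the default agent pool.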
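For the Signpost step, the trigger can be as simple as sending an SMS deep link when a recognised intent maps to an existing self-service journey. This sketch assumes the Twilio Python SDK purely as an example; the phone numbers, credentials and URLs are placeholders, and any SMS gateway your telephony platform supports would work just as well.

```python
# Minimal sketch: signpost a caller to an existing self-service journey by SMS.
# Assumes the Twilio Python SDK; numbers, credentials and URLs are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # your SMS provider credentials

# Hypothetical mapping from recognised intent to an existing self-service page.
SELF_SERVICE_LINKS = {
    "order_status": "https://example.com/track-my-order",
    "change_address": "https://example.com/account/address",
}

def signpost_by_sms(caller_number: str, intent: str) -> bool:
    """Send the caller a link to the self-service journey for their intent."""
    link = SELF_SERVICE_LINKS.get(intent)
    if link is None:
        return False  # no journey to signpost; continue with normal routing
    client.messages.create(
        to=caller_number,
        from_="+441134960000",  # your SMS-enabled number (placeholder)
        body=f"Thanks for calling. You can sort this out online here: {link}",
    )
    return True
```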


Starting point. A 'high and wide' AI agent that routes calls to self-service channels and live agents.


End result, after implementing 'narrow and deep' use cases that follow.

Benefits and drawbacks of the high and wide, then narrow and deep approach

The benefits of working in this way are that:

  • You develop a broad understanding of all of your customer needs right from the off and instil a data-driven approach to CX, in line with our AI Design Principles.
  • You base all of your automation strategy off the back of genuine customer demand.
  • You begin to make life easier for agents by reducing the need for them to make internal transfers.
  • Fewer call transfers can mean an increase in first-time resolution and potentially lower average handle time.

The drawbacks of this framework are:

  • Increased time to value. Building a language model first and implementing routing before automating self-service use cases means that it takes longer to get to the self-service delivery part.

Who should use this approach?

This approach is ideal for those who:

  1. Already suffer from internal routing issues, struggle to get customers to the right kind of agent with the right skill sets, and have a lot of internal call transfers.
  2. Do not have any data, such as call recordings or transcripts, to profile contact drivers and prioritise self-service automation use cases.

This approach has been demonstrated by Homeserve and Marks and Spencer among others.

How to start narrow and deep, then go high and wide

If your approach is to begin with self-service use case automation, then, after you have enough AI agents providing enough value, you'll want to put an agent across the top to route users into these AI conversations instead of to a human. If you're going narrow and deep first, here are the steps you should take:

  1. Plan. Utilise the call centre data you already have to profile where most of your customer demand is coming from. Once you have prioritised your customer needs, you can select your self-service use cases for your roadmap and find the right place to start. Check out our AI strategy definition article for deeper insights into this.
  2. Design. Begin with the use case that will bring the most consistent customer and business value, and design and implement an end-to-end self-service conversation. Follow the VUX framework to make sure you build the thing right.
  3. Implement. Keep your current IVR set-up (most likely DTMF) as it is, and implement your self-service AI agent behind the IVR menu. This way, when your customer presses 5 to request a refund (for example), they will hit your self-service AI instead of being escalated to an agent (see the routing sketch after this list).
  4. Improve. Continually improve that self-service digital assistant whilst, in parallel, designing and implementing your next self-service use case(s). Follow the same process, implementing one self-service agent at a time behind your IVR menu.
  5. Unite. Once you have enough self-service AI agents sitting behind your IVR, you can then use your existing use cases to build an orchestration AI that acts like a voice-enabled IVR. This means that customers who call can explain what they need, then be escalated to one of your self-service AI agents. Failing that, they will be escalated to another self-service option or your live agents.
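To illustrate the Implement and Unite steps, the sketch below keeps a DTMF menu as the front door, places a self-service AI agent behind one option, and shows how an orchestration layer can later route on recognised intent instead. The handler names and menu layout are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch: a DTMF menu with one self-service AI agent behind option 5,
# and an orchestration layer that later routes on spoken intent instead.
# Handler names and the menu layout are illustrative assumptions.

def refund_ai_agent(call):
    """End-to-end self-service conversation for refunds (narrow and deep)."""
    ...  # multi-turn dialogue, backend integration, confirmation

def escalate_to_live_agent(call, queue="general"):
    """Hand the call to the appropriate queue on your contact centre platform."""
    ...

# Phase 1: keep the existing DTMF menu; only option 5 changes behaviour.
DTMF_MENU = {
    "1": lambda call: escalate_to_live_agent(call, queue="sales"),
    "2": lambda call: escalate_to_live_agent(call, queue="billing"),
    "5": refund_ai_agent,  # was an escalation, now a self-service AI agent
}

def handle_dtmf(call, digit: str):
    handler = DTMF_MENU.get(digit, escalate_to_live_agent)
    return handler(call)

# Phase 2 ("Unite"): an orchestration layer maps recognised intents to the same
# self-service agents, falling back to live agents when there is no match.
INTENT_ROUTES = {
    "refund_request": refund_ai_agent,
}

def orchestrate(call, recognised_intent: str):
    handler = INTENT_ROUTES.get(recognised_intent)
    if handler is None:
        return escalate_to_live_agent(call)
    return handler(call)
```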

Starting point for the 'narrow and deep' approach. One AI agent behind traditional IVR menu.


End result. Orchestration AI agent over the top of a series of self-service AI agents.

Benefits and drawbacks of the narrow and deep, then high and wide approach

The benefits of this approach are:

  • Decreased time to value. By implementing self-service assistance from the off, you are potentially delivering substantial business value sooner.
  • Decreased wait times. Rather than having to wait to speak to someone, customers can get their issue resolved immediately.
  • Efficiency. Handling calls from end-to-end means that fewer calls reach live agents, improving overall efficiency and scalability potential.

And the drawbacks of this approach are:

  • Increased complexity. These kinds of use cases are typically more complex because they require multi-turn conversations, and therefore more sophisticated language models and context management.
  • More up-front resource. Because of the increased complexity, typically these projects require collaboration from different parts of the business, especially where you need to integrate into other systems.

Who should use this approach?

You should use this approach if you already have customer data in your contact centre related to customer demand, i.e. what are people calling about? This means that you can pick out self-service use cases a lot more easily and begin delivering that business value sooner.

You should also use this approach if self-service is more of a priority: if you have a particular customer demand issue, say, and routing isn't as much of a problem.

This approach has been demonstrated by Vodacom and Utilita, among others.

Doesn’t this change with large language models?

You might think that, with the introduction of generative AI and large language models, this approach will change. Surely the 'AI' will just handle all of it?

Erm, not quite.

The argument for large language models, and the reason why many might tell you not to worry about the above and just throw your data at an LLM using Retrieval Augmented Generation (RAG), is the following: in theory, if you have a boatload of data like PDF documents, Word documents, website content, a knowledge base etc., then surely all the answers to all of your customer queries can be found in there?

With a large language model, and a RAG set up, the alleged value is that you don’t need to worry about figuring out what your customers will ask you, as long as you have the data to answer it anyway.

'Traditional' NLU meant that you needed to do the opposite: define all of the conversations you want to have up-front, then find the training data to build the model, and then find the content to answer the questions. You're curating training data and curating content to match.

With an LLM + RAG set up, you’re simply curating content and not worrying about what questions your users have. In theory, as long as you get the content right, all answers should be in there somewhere.

While this may be the case, LLMs + RAG will get you the FAQ element of your assistant, and possibly signposting, if you implement it right. It won’t enable routing without falling back to rules and won’t enable transactions without the same, plus a lot more context management.
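To make the RAG pattern concrete, here is a minimal sketch: retrieve the passages from your curated content that are most relevant to the question, then ask a language model to answer using only those passages. The TF-IDF retriever is a deliberately simple stand-in for an embedding-based vector search, and call_your_llm is a hypothetical placeholder for whichever LLM provider you use.

```python
# Minimal RAG sketch: retrieve relevant content, then ask an LLM to answer
# using only that content. TF-IDF stands in for a vector database, and
# call_your_llm() is a hypothetical placeholder for your chosen LLM provider.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Curated content: knowledge base articles, website copy, policy documents.
DOCUMENTS = [
    "Refunds are processed within 5 working days of the item arriving back with us.",
    "You can change your delivery address in the app until the order is dispatched.",
    "Our opening hours are 8am to 8pm, Monday to Saturday.",
]

vectoriser = TfidfVectorizer()
doc_matrix = vectoriser.fit_transform(DOCUMENTS)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    scores = cosine_similarity(vectoriser.transform([question]), doc_matrix)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [DOCUMENTS[i] for i in top_indices]

def call_your_llm(prompt: str) -> str:
    """Placeholder: call your LLM provider's chat/completions API here."""
    raise NotImplementedError("Plug in your LLM provider of choice.")

def generate_answer(question: str, passages: list[str]) -> str:
    """Answer the question using only the retrieved passages."""
    context = chr(10).join(passages)
    prompt = (
        "Answer the customer's question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_your_llm(prompt)

# Example usage:
# answer = generate_answer("How long do refunds take?",
#                          retrieve("How long do refunds take?"))
```

Notice that nothing in this pattern decides where a call should go or completes a transaction; it only finds and rephrases content, which is exactly the FAQ and signposting role described above.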

Therefore, if you view your automation use case as an end-to-end user journey, and your AI programme's goal as being to automate journeys, then you'll realise that large language models play a role in that journey, but they're not the whole journey. Remember, journeys are everything your user needs to know or do throughout their relationship with your brand.

Large language models are no reason to skip any steps in this guide.

If you skip these steps, what you're likely to do is build an LLM + RAG bot that answers questions from your knowledge base. Then what? What happens when a user says 'Yeah, go on then, I'll book a room'? What happens when you need to develop an SMS-based use case or a call centre capability? You're going to have a tech stack specifically crafted to do one thing and one thing only. Before you get to your second use case, you're going to have to rethink your whole approach.

Large language models give us the capability to build our high and wide assistant, the front door, with much greater finesse. Today, they don't give us the capability to automate those narrow and deep use cases reliably without being combined with traditional machine learning and business rules.
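As an example of that combination, a pragmatic pattern today is to let the LLM do what it is good at, turning a messy first utterance into a structured intent, while deterministic business rules decide what actually happens next. The intents, confidence threshold and routing table below are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: an LLM turns the caller's utterance into a structured intent,
# then deterministic business rules decide what actually happens next.
# The intents, threshold and routing table are illustrative assumptions.

ALLOWED_INTENTS = {"order_status", "make_claim", "freeze_card"}

def classify_with_llm(message: str) -> dict:
    """Hypothetical LLM call returning e.g. {"intent": "freeze_card", "confidence": 0.92}.

    In practice this would prompt your chosen model to pick exactly one of
    ALLOWED_INTENTS and reply as JSON, then parse that reply.
    """
    raise NotImplementedError("Plug in your LLM provider of choice.")

def route(message: str) -> str:
    result = classify_with_llm(message)
    intent = result.get("intent")
    confidence = result.get("confidence", 0.0)

    # Business rules, not the model, decide what actually happens.
    if intent not in ALLOWED_INTENTS or confidence < 0.7:
        return "escalate_to_live_agent"    # safe default
    if intent == "freeze_card":
        return "freeze_card_ai_agent"      # narrow and deep, fully automated
    if intent == "order_status":
        return "send_tracking_sms"         # signpost to existing self-service
    return "escalate_to_live_agent"
```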

Conclusion

There is no right or wrong answer when it comes to how you approach implementing AI agents in your contact centre. Companies have had great success using both approaches. You just need to work out what your business priorities are, how your AI strategy links to your wider transformation goals, and which use cases become a priority as a result. From there, you can decide which approach will best help you meet your needs.

For information on how we can help you formulate your AI strategy and identify your roadmap, technology requirements, team, resources and approach, consider reaching out to us for a free consultation.

Also, if you are already underway and would like some insight into your level of conversational AI maturity, take our free maturity assessment.


About Kane Simms

Kane Simms is the front door to the world of AI-powered customer experience, helping business leaders and teams understand why AI technologies are revolutionising the way businesses operate.

He's a Harvard Business Review-published thought-leader and a LinkedIn 'Top Voice' for both Artificial Intelligence and Customer Experience, who helps executives formulate the future of customer experience and business automation strategies.

His consultancy, VUX World, helps businesses formulate business improvement strategies, through designing, building and implementing revolutionary products and services built on AI technologies.

  1. Subscribe to VUX WORLD newsletter
  2. Listen to the VUX World podcast on Apple, Spotify or wherever you get your podcasts
  3. Take our free conversational AI maturity assessment



Alexandra Karasic

Marketing Director at Parlance | Women Leaders of Conversational AI, class of 2023


What a great article, Kane Simms! I enjoyed reading it.

Tim Friebel

Experience Innovation | AI for CX | Customer Journey


Great info per usual, Kane Simms. I couldn't agree more that you first need to understand your use cases and take a data driven approach to prioritization. You need returns from those first successes to sustain an on-going automation transformation program.

Frank Schneider

Vice President, AI Evangelist @ Verint | Crafting AI Strategy


Great walk through, Kane Simms. More of these please. Was that podcast really six years ago?

Joshua Schechter

Leading AI Journeys & Transformations


Great analysis Kane Simms! When I work with enterprises and their call centers, we try to combine these approaches. We want the high and wide approach for NLU and understanding. . . and then we want to go narrow and deep for the use cases/automations that will result in ROI. But without the high and wide in the beginning you may actually limit how much learning the AI can do.

Anand Bodhe

Helping Online Marketplaces and Agencies Scale Rapidly & Increase Efficiency through software integrations and automations


sounds like you've got some solid strategies lined up! what’s the first approach?
