Kore AI's 3 layers of conversational AI

Everybody’s talking about Large Language Models (LLMs) today. But they don’t solve every problem. Like every great tool, they excel in certain areas, but they can’t do everything.

So how can we use them strategically in enterprise automation and conversational automation initiatives?

According to Raj Koneru, CEO of Kore.ai, there are three layers to enterprise conversational AI platforms. Understanding these layers will help you see the role of Large Language Models and generative AI in enterprise conversational automation strategies, and how they can be used effectively.

Core Platform

The top layer is the core platform: the stuff that any conversational AI technology stack needs to have; the bare bones of what it takes to build a bot. This layer consists of intent recognition, entity extraction, dialogue management, prompt generation, context management and business rules.

If you have a chatbot, then you already have these capabilities. If you have a voice bot, you’ll have these too, plus automatic speech recognition and text-to-speech components. And all require deployment channel integrations, as well as fulfilment integrations.

When you consider conversational AI vendors, there’s a good chance they will offer you a version of this top layer.

Tuned LLMs

Underneath the core platform capabilities, you have the first example of where an LLM can be used. For Kore AI, this is a tuned open source LLM with embeddings.

This means you can use this LLM for tasks like intent classification, where it could perform more accurately than a general purpose LLM or intent model, making it better suited to your specific business use cases.

These tuned LLMs enable you to use one-shot or few-shot training methods to enhance their ability to understand users.
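To make the few-shot idea concrete, here is a minimal sketch of a few-shot prompt for intent classification. The intents, utterances and helper function are invented for illustration; the point is simply that a handful of labelled examples in the prompt can steer an LLM toward your domain.

```python
# A hypothetical few-shot prompt: two labelled examples followed by the
# utterance we want classified. Intent names are illustrative only.
FEW_SHOT_PROMPT = """Classify the user's intent.

Utterance: "I'd like to move money to my savings account"
Intent: transfer_funds

Utterance: "Why was I charged twice for my subscription?"
Intent: billing_dispute

Utterance: "{utterance}"
Intent:"""

def build_prompt(utterance):
    """Insert the live utterance into the few-shot template."""
    return FEW_SHOT_PROMPT.format(utterance=utterance)
```

The completed prompt would then be sent to the tuned LLM, which continues the pattern by emitting an intent label.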

They can be used instead of your core intent-based NLU system, as an enhancement to it, or as a fallback, i.e. utterances are devolved to the tuned LLM when confidence from the traditional NLU is low.
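The fallback option can be sketched as a simple confidence check. Everything here is an illustrative assumption, not a vendor API: the classifiers are stand-in functions and the 0.7 threshold is arbitrary.

```python
# Hypothetical confidence-based fallback: use the traditional NLU's answer
# when it is confident, otherwise devolve the utterance to the tuned LLM.
CONFIDENCE_THRESHOLD = 0.7  # illustrative; tune per deployment

def classify_intent(utterance, nlu_classify, llm_classify):
    """Return (intent, source) for an utterance."""
    intent, confidence = nlu_classify(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent, "nlu"
    # Low confidence: fall back to the tuned LLM.
    return llm_classify(utterance), "tuned_llm"

# Stand-ins for real classifiers, for demonstration only:
def mock_nlu(utterance):
    return ("check_balance", 0.92) if "balance" in utterance else ("unknown", 0.31)

def mock_tuned_llm(utterance):
    return "report_fraud"  # pretend the LLM handled a tricky utterance
```

In practice the threshold is a key tuning knob: set it too high and the LLM handles traffic the NLU could have resolved more cheaply; too low and genuinely ambiguous utterances never reach the fallback.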

General Purpose LLM

At the base of the stack, Kore AI has general purpose LLMs (OpenAI’s GPT-3 API plus some open source LLMs), trained on world data. These are the most generic of all LLMs.

The benefits of general purpose LLMs are that they’re ready to use straight out of the box with no additional training. They just require specific prompt engineering, most of which is actually handled under the hood by the core platform.

The general purpose LLM in this stack is used as a fallback and a last resort. If the Core Platform can’t classify something, and the Tuned LLM can’t either, the request devolves to the General Purpose LLM.
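The three-tier routing described above can be sketched as a simple cascade. This is a hypothetical illustration of the pattern, not Kore.ai's implementation: each layer is modelled as a callable that returns a result or `None` when it can't classify the utterance.

```python
# A minimal cascade over the three layers: core platform first, tuned LLM
# second, general-purpose LLM as a last resort. Layer interfaces are
# assumptions for illustration.
def cascade(utterance, core_platform, tuned_llm, general_llm):
    """Try each layer in turn; return (result, layer_name)."""
    layers = [(core_platform, "core"),
              (tuned_llm, "tuned_llm"),
              (general_llm, "general_llm")]
    for layer, name in layers:
        result = layer(utterance)
        if result is not None:
            return result, name
    return None, "unhandled"
```

The ordering matters: the cheapest, most predictable component answers first, and the most generic (and least controllable) one is only consulted when everything else has passed.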

Considerations when using LLMs in conversational AI tech stacks

Something to be aware of is that LLMs are only as good as the data they’re trained on. General purpose LLMs trained on world data aren’t tuned for your business.

According to Raj, when you use them, you have very little control over the security and privacy of the data you send to an LLM. They’re unpredictable too – they’re constantly updated, so you might discover they’re working completely differently tomorrow from how they did today.

That presents a risk if you’re planning to build a conversational system with LLMs front and centre.

Managing risk

These risks can be managed, and Kore AI’s approach of not putting all the emphasis on the LLMs is smart. Having a suite of LLMs working in parallel means they get used when they’re the best fit for the job at hand. None of them is the top dog, and there isn’t an over-reliance on an emerging and rapidly evolving technology.

Benefitting from today, thinking of tomorrow

Those LLMs on the bottom layer are constantly changing. While ChatGPT gets all the headlines, and the underlying GPT-3 APIs get integrated into every technology platform known(!), other LLMs are readying their next releases. GPT-4 is just around the corner, for example.

Then, there are great open source LLMs available that can be tuned with embeddings, as we’ve highlighted here. These can already perform better for your specific use cases, and will continue to evolve, too.

This approach isn’t the only way that you could use LLMs in a conversational AI technology stack, and as the technology continues to evolve, we’re confident that further use cases will evolve and emerge with it.

However, Kore AI lays out a sensible methodology for harnessing the power of LLMs today, whilst maintaining the integrity of tried and tested technologies and frameworks, and leaving the door open for tomorrow’s possibilities.


We’ll be diving into more detail about the role of Large Language Models in conversational AI in our upcoming webinar. Register here.


For more information on Kore.ai, you can book a demo with the team or book a free consultation.


This article was written by Kane Simms and Benjamin McCulloch.

About Kane Simms

Kane Simms is the front door to the world of AI-powered customer experience, helping business leaders and teams understand why voice, conversational AI and NLP technologies are revolutionising customer experience and business transformation.

He's a Harvard Business Review-published thought-leader, a top 'voice AI influencer' (Voicebot and SoundHound), who helps executives formulate the future of customer experience strategies, and guides teams in designing, building and implementing revolutionary products and services built on emerging AI and NLP technologies.
