Insight of the Week: How to use ChatGPT for Customer Service

By Kerry Robinson

Let's face it: ChatGPT probably sounds a lot smarter than your IVR or website chatbot, right?

So the obvious next question is: how do I get ChatGPT to help my customers instead of the clunky old systems we have now?

Here's how...

1) Prompt Engineering

When you interact with ChatGPT via the OpenAI website, what you're doing, technically speaking, is 'prompting' a Large Language Model (LLM for short - check out my video on LinkedIn if you want to know more about LLMs, and while you're there, hit Follow so you don't miss future videos, or shoot me a connection request so we can get to know each other!)

But what you put into the ChatGPT website isn't exactly what the LLM sees. Behind the scenes, OpenAI takes the text you enter and forms it into a 'prompt' that includes the history of the conversation you've had so far and, although they aren't open about it, probably a bunch of other things that help ensure the response you get is relevant, concise, and so on. This is called 'prompt engineering' - the process of figuring out what text you need to put into the prompt to get the kind of output you, or your customers, want.

This is an exciting new area of research. The way we prompt language models has a big impact on the quality of what we get back. Prompts need to be specific, and they need to guide the model on how best to answer the question - often providing a step-by-step plan helps enormously (a technique known as 'chain-of-thought' prompting).
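To make that concrete, here's a minimal sketch (in Python) of how a prompt might be assembled behind the scenes. The instruction text, message roles, and helper name are illustrative assumptions, not OpenAI's actual internals:

```python
# Sketch: assembling a prompt from an instruction, conversation history,
# and the new question. The exact text OpenAI injects is not public;
# this is an illustrative guess at the general shape.

def build_prompt(history, user_question):
    """Combine a system instruction, prior turns, and the new question
    into the message list a chat-style LLM API typically expects."""
    system_instruction = (
        "You are a helpful, concise customer-service assistant. "
        "Think step by step before giving your final answer."  # the step-by-step nudge
    )
    messages = [{"role": "system", "content": system_instruction}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_question})
    return messages

prompt = build_prompt(
    history=[("Hi, I have a billing question.", "Sure - what would you like to know?")],
    user_question="Why was I charged twice this month?",
)
```

Each new turn gets the whole history stitched back in, which is why a long conversation "remembers" earlier messages - and why it eats up the prompt budget.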

We're constantly experimenting, testing, and learning in this area. And sometimes what works is quite surprising: praising a model and telling 'it' (whatever 'it' is) that it's a fantastic teacher will cause it to give better responses than if you prompt the model with a derogatory tone. Seriously. There was a whole study done on it!

But prompt engineering has its limitations. There's only so much information you can put into a prompt, because the combined size of the model's input and output is limited (it's called the 'context window', and for ChatGPT it's 4,000 tokens, or about 3,000 words). So you couldn't, for example, put your whole website or knowledge base into a prompt.
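A rough way to think about that limit: estimate the token count and trim your extra context to fit. The four-characters-per-token rule of thumb below is a crude approximation for English text (real systems use a proper tokenizer), and the budget numbers are illustrative:

```python
# Rough token budgeting for a ~4,000-token context window.
# ~4 characters per token is a common rule of thumb, not a real tokenizer.

CONTEXT_WINDOW = 4000
RESERVED_FOR_OUTPUT = 500  # leave room for the model's reply

def estimate_tokens(text):
    return max(1, len(text) // 4)

def trim_to_budget(question, context_snippets):
    """Keep adding context snippets until the estimated budget runs out."""
    budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT - estimate_tokens(question)
    kept = []
    for snippet in context_snippets:
        cost = estimate_tokens(snippet)
        if cost > budget:
            break  # this snippet won't fit; stop stuffing
        kept.append(snippet)
        budget -= cost
    return kept
```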

The answer is to put only relevant information in the prompt. And for that we can use a Vector Database (an area that is attracting huge sums of venture capital right now).

2) Vector Databases and in-context learning

ChatGPT was trained on most of the internet. And a bunch of other stuff. But its training didn't include any data beyond September 2021. So if you ask it to quote a stock price, it probably won't get it right. And if you ask it about your returns policy or claims process, it might quote somebody else's.

The solution to this is to add information into the prompt. Put the user's question in, along with other relevant context - like your company's policies. But you've got that word limit to consider. And remember, the cost of each request to the ChatGPT API depends on the amount of text you put in, and how much you get out.
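As a worked example of that pricing model (the per-token rate below is illustrative - check OpenAI's current price list for real numbers):

```python
# Illustrative API cost calculation: you pay for tokens in AND tokens out.

PRICE_PER_1K_TOKENS = 0.002  # example rate in USD; real pricing varies by model

def request_cost(input_tokens, output_tokens):
    return (input_tokens + output_tokens) / 1000 * PRICE_PER_1K_TOKENS

# A 3,000-token stuffed prompt plus a 500-token answer = 3,500 billable tokens.
cost = request_cost(3000, 500)
```

Fractions of a cent per request - but multiply by millions of customer contacts and the prompt-stuffing strategy starts to matter.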

To get around that you can first look for relevant information that might help answer the question, and then you can prompt ChatGPT, or another LLM, with the customer's question and a bunch of relevant documents, or even relevant snippets from documents.

That's where vector stores, or vector databases, come in. They store large quantities of information in a way that allows you to retrieve it based on semantic similarity. It's not a simple keyword search - they use the same 'embeddings' models that LLMs use, so they reliably select relevant information from the database. And then you can stuff that into the prompt ('stuff' is actually the technical term for that!). If you want to learn more about embeddings, check out my email on the magic and mystery of embeddings (I wrote that before ChatGPT, so the focus is more on how platforms like Google Dialogflow use embeddings for call routing).
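Here's a toy sketch of that retrieve-then-stuff step. Real vector databases use learned embeddings and approximate nearest-neighbour search at scale; the tiny hand-made vectors and document texts below are stand-ins that just illustrate cosine-similarity retrieval:

```python
import math

# Toy semantic retrieval: score documents by cosine similarity to the
# question's embedding, keep the best matches, and stuff them into the prompt.
# The embeddings here are hand-made stand-ins for a real embeddings model.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(question_vec, documents, top_k=2):
    """documents: list of (text, embedding) pairs; returns top_k texts."""
    ranked = sorted(documents,
                    key=lambda d: cosine_similarity(question_vec, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

docs = [
    ("Returns are accepted within 30 days with a receipt.", [0.9, 0.1, 0.0]),
    ("Our head office is in Tulsa, Oklahoma.",              [0.0, 0.2, 0.9]),
    ("Refunds go back to the original payment method.",     [0.8, 0.3, 0.1]),
]
question_embedding = [0.85, 0.2, 0.05]  # pretend an embeddings model produced this

snippets = retrieve(question_embedding, docs)
stuffed_prompt = ("Answer using ONLY this context:\n"
                  + "\n".join(snippets)
                  + "\n\nQuestion: What is your returns policy?")
```

The off-topic head-office line scores poorly and never makes it into the prompt - that's the whole trick.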

With prompt engineering and vector databases, you can create quite a useful website chatbot. In fact, I'm doing just that with this email series. Yep - in a week or two, you'll be able to ask a chatbot on the Waterfield website all about AI, and you'll get answers based on this email series.

That's the power of Generative AI!


Kerry Robinson is an Oxford physicist with a Master's in Artificial Intelligence. Kerry is a technologist, scientist, and lover of data with over 20 years of experience in conversational AI. He combines business, customer experience, and technical expertise to deliver IVR, voice, and chatbot strategy and keep Waterfield Tech buzzing.

Subscribe to Kerry's Weekly AI Insights
