Do NOT Deploy THAT Chatbot!
[Image: Adobe Firefly]


But DO deploy THIS chatbot! One of the most touted applications of generative AI and LLMs is the chatbot. Ever since ChatGPT first hit the scene, organizations of all types have been keen to use chatbots to better serve customers. While LLMs and the applications built on them are improving quickly and continuously, deploying a customer-facing chatbot still carries significant risks. In this blog, I'll lay out an alternative, human-augmented approach that is far less risky yet still quite powerful, taking the concepts from a prior blog further. So, do NOT deploy THAT chatbot … but DO deploy THIS chatbot!


The Risks Of A Customer-Facing Chatbot

I’ve written in the past about hallucinations and why they are unavoidable: they are a feature, not a bug, of generative AI and LLMs! Aside from the risk of hallucinations being sent back to a customer, it is also very tricky for an LLM to parse and understand the complex questions a customer may have. A question might also require access to multiple systems to retrieve the proper answer. I like to use the example of asking for a hotel’s availability, rate, and cancellation policy. You want factual answers looked up in the underlying systems, not a probabilistic guess, regardless of whether it is a person or a chatbot making that guess.

Another risk is that some people will try to game your chatbot. Examples abound of people tricking chatbots into agreeing to things no person would. There is a great story about a customer negotiating a new car from a dealer’s chatbot for only $1, with the chatbot literally committing to “no taksies backsies”! All this goes to say that deploying a customer-facing chatbot that answers customers directly is not a great idea as of today.


Keep The Ingestion, Drop The Response

Instead of having a chatbot answer a customer’s questions directly, you can make highly effective use of chatbots by streamlining your question-resolution process. This involves both initial ingestion of customer requests and generation of possible responses for your human agents.

We all get tired of the minute-long message we’re forced to listen to before we can make a selection: “Press 1 for payments, press 2 for warranty claims…”. Customers waste time before making their pick. Instead, ask customers what they need and let them explain verbally. Then have your chatbot interpret the request and determine where best to route the customer.
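This routing step can be sketched in a few lines of Python. The department names and keywords below are hypothetical examples; in production, the classification would be a short LLM prompt over the caller's transcript, while simple keyword scoring stands in here so the sketch stays self-contained and runnable.

```python
# Hypothetical sketch: interpret a free-text caller request and pick the
# best department, replacing the "press 1 for payments" menu.
DEPARTMENTS = {
    "payments": ["payment", "bill", "invoice", "charge"],
    "warranty": ["warranty", "repair", "defect", "broken"],
    "reservations": ["availability", "rate", "cancellation", "booking"],
}

def route_request(transcript: str) -> str:
    """Return the department that best matches the caller's own words."""
    text = transcript.lower()
    scores = {
        dept: sum(word in text for word in keywords)
        for dept, keywords in DEPARTMENTS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to a general queue when nothing matches
    return best if scores[best] > 0 else "general"

print(route_request("I was double charged on my last bill"))  # payments
print(route_request("Do you still honor my warranty?"))       # warranty
```

The key design point is that the caller speaks naturally once, and the system, not the caller, does the work of mapping that request onto your internal org chart.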

The chatbot can also summarize the customer’s request for the agent, along with suggested answers based in part on one or more retrieval-augmented generation (RAG) searches. What if the customer needs to talk to two or more departments? No problem! The chatbot can tell the first agent to pass the customer to a second agent when done. It’s a better experience for all.

Inherent in the above is that the chatbot IS taking a stab at getting customers their answers. However, the response is routed to a human agent rather than to the customer. When the agent can validate that the answer is sufficient, it can be delivered quickly. Otherwise, the agent can do more research. The worst case is that the agent spends as much time as they otherwise would have on the call; on most calls, time will be saved.


Human-Augmented Chatbots: Maximizing Functionality, Minimizing Risk

By focusing on human-augmented chatbots, you can get the most functionality with the lowest risk. To the extent your chatbot is highly successful at providing proposed resolutions to the human agent, those agents can quickly handle each customer and move on to the next. While interacting with the customers, the agents will also be able to identify other potential needs to increase satisfaction, which adds value to the customer beyond just an automated response from the chatbot. To the extent your chatbot is struggling, human agents can override it and do some of the work they otherwise would have done. They can also provide feedback on the chatbot’s responses so that the models can continuously improve.

In the end, does a customer care whether their answer comes from a chatbot, a person, or a chatbot-assisted person? No! They just want answers, and many will still want the ability to talk to a person. An internally facing chatbot gives customers almost all the advantages of an externally facing one, but with the human touch of an agent still involved.

All it takes is to tweak the stated goal from “implement a chatbot to interact with customers” to “implement a chatbot that improves customer interactions”. A slight change in semantics can make all the difference in terms of feasibility, risk, and success!

Daniel Graham

Technical Marketing, Independent Consultant, DBA

5 days ago

Excellent insight into #AI #LLM #chatbot use.

Ali Khalid

Heading Data Quality @ Emirates Airline | Transforming Analytics | Award winning speaker

3 weeks ago

This makes a lot more sense, usually chatbots are irritating, hardly mature enough to make the experience helpful.

Hassan Wasim

Offshore Engineering, AI Solutions | Computer Vision Solutions | & GCC Setup Specialist | Solutions Architect | Delivering 55% Cost Reductions | CodeNinja.

3 weeks ago

Totally agree! It's all about using chatbots to make conversations better, not just to handle them.

Jim Ryan

Business Intelligence Analyst | Python | R | SQL. I help companies to reduce costs by 10% through the use of analytics

3 weeks ago

I have not heard this perspective before but it makes a lot of sense. You can't trust all the answers you get from a LLM.

Susan Mathews Hardy

Mentor of Impactful Research and Clear Presentations | Marketer of Data Scientists | Senior Lecturer

3 weeks ago

I loved reading this. I have had fun learning about prompt engineering this semester from Xuelei (Sherry) Ni and this article is timely for her class. Thanks Bill Franks!
