Pursuing a Responsible AI Chatbot Interface

Chatbots are among the most widespread applications of Generative AI, ranging from general-purpose platforms like ChatGPT to highly specialized solutions tailored by companies for specific industries. Businesses are racing to deploy chatbots in various domains—especially in customer service—to enhance quality and efficiency. While creating a chatbot capable of responding to messages is relatively easy, ensuring it is safe, ethical, and responsible for public use presents a more significant challenge.

Responsible AI seeks to minimize risks while maximizing benefits. For AI developers, a critical focus should be on how humans interact with AI systems (see this article for more detail). This post will outline the essential elements of human-AI interaction necessary for building responsible chatbots.

Note: This post focuses only on the user interface and experience—the human-interaction side of responsible AI. I am leaving the technical side for another post.

"Evaluating social and ethical risks from generative AI" by DeepMind

Identify the Risk of Error

Understanding potential errors is crucial. What could go wrong? In a low-stakes scenario, like a movie recommendation chatbot, the worst outcome might be a user disliking the suggestion. However, in high-stakes contexts—such as a medical chatbot assisting with potential cancer diagnoses—the consequences could be severe. High-stakes chatbots must account for and mitigate these risks. You also need to understand the risks of false positives and false negatives.

https://www.dhirubhai.net/pulse/which-worse-false-positive-false-negative-miha-mozina-phd/
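One way to make these stakes concrete is to attach a cost to each error type and compare totals. Below is a minimal sketch in Python; the function name and cost numbers are illustrative, not from any specific framework. The point it shows: with identical error counts, a domain where false negatives are costly (e.g. a missed diagnosis) looks very different from one where both errors are cheap.

```python
def expected_error_cost(false_positives: int, false_negatives: int,
                        fp_cost: float, fn_cost: float) -> float:
    """Total cost of errors, given per-error costs chosen for the domain."""
    return false_positives * fp_cost + false_negatives * fn_cost

# Hypothetical counts: 10 false positives, 2 false negatives.
# Movie recommender: both error types are roughly equally cheap.
low_stakes = expected_error_cost(10, 2, fp_cost=1.0, fn_cost=1.0)     # 12.0
# Medical triage: a missed case is far costlier than a false alarm.
high_stakes = expected_error_cost(10, 2, fp_cost=1.0, fn_cost=100.0)  # 210.0
print(low_stakes, high_stakes)
```

The asymmetry in the second call is what should drive interface decisions such as adding human review or escalation paths.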

AI-Human Augmentation

For high-stakes applications, AI should support rather than replace human expertise—an approach called AI-human augmentation. In healthcare, for instance, a doctor should have the final say, with AI acting as an assistant that improves speed or accuracy. A simple example is reply recommendation by Salesforce, shown in the picture below: the AI generates suggested responses, and the customer service agent only needs to click one of them to send it to the customer.

https://trailhead.salesforce.com/content/learn/modules/einstein-reply-recommendations-for-service/get-to-know-einstein-reply-recommendations

Disclose You are an AI

Transparency is one of the most important aspects of AI. Don't pretend to be human while using AI to respond. Disclosing that the user is talking to an AI manages their expectations, lowering them to the general "chatbot" level. It also avoids the disappointment users feel when they discover they were talking to a bot rather than a human.

https://interface.ai/trusted-ai-bot-disclosure-privacy-and-best-practices/
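In code, this pattern can be as simple as guaranteeing that the opening message of every conversation carries the disclosure. The sketch below is illustrative—the names and wording are assumptions, not any platform's API.

```python
# Hypothetical disclosure text; wording should fit your product's voice.
DISCLOSURE = "Hi! I'm an AI assistant, not a human agent."

def first_reply(bot_answer: str) -> str:
    """Ensure the conversation opener always discloses the bot's nature."""
    return f"{DISCLOSURE} {bot_answer}"

print(first_reply("How can I help you today?"))
```

Centralizing the disclosure in one function makes it hard for any conversation flow to skip it.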

Give Users Ways to Give Feedback and Raise Issues

How do we know whether the chatbot is doing well, so that we can improve it? And how do we know it is causing problems if there is no way for users to raise issues? For both reasons, you have to let users give feedback. If a response is merely mediocre, a thumbs down is enough. But when a response is out of line, the user must be able to raise an issue.

User feedback on Gemini by Google
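A feedback mechanism can start as a very small data model: a rating per message, with an optional free-text issue for the serious cases. The sketch below uses an in-memory list as a stand-in for real storage; all names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Feedback:
    message_id: str
    rating: str                  # "up" or "down"
    issue: Optional[str] = None  # filled only when the user raises an issue

# Stand-in for a database table or analytics event stream.
feedback_log: List[Feedback] = []

def record_feedback(message_id: str, rating: str,
                    issue: Optional[str] = None) -> None:
    """Store a user's rating, and optionally an escalated issue report."""
    if rating not in ("up", "down"):
        raise ValueError("rating must be 'up' or 'down'")
    feedback_log.append(Feedback(message_id, rating, issue))

record_feedback("msg-42", "down", issue="Response contained unsafe advice")
```

Keeping the thumbs rating and the issue report in one record makes it easy to review flagged conversations later and feed them back into improvement.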

Escalation to a Real Human

When the chatbot doesn't solve the user's problem, good interaction design lets the user escalate to a real human so the issue can still be resolved. In the example below, the "Chat dengan Penjual" (Chat with the Seller) feature helps a frustrated buyer, while common questions are handled by the chatbot so the seller is not overwhelmed.

Escalation on Shopee
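The routing decision behind such a feature can be sketched with two illustrative triggers: the user explicitly asks for a person, or the bot has failed too many times in a row. The keywords and threshold below are assumptions for the example, not a production rule set.

```python
def should_escalate(user_message: str, failed_attempts: int,
                    max_attempts: int = 2) -> bool:
    """Route to a human when the bot clearly isn't solving the problem."""
    text = user_message.lower()
    asked_for_human = "human" in text or "agent" in text
    return asked_for_human or failed_attempts >= max_attempts

print(should_escalate("I want to talk to a human", 0))  # True
print(should_escalate("Where is my order?", 2))         # True
print(should_escalate("Where is my order?", 0))         # False
```

In practice the intent check would use a classifier rather than keywords, but the principle holds: the escape hatch should be easy to trigger, never buried.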

Disclose Possible Mistakes

It is important for users to know that your chatbot might make mistakes—especially if the chatbot is built for general purposes, like ChatGPT.

ChatGPT

Closing

I believe the People + AI Guidebook developed by Google PAIR (People + AI Research) is the most complete guide for developing safe and responsible AI from a human-centered AI point of view. This post draws some of its points from the guidebook and some from my own experience. Please take a look at the guidebook, which covers more than 20 patterns you should be aware of when developing AI solutions such as chatbots.
