Deep Context Switching in Conversational AI
Consider the following transcript between a user and a healthcare Chatbot designed to check in on an elderly patient on a daily basis (names and other details have been modified).
“B” denotes the Bot and “U” denotes the user (Frank).
The Chatbot in question is composed of several Conversational Components, each covering a context that may arise in a conversation. The Entry Component issued the first four responses, covering the topic of the user’s sleep quality (needless to say, this component handles many more possible inputs about the user’s sleep).
The user’s fourth input arrived after the Entry Component had finished its task and gathered the information about Frank’s quality of sleep (and the reasons behind it). The input “I’m just lonely” invoked a quiz component designed to entertain the user, who indeed agreed to participate.
However, at this point something happened: instead of answering the first quiz question, the user said “I have a message for my doctor”. This is a crucial moment in Conversational AI: a certain topic is being discussed, and in its midst, before the topic is exhausted, the user produces an input that brings up another topic.
Switching contexts mid-topic happens when the current context is interrupted in favor of a higher-priority one. After the new context is switched to and exhausted, a decision must be made: should the previous context, paused in the middle, be resumed or dropped altogether?
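The interrupt-and-resume mechanics described above can be sketched as a simple stack of contexts. This is a minimal illustration, not the implementation behind any particular framework; the class and method names are hypothetical.

```python
# Hypothetical sketch: contexts are pushed when they interrupt,
# popped when exhausted, and the one below may be resumed or dropped.

class ContextStack:
    def __init__(self):
        self._stack = []

    def push(self, context):
        """Interrupt the active context in favor of a higher-priority one."""
        self._stack.append(context)

    def exhaust(self, resume_previous=True):
        """Finish the active context; optionally drop the paused one below."""
        finished = self._stack.pop()
        if not resume_previous and self._stack:
            self._stack.pop()  # drop the interrupted context as well
        return finished

    @property
    def active(self):
        """The context currently in control, or None at the root."""
        return self._stack[-1] if self._stack else None


stack = ContextStack()
stack.push("quiz")
stack.push("message_to_doctor")  # a strong intent interrupts the quiz
stack.exhaust()                  # the message context is exhausted...
print(stack.active)              # → quiz (the paused context is resumed)
```

Passing `resume_previous=False` to `exhaust` models the alternative decision of dropping the paused context instead of returning to it.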
While talking to a Chatbot, the user can deliberately break out of context in three main ways. He can produce a “Goodbye” intent (like saying “bye” or “see you later”), or he can produce a “Topic exit” intent (like saying “enough about this” or “let’s change the subject”). The third possibility is less prevalent: the user brings up a context that is indeed covered in the content, just not the one currently active. I like to call these inputs “strong keywords”, or “strong intents”: inputs that invoke important topics even at the cost of breaking the current topic of discussion.
Returning to our healthcare example, the current context in turn 5 was the quiz, but the user said “I have a message for my doctor”. Where a healthcare Chatbot is involved, this is clearly a strong intent, and an important one. Therefore, strong intents are recognized even before the current component (the quiz) has a chance to continue its flow. Control is handed over to another component, which handles messages to the medical staff. Once that new context is exhausted, control returns to the quiz component.
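The routing order matters: strong intents are checked before the active component ever sees the input. A minimal sketch of such a dispatcher, with a hypothetical phrase-to-component mapping in place of real intent classification:

```python
# Hypothetical strong-intent table: phrase fragments mapped to the
# component that should seize control when they appear.
STRONG_INTENTS = {
    "message for my doctor": "doctor_message",
    "kill myself": "crisis_support",
}

def route(user_input, active_component):
    """Check system-wide strong intents first; only if none match
    does the currently active component keep the turn."""
    text = user_input.lower()
    for phrase, component in STRONG_INTENTS.items():
        if phrase in text:
            return component
    return active_component


route("I have a message for my doctor", "quiz")  # → "doctor_message"
route("The answer is B", "quiz")                 # → "quiz"
```

A production system would use an intent classifier rather than substring matching, but the control-flow point is the same: the strong-intent check sits above every component.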
AnnA, a Companionship Bot promoted by Jason Gilbert, is almost entirely made of a large multitude of components, all standing by, waiting to be invoked by a user’s input. Since users often do not suggest discussion topics, AnnA has a long list of conversation topics she brings up spontaneously. Some of these topics are considered more important than others: they warrant breaking out of anything else and focusing on them. One of them is a component that handles depressed users with suicidal tendencies. Phrases like “I want to kill myself” are a strong intent that will invoke this component at any point of the conversation.
Strong intents are system-wide. They must be part of any Chatbot built on a componentized framework with multiple competing contexts. Fine-tuning complex deep context switching involves decisions such as when to drop a previous context instead of returning to it (often based on the length of the distraction), and whether to continue the previous context where it was interrupted or start it over.
Very few Chatbots on the market can perform deep context switching elegantly. Typically, you would need to finish whatever you’re discussing with the Bot and only then, back at the root, bring up the new topic.
The use of Conversational Components (CoCos) with dynamic context switching can turn any special purpose Chatbot into a multi-purpose one, without losing the functionality and precision of a focused Bot. More at https://cocohub.ai.