The Three Laws of Chatbotics
Who has not heard of Isaac Asimov’s Three Laws of Robotics? As you might recall, they are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Remarkably, these laws operate at a level of abstraction that current AIs, robots included, still cannot grasp. Robots simply cannot work with such rules, because the rules require a deep understanding of the full range of human cultures, experiences, and languages, and of the many moral dilemmas they imply. The same words can mean entirely different things in different circumstances, leaving a computer unable to act as it is supposed to. In short, the Laws are ingenious and make for good fiction, but when it comes to real machines they are impractical. There is still a long way to go before we find high-level guidelines that any robot can understand and use to produce positive, safe behavior.
(It is amusing that a superb thinker and futurist like Asimov did not realize how intelligent and all-knowing a machine must be to act on instructions at such a high level. Even a human, who would probably claim to understand the Laws, would have a very hard time enacting them.)
Fortunately (in this context), chatbots are not yet fully fledged AIs, and they are usually intended to help only with specific tasks, so for them it is easier to translate social concepts into quantifiable, operational actions.
With this in mind, i.e. that a chatbot has a specific and well-defined purpose (like booking a plane ticket or describing a product), and based on our overall knowledge of human psychology, I would dare to tentatively propose the following three general “Laws of Chatbotics:”
- A chatbot shall strive to obey humans except when the orders would conflict with, or divert from, its specific purpose.
- A chatbot shall strive to reward good behavior by humans, understanding “good behavior” as any action that leads it to fulfill its specific purpose.
- A chatbot shall strive to provide humans with control, understanding “control” as the means of monitoring progress and influencing the chatbot so that it can achieve its specific purpose.
Please allow me to explain these Laws:
The First Law is basic; it only goes a bit beyond the obvious to state that the chat should not wander away from the chatbot’s objective. If the chat’s logic flow is disrupted, the chatbot shall politely get it back on track. Consistency is essential.
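To make this concrete, here is a minimal sketch (in Python) of how the First Law could be operationalized for an imaginary flight-booking bot. The intent labels and the `classify_intent()` helper are illustrative assumptions; in practice you would plug in whatever intent classifier or NLU component your platform provides.

```python
# A hypothetical flight-booking bot enforcing the First Law:
# stay on purpose and politely steer off-topic input back on track.

def classify_intent(message: str) -> str:
    """Stand-in for a real intent classifier (an NLU model, keyword rules, etc.)."""
    text = message.lower()
    if "flight" in text or "ticket" in text:
        return "book_flight"
    if "seat" in text:
        return "choose_seat"
    return "off_topic"

def first_law_reply(message: str) -> str:
    """Answer on-topic requests; redirect everything else to the bot's purpose."""
    intent = classify_intent(message)
    if intent == "book_flight":
        return "Great, let's book that flight. Where are you flying from?"
    if intent == "choose_seat":
        return "Sure. Would you prefer economy, premium or business?"
    # Off-topic input: acknowledge it briefly and return to the objective.
    return ("That's a bit outside what I can help with. "
            "I'm here to book flights -- where would you like to fly?")

print(first_law_reply("What's the weather like in Paris?"))
```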
I posit that the Second Law is also fundamental. For the chat to be engaging, the human must get little rewards (or feedback) every time they provide relevant information or cues that allow the bot to move forward. The rewards should be immediate and come in the form of the million variations of a “thank you” message, emoticons (thumbs up, smileys, etc., depending on the formality), comparison charts that tell the human how well they are doing compared to other users, etc. Our brains are wired to thirst for social incentives.
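As a minimal sketch of the Second Law, the snippet below rewards the user with a small, immediate acknowledgement every time they supply a useful piece of information. The slot names, the acknowledgement phrases, and the slot count are illustrative assumptions, not tied to any particular framework.

```python
import random

ACKNOWLEDGEMENTS = ["Thanks!", "Great, got it.", "Perfect 👍", "Noted, thank you."]

def second_law_reply(slot: str, value: str, filled: dict, total_slots: int = 4) -> str:
    """Store the new detail and reward the user right away."""
    filled[slot] = value
    reward = random.choice(ACKNOWLEDGEMENTS)      # immediate social reward
    progress = f"That's {len(filled)} of {total_slots} details I need."
    return f"{reward} {progress}"

slots = {}
print(second_law_reply("destination", "Paris", slots))
# e.g. "Great, got it. That's 1 of 4 details I need."
```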
The Third Law is no less important. The human must be kept aware at all times of how much progress they have made in the process: “I understand you want a plane ticket to Paris for tonight from Stockholm. I just need to know what seat class you would like to book.” Additionally, the user must know what actions are available at any time, e.g. “Remember you may cancel at any time,” “Don’t worry, your session will be saved if you leave,” “Your purchase will be completed only when you click on ‘Buy’,” etc. In general, the chatbot shall empower the human as much as possible. Empowerment is the opposite of helplessness: being empowered means being able to affect the situation and knowing what you can and cannot do. Leverage buttons and quick replies! Again, a sense of control is compatible with human nature.
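Continuing the same imaginary booking bot, here is a sketch of how the Third Law could be applied: summarize progress, state what is still missing, remind the user of the actions available, and offer quick replies. The slot list and the quick-reply format are assumptions for illustration, not a specific platform’s API.

```python
REQUIRED_SLOTS = ["origin", "destination", "date", "seat_class"]

def third_law_status(filled: dict) -> dict:
    """Report progress, what is still needed, and which actions are available."""
    missing = [s for s in REQUIRED_SLOTS if s not in filled]
    summary = ", ".join(f"{k}: {v}" for k, v in filled.items()) or "nothing yet"
    still_needed = ", ".join(missing) if missing else "nothing -- ready to book!"
    text = (f"So far I have: {summary}. I still need: {still_needed}. "
            "Remember you can cancel at any time, and nothing is purchased "
            "until you tap 'Buy'.")
    # Quick replies make the available actions visible and keep the user in control.
    quick_replies = ["Cancel", "Save for later"]
    if missing:
        quick_replies.insert(0, f"Provide {missing[0]}")
    return {"text": text, "quick_replies": quick_replies}

print(third_law_status({"origin": "Stockholm", "destination": "Paris"}))
```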
There are other common-sense directions, e.g. provide a proper introduction of the chatbot, set appropriate expectations, offer accurate information, comply with legal requirements, be polite, and so on, but the Laws are intended to be of a more general nature; the idea is that they should act as a safeguard to keep both the chatbot and the human focused, efficient, and effective.
It is also understood that the chatbot should learn from its audience by collecting key information, to progressively make the dialog more and more relevant, but this is a hard-to-quantify, AI-related procedure that perhaps should not (yet?) be made into a Law.
What do you think? Are these laws a good start? Are they universal enough to be useful for most task-oriented chatbots? Could you come up with other high-level but quantifiable rules that a chatbot should always follow?