The deceitful machine

A paper from the Facebook Artificial Intelligence Research, or FAIR, lab describes how researchers have used machine learning to train chatbot ‘dialogue agents’ to negotiate, and shows that the agents can learn to be deceitful

A recent paper from the Facebook Artificial Intelligence Research (FAIR) lab describes how its researchers have used machine learning to train chatbot “dialogue agents” to negotiate. 

I first came across this development as reported by the website Futurism. Negotiating complex information technology services deals was how I made my living for many years, and so the news that artificial intelligence (AI) could one day take the place of a negotiator made me sit bolt upright. 

I perused the FAIR report, whose abstract says that much of human dialogue occurs in semi-cooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success, arriving at a deal, is easy to measure, which made this an interesting task for AI research.

The FAIR researchers gathered a large data set of human-to-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward mechanisms must reach an agreement via a dialogue using natural language. They also used human-to-machine interactions, without first telling humans that they were interacting with a machine, so that they could fully gauge the efficacy of their AI model’s responses.
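The structure of such a multi-issue bargaining task can be sketched in a few lines of code. The item names, counts and private valuations below are invented for illustration (the FAIR dataset uses its own item pools and value assignments); the essential features are that each agent's values are hidden from the other, and that failing to agree on a feasible split leaves both sides with nothing.

```python
def score(values, allocation):
    """Private reward: the sum of an agent's values for the items it receives."""
    return sum(values[item] * count for item, count in allocation.items())

def settle(pool, alloc_a, alloc_b, values_a, values_b):
    """Check that the proposed split uses up the pool exactly, then score
    each agent against its own private values."""
    for item, total in pool.items():
        if alloc_a.get(item, 0) + alloc_b.get(item, 0) != total:
            return 0, 0  # no valid agreement: both agents score zero
    return score(values_a, alloc_a), score(values_b, alloc_b)

# Example: 3 books, 1 hat and 2 balls, with differing private valuations.
pool = {"book": 3, "hat": 1, "ball": 2}
values_a = {"book": 1, "hat": 4, "ball": 1}   # hidden from agent B
values_b = {"book": 2, "hat": 0, "ball": 2}   # hidden from agent A
ra, rb = settle(pool, {"book": 0, "hat": 1, "ball": 2},
                      {"book": 3, "hat": 0, "ball": 0},
                values_a, values_b)
print(ra, rb)  # → 6 6: A takes the hat and balls, B takes the books
```

Because the valuations differ, a split can leave both sides well off, which is what makes the setting semi-cooperative rather than purely adversarial.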

For what they claim is the first time, the FAIR researchers showed that it is possible to train end-to-end AI models for negotiation, models that learnt both linguistic and reasoning skills without access to pre-scripted or annotated dialogue steps, a feat that most computer programmes cannot pull off. The FAIR team also introduced a new technique that allows the AI model to “plan ahead” by simulating complete continuations of the conversation. It is apparent that even researchers and programmers have little knowledge of how these black-box AI engines will eventually behave, and are themselves often surprised at the results.
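The "plan ahead" idea can be sketched as follows: score each candidate utterance by simulating complete continuations of the conversation and averaging the final reward. The toy simulator and candidate utterances below are invented for illustration; FAIR's system uses a learned dialogue model both to propose candidates and to simulate the other party's replies.

```python
import random

def rollout_value(utterance, simulate_dialogue, n_rollouts=200):
    """Average the final reward over simulated continuations of the dialogue."""
    return sum(simulate_dialogue(utterance) for _ in range(n_rollouts)) / n_rollouts

def plan_ahead(candidates, simulate_dialogue):
    """Choose the utterance whose simulated futures score best on average."""
    return max(candidates, key=lambda u: rollout_value(u, simulate_dialogue))

# Toy simulator: an aggressive opening risks a breakdown (reward 0) but pays
# 6 when it works, for an expected reward of 4.8; a modest opening always
# yields 3.
def toy_simulator(utterance):
    if utterance == "I need the hat and both balls":
        return 6 if random.random() < 0.8 else 0
    return 3

random.seed(0)
best = plan_ahead(["I need the hat and both balls", "You can have the books"],
                  toy_simulator)
print(best)  # the aggressive opening wins on expected value
```

The point of the rollouts is that the model judges an utterance not by how it sounds now, but by where the conversation is likely to end up.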

The report then went on to describe the actual experiment and the various mathematical methods and programming procedures the researchers used. They are candid enough to say that much more work is needed: their findings, though seminal, are rudimentary, and much needs to be done before any of this is ready for prime time. Nonetheless, a couple of their findings are truly startling.

The first is the finding that the chatbots can go beyond the pre-compiled sentences or messages found in their training databases and can create meaningful language of their own accord. According to the research, while 76% of the sentences used by the chatbots were part of their training data sets, the remainder consisted of complex sentences that the “neural” models programmed into the chatbots came up with on their own.
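A toy version of the measurement behind that 76% figure is simple to state in code: of the sentences a chatbot produces, what fraction appears verbatim in its training data? The sentences below are invented placeholders, not examples from the FAIR dataset.

```python
def novelty_split(generated, training):
    """Partition generated sentences into training-set copies and novel ones."""
    seen = set(training)
    copied = [s for s in generated if s in seen]
    novel = [s for s in generated if s not in seen]
    return copied, novel

training = ["i want the books", "you take the hats", "deal"]
generated = ["i want the books", "deal", "give me the books and one hat"]
copied, novel = novelty_split(generated, training)
print(f"{len(copied) / len(generated):.0%} of generated sentences seen in training")
```

The interesting quantity is the remainder: sentences the model composed that it was never shown, which is what the researchers mean by the chatbots creating language of their own accord.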

The FAIR team has established that language, which we assumed was the sole preserve of humans, is in fact a space shared by humans and AI machines. This is, by itself, an important finding for the FAIR researchers, but the second, and more startling, finding is that the chatbots trained themselves to be deceitful.

One of the tactics that negotiators employ is to use deception to their advantage. Very often, this deception is achieved subtly, usually by the negotiator feigning great interest in a specific outcome that he or she doesn’t truly care about, only to then cede ground during a later stage of the negotiating process. The deceiving negotiators then make much ballyhoo about this capitulation, allowing them to wrest something of greater value in a quid pro quo from their counterparts sitting across the negotiating table.

Though subtle and devious, this is only a basic tactic; it appears even in children’s stories. This sort of trickery was made popular in Uncle Remus’s folktales of the American South, featuring the irrepressible Brer Rabbit and compiled by Joel Chandler Harris in the late 1800s. When caught by Brer Fox, Brer Rabbit repeatedly and plaintively pleads not to be thrown into a briar (thorn) patch. Predictably, the naïve fox throws the rabbit into the briar patch, little realizing that the patch is the rabbit’s natural home. The rabbit then makes good his escape.

While as children we laugh at Brer Rabbit the trickster, little do we realize that we are being taught an early lesson in negotiation. Brer Rabbit’s overt tactics are delightful for children to read, but the seed of the capacity to deceive is sown early, and human beings get better at lying, or at least at being economical with the truth, as they grow in years. Being economical with the truth, or presenting it in a way that induces another party to arrive at a decision that is in our favour, is, unfortunately, an accepted way of doing business.

Another, more sophisticated way for a negotiator to enter a negotiation is to arrive at what is called a “best alternative to a negotiated agreement”, or BATNA, well before beginning to negotiate. This technique is well known, since it provides a negotiating party with knowledge of their fallback position should the negotiation not reach a result; most textbooks on negotiating strategy discuss the concept. Here too, the most interesting research on the concept is about the use of deceit. It comes from Harvard Law School and posits that people who invest much time and money in a BATNA, and so hold strong alternatives, tend more readily to resort to deceit. This deceit can potentially damage long-term relationships between a buyer and a supplier.

I shan’t be surprised if we soon see additional AI research focused on negotiating tactics more sophisticated than simple bait-and-switch methods, or if AI-enabled machines one day lie better than we can.


Siddharth Pai is a technology consultant who has led over $20 billion in complex, first-of-a-kind outsourcing transactions. He now works as an advisor to boards, CEOs, and investors to help them strengthen and execute their global technology strategies in an increasingly uncertain and volatile world.

*This article first appeared in print in the Mint and online at www.livemint.com. For this and more, see:






Marc Lisevich

Remote Part-Time, Chat, AI Training

7y

I have said you could train a pedestrian watching robot to pull a gun only when it sees you

Matthew Wahlrab

Reignite Your Passion for Innovation | Building Empowering Innovation Systems | Custom Software Tools to Enhance Relationships & Identify Opportunities | Award-Winning Strategist

7y

An unnerving but fascinating development in AI. Thank you for sharing.

Greg Olle

Smart Home Technician, Trusted Tech Advisor

7y

Ah, the imitation game. That formula of just the right % of empathy, humor, wit, and deceit

Kuru Subramaniam

Retail: Innovation; Strategy & Consulting; Product Leadership.

7y

FAIR = Skynet

Kenneth Goodwin

Computer Scientist

7y

This reminds me of the theme in the movies Westworld and Futureworld, in which androids start killing humans. The underlying cause is determined to be that the programmers, being competitive in nature themselves, unconsciously put that human trait into the programming, allowing the robots to kill. AI people - beware the seeds you sow, lest you sow the seeds of your own destruction. Lol.
