Lies, BS and ChatGPT

Martin Reeves and Abhishek Gupta

For decades, the elusive metric of effectiveness for artificial intelligence (AI) was the Turing Test, which measured whether a human could judge if their counterpart in a dialogue was a human or a machine. ChatGPT and other large language models (LLMs) have arguably met or exceeded this standard, generating plausibly human text, a remarkable achievement with far-reaching implications. But technological progress always comes with side effects, and by anticipating these we can better unlock gains, mitigate costs, and identify new opportunities for innovation.

This is certainly all true of LLMs like ChatGPT. The ability to generate persuasive text at a fraction of the cost or effort a human would require has the potential to create value in multiple fields, from copywriting to coding. But it also creates new challenges. Foremost among these are judging the truthfulness of the content and attributing authorship.

Models like ChatGPT use massive training datasets (such as The Pile) to build prediction models that attempt to create plausibly human answers to prompts. They are not systematically untruthful but, rather, are built without regard to the truth. Analytic philosopher Harry Frankfurt explored this idea in his book On Bullshit. He explained that both the truth teller and the liar are concerned with the truth, one to reveal it and the other to conceal it. But a bullshitter aims only to persuade, without regard to the truth. He saw bullshit (BS) as potentially more damaging than lies.

Of course, sometimes the objective is transparently only to persuade, as in the case of advertising. Few would regard the claims made in advertisements as complete and provably true. But in other contexts, we might be misled into thinking that a statement is made with an intent of accuracy, balance, and truthfulness, when it has merely been optimized for plausibility and persuasiveness.

One might object that even the scientist never knows for sure whether a theory is true. Karl Popper proposed that the best science could do was to propose falsifiable statements and then apply the scientific method to attempt to disprove them. But in the application of LLMs today there is nothing akin to a criterion of falsifiability or a scientific process of challenge and falsification. This could have real human consequences. Imagine that a mother were to rely on a very persuasive text claiming that toxic household chemicals were the cure for an infant's sickness.

A related challenge is that the persuasiveness of AI is a property of the model, not of the person who deployed it. In normal social relations we judge the character, knowledge, and credibility of a person in part by observing what they say. And we intuitively apply the same heuristics to what we read online. Even if we surmount the challenge of identifying the person who makes a post, we can't easily know whether the content and argumentation were created by them or by a machine, undermining our ability to contextualize and qualify what is being said.

The historian of technology Carlota Perez has noted that the full impact of a technology is rarely obtained until there is an accompanying social innovation to unlock its value. The electric motor did not transform factory productivity until we reorganized factories and workflows to unlock its potential. Brian Arthur, in his book The Nature of Technology, explains that we almost never have perfect foresight into new problems and solutions; technology evolves according to a cumulative, serendipitous process in which parts of existing solutions are assembled in new combinations, only some of which turn out to be highly useful. Nevertheless, we can anticipate this process and at least front-load dealing with the challenges we already know about, such as the ones outlined here.

Without feigning perfect foresight, it's reasonable to suggest that we will almost certainly need secondary innovations to unlock the value of new language technologies, and that these will likely entail education, technology, and regulation. Many schools are already teaching children that they can't trust everything they read online and how to qualify and triangulate sources. We will all need to learn new diligence measures to test claims, assess sources, and verify process.

However, when the models themselves have been trained to be persuasive to humans, this will be hard to carry out without new tools to assist with identity and process verification, truth checking, and the like. Furthermore, it's likely that society may choose to place restraints on certain types of claims where there are potentially significant consequences from following them, as we already do with food and medicines. One cannot claim that a vegetable is organic without the growing process being certified, and one cannot make medical claims without registration based on double-blind, placebo-controlled trials. For all the euphoria among technology companies racing to acquire LLM technologies, they would be well advised to look ahead and engage preemptively in creating such secondary solutions.

Shimrit Nativ

Purpose & Prosperity Mentor ∞ Shimrit Nativ / Master your mind & create the life you desire / Create abundance in Biz & Life / Check the free resources in the link

Thanks for sharing this, Martin

Michael Vandergriff

Author. Speaker. Trainer.

Huge development. Bigger questions. Good post!

Ade McCormack

Founder, The Intelligent Leadership Hub

Fascinating. Possibly this is the use case that blockchain / NFTs have been looking for? The provenance of everything from tweets to research papers will need to be verifiable. AI content tools that sit outside this framework will be considered suspect. The question then becomes whether it is a governmental or business-led initiative.

Otti Vogt

Leadership for Good | Host Leaders For Humanity & Business For Humanity | Good Organisations Lab

That's a great point! Thanks Martin!
