ChatGPT: A Frankfurtian bullshitter
[Image: A large bull standing next to a pile of manure]


LLMs like ChatGPT are encroaching on human domains, such as passing exams. In old news, last year GPT-4 passed an (emulated) US bar exam and achieved high SAT scores (link). So perhaps it is not unreasonable to describe ChatGPT and its competitors using human categories. Some say LLMs are creative, while others claim they hallucinate, both concepts that, until last year, were predominantly used to describe humans.

LLMs invent things—they just can't help it. Even with simple tasks, such as summarizing a text, errors can creep in. In a more complex task, like finding Danish authors for articles in the prestigious New England Journal of Medicine, ChatGPT produced entirely fictitious references to articles, which closely resembles lying.

However, that is not an expression of creativity, hallucination, or lying. Instead, it is bullshit, according to three authors from the University of Glasgow in the article "ChatGPT is bullshit". It is not common for researchers (or this writer, for that matter) to employ terms like bullshit. Instead, they refer to the concept as described in Frankfurt, H. (2005). On Bullshit. Princeton University Press (a summary of the book is available on Wikipedia).

Here's their argument: Because LLMs are not concerned with the truthfulness of what they write but are instead designed to produce text that appears truthful, it seems appropriate to classify their output as bullshit.

Philosopher Harry G. Frankfurt, whom they reference, argues that bullshit is more harmful than lies because:

  • The bullshitter shows indifference to the truth,
  • Bullshit subtly erodes the distinction between truth and falsehood,
  • Bullshit delivers oversimplified descriptions of issues without nuances such as doubt.

These are valid points, and bullshit in the Frankfurtian sense (or in any other sense, for that matter) is hardly something we need more of in 2024.

OpenAI is aware that LLMs are not exactly truthful and is fighting to correct this in its own ChatGPT. For example, ask about the dangers of COVID-19 vaccines, and it will respond that there is a lot of misinformation to be wary of. But corrections like this are exceptions and need to be massaged into the LLM output.

So, if ChatGPT is a bullshitter, Frankfurtian or otherwise, what should we do about it?

I suggest taking the same precautions as with human bullshitters:

1) By all means, use ChatGPT and other LLMs to explore creative combinations or to do the initial groundwork to learn about a topic. A classic example is naming drinks, but it could also be to get a quick idea of the Global Financial Crisis.

2) Use ChatGPT and other LLMs for simple tasks that take a long time to complete but are quick to verify as correct. Examples: rewriting a text with slightly different wordings, varying sentence lengths, and the like. Shortening texts. Adding commas. Translations. The mantra is "trust, but verify."

3) Finally, don't expect the truth from an LLM and be aware of what you are actually asking it to do. When I asked ChatGPT for a list of New England Journal of Medicine articles with Danish authors, I was essentially asking it to invent a list of articles that looked truthful. So, don’t rely on LLMs to gather facts for something important.

These are my recommendations as of mid-2024. However, be aware that what we can expect from LLMs continues to improve. We may hope for a bit more fact and a lot less bullshit.

Peter Nørregaard

Director of Architecture, Devoteam Management Consulting. Member of the council for Business & IT Alignment, DANSK IT


I think that the bullshit hypothesis connects nicely to the concept of ‘cheap’ suggested by Gerben Wierda.

Mikkel Frimer-Rasmussen

Generative AI consultant and speaker


Any evaluations on how well they do compared to human-like bullshit?


Another thing: I'm already tired of everyone sounding exactly the same. No personality in language. Being authentic and yourself will be the new currency!!!

Michael Davison

Senior Solution Architect at Databricks


Frankfurtian is my new favourite word. I guessed that its meaning had to do with the quality of hot dogs, and was amused in a roundabout way to learn I was right.
