The Golden Rule Of ChatGPT
Dave on stage at Mobile World Congress 2023


My brother-in-law is a lawyer. He told me that the golden rule he learned at law school was that, when you are cross-examining a witness, you should

never ask a question that you do not already know the answer to.

Well, it seems to me that the maxim applies to ChatGPT. Follow that golden rule, and you will find ChatGPT and its ilk very useful. Ignore it at your peril.

ChatGPT is a Large Language Model (LLM), a form of generative AI. Unless you have been in a coma for the last few months, you cannot fail to have noticed just how rapidly it has become part of the mainstream discourse in fintech and other sectors. And it is, let’s not beat about the bush, astonishing. Which is why Microsoft have invested billions into OpenAI, ChatGPT’s developer, and why Google launched Bard, a similar service based on a similar model.

While ChatGPT’s output is amazing, it is a mistake to think that it is intelligent. Just to be clear: ChatGPT does not know what it is talking about. If you ask it to produce a six-point plan to bring older workers back into employment post-pandemic, it can produce text that could have been lifted wholesale from an expensive report by a top team of management consultants:

  1. Call for government incentives to employers who hire older workers.
  2. Encourage retraining and education opportunities for older workers.
  3. Introduce workplace accommodations that are beneficial to older workers.
  4. Promote positive attitudes towards older workers in the workplace.
  5. Stimulate the development of age-friendly workplaces.
  6. Advocate for government policies that support the employment of older workers.

Not bad. And certainly good enough to create an agenda item for a board meeting or to respond to a request for a press interview. If you want real expertise, though, ChatGPT might not be your best friend. Arvind Narayanan, a computer science professor at Princeton, wrote on Twitter in December that he had asked ChatGPT some basic information security questions that he had posed to students in an exam. The chatbot responded with answers that sounded plausible but were actually nonsense. As he pointed out in The New York Times (in adherence to my golden rule), that is very dangerous because

“you can’t tell when it’s wrong unless you already know the answer”.

(Note also that ChatGPT’s model was trained on data up until 2021, so it is limited in how it responds to questions about current events.)

When ChatGPT was set specific computer programming tasks, it ended up being banned from Stack Overflow for “constantly giving wrong answers”. The moderators wrote that a particular problem is that while the answers it produces have a high error rate, they typically look like they might be good. Worse, and in support of my golden rule, people look to ChatGPT to create answers without the “expertise or willingness” to verify that those answers are correct. The sketch below shows what that verification gap looks like in practice.
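As a purely hypothetical illustration (the function and the checks here are mine, not from the article, Stack Overflow or ChatGPT), this is the kind of plausible-looking but subtly wrong code the moderators describe, and the kind of check that exposes it:

```python
# Hypothetical illustration: a plausible-looking answer with a subtle bug.
# Suppose a chatbot offers this function for computing the median of a list:
def median(values):
    values = sorted(values)
    return values[len(values) // 2]  # wrong for even-length lists

# The golden rule in practice: only accept an answer you can verify.
assert median([1, 3, 2]) == 2       # passes, which makes the code look good
assert median([1, 2, 3, 4]) == 2.5  # fails: the function returns 3, not 2.5
```

The first check passing is exactly what makes such answers dangerous; only the second check, which requires already knowing the right answer, reveals the bug.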

This is not a fault with ChatGPT: it is how these models work. Hence Google wiped billions from the value of Alphabet (the stock slid nine per cent during trading) after releasing promotional material for Bard that contained an error. Bard said that the James Webb Space Telescope (JWST) took the very first pictures of a planet outside the Earth’s solar system. But that is wrong. Just plain wrong. Bruce Macintosh, the director of University of California Observatories, tweeted: “Speaking as someone who imaged an exoplanet 14 years before JWST was launched, it feels like you should find a better example?”

Questions and answers. © HELEN HOLMES (2023).

The philosopher Harry Frankfurt defined “bullshit” as speech that is intended to persuade without regard for the truth. In that sense, ChatGPT and Bard and so on are the greatest bullshitters ever! Such models produce plausible text but not true statements, since they cannot evaluate what is true or not. That is not their purpose. They are not Einsteins or Wittgensteins. They do not know anything, they have no insight and they deliver “hallucination” rather than illumination.

But they are not useless because of this. Far from it, in fact. When you know what the answer is but need some help writing it up, ChatGPT is a godsend. It saves authors huge amounts of time by pulling together draft paragraphs or sections of text on things that you already know about, leaving you free to focus on the narrative thread and on investigating the things that you do not. A sketch of that workflow follows below.
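To make that workflow concrete, here is a minimal sketch using the OpenAI Python package as it stood in early 2023 (the pre-1.0 interface); the model name, prompt wording and helper function are my own illustrative assumptions, not anything from the article:

```python
# A minimal sketch of "draft what you already know, then edit" using the
# OpenAI Python package (pip install openai, pre-1.0 interface circa 2023).
# The model name, prompt wording and helper function are all illustrative.
import openai

openai.api_key = "sk-..."  # your own API key

def draft_section(topic, points):
    """Request a first draft of a section the author already understands.
    A human reviews every claim afterwards: that is the golden rule."""
    prompt = (
        f"Draft two paragraphs on '{topic}' covering these points:\n"
        + "\n".join(f"- {p}" for p in points)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

draft = draft_section(
    "bringing older workers back into employment",
    ["government incentives", "retraining", "age-friendly workplaces"],
)
print(draft)  # raw material to edit and fact-check, never a finished answer
```

The design point is that the model only ever supplies raw material on topics where the author can already tell right from wrong: it drafts, the human verifies.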

While (thank goodness!) such models are a long way from putting people like me out of work, they are already helping me to be more productive, and that is no small benefit.


Book Dave

Are you looking for:

  • A speaker/moderator for your online or in-person event?
  • Written content or contribution for your publication?
  • A trusted advisor for your company’s board?
  • Some comment on the latest digital financial services news/media?


Get in touch.


Julian Hall FRSA BCAe

Freeman of the City of London | Helping Kidpreneurs & Young Entrepreneurs to #DoWhatYouLove #DoWellDoGood | Entrepreneurship Education for 7-18 yr Olds | Founder of Ultra Education CIC, StartupDashGame.com and YoBuDi.com

1y

Hi David, very interesting! I literally just published a video about #chatgpt and would love your feedback: https://youtu.be/hqP-nLB0kMQ
Andrei Charniauski

Chief Research Officer & Head of Awards

1y

Same experience here. Tried it on factual questions like "what colour is sky" and it works well. Went further with knowledge-based questions, and it's mostly nonsense. It failed to answer many questions I know the answer to, which does not let me trust it.

Paul Amery

Writer and editor

1y

“you can’t tell when it’s wrong unless you already know the answer.” But you can tell it's going to eat up a lot of energy generating the wrong answer.

Achille C.

Director of Engineering

1y

I am not shocked by ChatGPT's lack of ability to consistently give correct answers. I don't think that's what it's designed for. If it were able to provide correct answers too often, it would probably fail the Turing test. Being error-prone probably makes it more realistic, if that's what we want.
