Who's keeping an eye on the parrot?
Image generated by OpenAI's DALL-E 2

ChatGPT has been an incredible success by any measure you can come up with. In January 2023, ChatGPT crossed 100 million users (within 2 months of its launch), making it the fastest-growing consumer application to date. It's been used for diverse applications like writing articles, answering homework questions and, of course, just answering random questions.

There has also been an undercurrent of more negative attention – firstly from those who seem to fear a SkyNet-style rise of the machines (which I've chuckled about in a previous post) and secondly from those cautioning about its accuracy, or lack thereof.

It's that second point that we all should be shouting about more. As an LLM (large language model), ChatGPT doesn't think or reason when you ask it a question. Although it is trained on a lot of publicly available data, it doesn't forensically search that data or the web to find an answer. Instead, it statistically predicts the likeliest response, stringing together sequences of words it has seen in its training data into something plausible-sounding, based on the probabilities of those word combinations.
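To make that concrete, here is a deliberately tiny, hypothetical sketch in Python of the underlying idea: a toy model that only counts which word tends to follow which in a small training text and then generates new text by sampling likely continuations. ChatGPT is of course a vastly larger neural network working over sub-word tokens, not a word-count table, but the key point is the same: the output is a statistically plausible continuation, not a checked fact.

```python
from collections import defaultdict, Counter
import random

# Toy illustration only (not how ChatGPT is actually built): count which
# word tends to follow which in a training text, then generate new text
# by repeatedly picking a probable next word.
corpus = (
    "the parrot repeats what it hears . "
    "the parrot sounds convincing . "
    "the parrot does not understand what it says ."
).split()

# Build a table of next-word frequencies (a simple bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8):
    """Generate text by sampling each next word in proportion to how
    often it followed the current word in the training text."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        choices, counts = zip(*candidates.items())
        word = random.choices(choices, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the parrot sounds convincing . the parrot repeats what"
# Fluent-looking, but nothing here checks whether any of it is true.
```

Run it a few times and you get different, fluent-sounding sentences; none of them are looked up or verified anywhere – they are simply probable word sequences.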

That's why everything it says sounds so plausible, but that's also why the content it provides can't be trusted on its own. As I said before, it's an incredibly sophisticated parrot, but one with even less awareness and understanding of the real world than the real bird. Like a parrot, it fundamentally doesn't understand the words it says or the questions it is asked.

The scary part of all this, as OpenAI's CEO Sam Altman openly acknowledges, is the "hallucinations problem": "The model will confidently state things as if they were facts that are entirely made up."

This statement is true but also incomplete. Everything the model says is made up. The probabilistic approach it takes to generating responses means that some of them will be true, but not because ChatGPT knows those things – the statistically likeliest response to some questions just coincidentally happens to be true. After all, as the old adage goes, even a broken clock is right twice a day.

This might not sound that serious, but if 100 million people start using ChatGPT to get their news or look up information, thinking that it's accurate, then it gets very serious very quickly. Check out this case, where ChatGPT was asked for a list of legal scholars who had sexually harassed someone and got very creative, even fabricating plausible-sounding primary sources to back itself up.

https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

So should we take a leaf out of Italy’s book and ban ChatGPT (and Bard for that matter) altogether?

No. As with any source of information, we just need to be aware that it can't be trusted on its own and that we always need to verify the information it provides against another source. Most people are aware that not everything posted on Twitter or written into Wikipedia is 100% accurate, that newspapers and TV news channels put their own spin on stories through the way headlines are worded and by selectively editing interviews, and that politicians may make false or deliberately misleading statements. ChatGPT simply needs to join that list.

We just all need to be crystal clear that ChatGPT, despite all the hype and histrionics, is not sentient – it is not a thinking, reasoning being. It's just a parrot that's so good at what it does that people project human-like reasoning onto it.

The issue isn't so much the parrot itself as the people who choose to listen to the parrot, and the people who present it as more than it really is even though they know it's actually a parrot. That's where the real danger lies, and we should all be vigilant.

Louise Rogers (was Burnham)

Product Manager | Business Analyst | Data Analyst | Solution Architect

9 months ago

Great explanation. I've just started to look at the mathematics used for the models. As you allude to, it's not real AI in the sense of sentient beings, but it's really exciting given the compute power we have today. I'm really looking forward to seeing how the new advances can be used to help manage health conditions more effectively.

John Chaney

Technology Strategist, Asset Intelligence and Cybersecurity Consultant, AI Specialist, Database Consultant, Public Speaker, Sales Educator, channel development

1 year ago

This is a great overview of ChatGPT, well done!

Thomas K.

Connecting Sino - Dutch - Belgian and Luxembourg businesses | Leading AI advocate | CX | Customer Service

1 year ago

Thank you for this metaphor, a parrot, awesome! I've been struggling to find a good one – I'll quote you when using it, Anne!
