What a Carry On! When AI melts down...

In case you missed it, ChatGPT had a spectacular meltdown this week - and it was quite beautiful. In response to prompts, it began to hallucinate wildly.

For example, one user requested items for a meeting, and this is how it replied. If you have a moment, read to the end:

A godemar in the wowing?

One reader suggested it had learned Polari, the gay back slang introduced to a broad audience via BBC Radio by the late, great Kenneth Williams. It seems to have ingested LinkedIn jargon and thrown it all back up again.

A few others thought it had been replaced by the ghost of Professor Stanley Unwin, who brought a lot of joy to audiences with his mutated language.

This was happening all over the world. Here's another example:


This is a well known feature of neural networks. They "hallucinate".

It is not a bug that can be ironed out, tweaked, or solved with bigger models. It is an inherent feature of the system. The LLM can't say "I'm so sorry, I don't know", because it doesn't know what it doesn't know. So it makes stuff up instead. Large language models are an instance of applied statistics that is unable to police itself.
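The point about the model being structurally unable to say "I don't know" can be sketched in a few lines. This is a toy illustration with invented numbers, not how any production model is configured: at each step a model turns scores into a softmax probability distribution and samples from it, and because every probability is strictly positive, *some* token always comes out, sensible or not.

```python
import math
import random

# A toy "next token" step. The vocabulary and scores are invented
# for illustration -- no real model is involved.
vocab = ["the", "meeting", "godemar", "wowing", "agenda"]
logits = [2.0, 1.5, 0.1, 0.1, 1.2]

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Every exp() is positive, so every token -- including nonsense like
# "godemar" -- has a nonzero chance of being emitted. There is no
# "I don't know" outcome unless such a token is explicitly added
# to the vocabulary and trained for.
assert all(p > 0 for p in probs)

token = random.choices(vocab, weights=probs)[0]
print(token)
```

Sampling tricks (temperature, top-k and so on) reshape this distribution, but they never remove the obligation to emit a token.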

Of course, this technology can still be very useful where reliability is not so important. For example, if an LLM is cleaning up an old movie clip (upscaling and removing glitches) and introduces a new artefact, the output is still much cleaner than before, so we don't mind the new glitch so much.

But if the error has serious implications - if people rely on the output - it is a problem. If it drops Grandpa, Grandpa will notice.

So I was encouraged by Kathryn Parsons' survey of corporate CIOs, which she related in an article titled "AI mania is cooling as Silicon Valley plays the long game". She notes:

Some leaders are weary of constant change. “Not another digital transformation,” said one. Others have more important issues on their agenda. “AI does not feature in conversations. It is at the edges, at best,” said another. Core business models, inflation, supply chains, geopolitical issues, fraud and other issues all feature higher on the board agenda. For some, their data is “dirty”, in other words, it simply isn’t able to be used in any meaningful way. However, the main reason leaders were wary of embracing generative AI were the ethical, legal, and regulation risks presented by the technology.

This is commendably realistic - and a world away from the feverish self-promoters here on LinkedIn.

My personal position on generative AI remains the same: no one would be happier than me if it worked reliably. If it did what it says on the tin. If heuristic software - and let's drop any pretence now that it's "Intelligence" - could be trained to perform useful tasks, such as synthesising and condensing research data.

But it can't. It's not the AI we were promised, even five years ago.

Many people who yoke their careers or personal brands to capitalising on "waves of innovation" (via the medium of newsletters or webinars) are desperately hoping it produces great benefits. The desperation is palpable. But unlike them, I don't have skin in the game. As I like to point out, this unreliable generative software may well turn out to be the last gasp of a previous era, the Everything Bubble, not the first wave of a new one.

And I take a perverse pleasure from Kenneth Williams having the last word on what was supposed to be the "Fourth Industrial Revolution". What a carry on, indeed.

[The author has no financial interest in AI companies - and no newsletter to sell you]

Paul Evans

FRSA. Representing Directors in the UK film & TV industry. ENFP (allegedly).

6 months ago

True fact: our future overlords built a Time Machine and sent a lyricist back to the past armed with ChatGPT to generate lyrics known as ChatMES.

Timothy Nice

Sr. Director @ LightRiver | Product Designer & App Creator | Embracing AI, FlutterFlow, and NoCode Tools | Offering Tech Tips, Tools and Growth Resources

6 months ago

Got to keep the human in the loop and not think of AI as a magic do-everything machine.


Jabberwocky ;-) Read a bit too much Lewis Carroll lately, perhaps?

Antony Slumbers

Creator of the #GenerativeAIforRealEstatePeople Course | Master Generative AI in Real Estate: antonyslumbers.com/course | AI won’t take your job—someone using AI will. @genaiforrealestate on Instagram

7 months ago

Still makes more sense than most of the BS in the Telegraph. Which is one long hallucination.

GRAEME BELL

Computing PhD. Expert in SQL, GIS, AI, Security, Programming. Open to new connections.

7 months ago

What happens if a customer support service, or in-game characters, or medical chat service, are using APIs like this and they start going crazy? You have no control over when these 'updates' happen. But your company can be held liable for any nonsense produced - see the recent Air Canada case. Have companies fully costed out all the liability risks and reputational damage risks they're taking on?
