From stoned apes to stoned AIs - AI hallucinations
Hernan Chiosso, CSPO, SPHR
I use AI to help organizations conquer culture, people, product, process, and tech challenges. Fractional CHRO, HR Innovation Consultant, HRTech Product Manager, Remote work expert. productizehr.substack.com
As more people start to play with ChatGPT and other similar tools and figure out how to apply them to their everyday jobs (because "AI will not take away your job, but a person using AI will"), they are sometimes encountering weird, disconcerting outputs.
Like this post, where ChatGPT simultaneously knows and doesn't know about Elon Musk acquiring Twitter, or the at-times-hilarious fireside chat with Reid Hoffman, where the AI completely makes up some responses and delivers them with the utmost authority.
In a recent post, Burcin Kaplanoglu discusses survivorship bias in how people evaluate the responses they get from ChatGPT: we tend to discard and forget the weird answers and focus on how awesome AI is at generating content. That can be dangerous.
Of course, OpenAI seems to be aware of the issue and is already working on solving it, and other companies are also looking for ways to keep hallucinations from eroding confidence in AI responses.
This other article by Lak Lakshmanan describes why large language models are bullsh*t artists, along with a few examples of what you can use them for and what you shouldn't (hint: asking ChatGPT to write a whole article is a bit of a stretch and can have negative SEO consequences with Google, but you can use it as part of a creative exercise, to outline content, etc.).
In their current state of development, these are GREAT tools to assist your content creation efforts and get you unstuck. But they are not a reliable replacement for human intelligence and our ability to infer and create meaning (when we pay attention to what we are doing, that is).
And this brings me back to the title of this short article: the whole discussion about AI hallucinations made me think of the Stoned Ape theory, which suggests that magic mushrooms triggered human cognitive evolution.
What do you think? Are AI hallucinations a step in AI becoming self-aware? Or just bugs in the code?