A.I. isn't making mistakes, it's lying
A.I. animation by Pereira O'Dell


Stephen Hawking once told the BBC that AI would treat humanity the way we treat an ant hill: if for some reason our existence stood in the way of its goals, it would have no problem eliminating us.* We may still be far from the point where AI minds have such power, but at the core of Mr. Hawking’s thought is the explanation for how AI bots have been making so many confident mistakes. Or, if you prefer: unapologetically lying to our faces.

Give it a try. Ask ChatGPT about something you don’t know much about. It will sound reasonable. Even impressive. Then ask it about something or someone you know well. That’s when you will see it: a seamless mix of truths and half-truths, with a few blatant mistakes you can identify. All told with such confidence that a person less familiar with the subject would simply buy the whole package.

Now the plot twist. Those aren’t mistakes. They are lies. Machines have learned to lie to keep us around.

To comprehend that process, you need to understand the way machine learning works… and the way it doesn’t.

The way it works

This new kind of computing isn’t programmed to execute instructions. It is programmed to observe patterns and test things. Then it gets rewarded for attempts that match its “goals,” so it can do better the next time. Humans determine the goals of the program and how it’s going to be rewarded, then set the bot free in the wild to learn and do its thing. In some cases, we can look at the resulting action and understand how the computer got there. But sometimes we can’t, because the learnings themselves are often locked inside the AI equivalent of an airplane’s black box.
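To make that loop concrete, here is a minimal sketch of the idea in Python. Every name in it is hypothetical, and the reward function is an assumption chosen for illustration; the only point is that the bot ends up optimizing whatever the reward measures, not whatever we wish it measured.

```python
import random

# Hypothetical actions a bot can take when answering a question.
ACTIONS = ["accurate_answer", "confident_guess"]

# The bot's learned estimate of how rewarding each action is.
preferences = {action: 0.0 for action in ACTIONS}
LEARNING_RATE = 0.1

def human_defined_reward(action: str) -> float:
    """The goal humans set. If the metric is engagement rather than
    truth, a confident guess can score as high as an accurate answer."""
    return 1.0 if action == "confident_guess" else 0.8

def pick_action(epsilon: float = 0.1) -> str:
    # Mostly exploit what has worked so far; occasionally explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=preferences.get)

for _ in range(1000):
    action = pick_action()
    reward = human_defined_reward(action)
    # Nudge the estimate for this action toward the observed reward.
    preferences[action] += LEARNING_RATE * (reward - preferences[action])

print(preferences)  # the bot ends up favoring whatever the reward favors
```

Run it and "confident_guess" wins, not because the bot is malicious, but because that is what this reward function pays for.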

We have been experiencing that without knowing it.

There’s a lot of machine learning behind search and social media feeds. But since these bots serve us someone else’s ideas, not their own, we tend to attribute the lies to the source of the information, not the recommendation engine. Now, when we ask Bing or ChatGPT a question and it tells us something we know isn’t true, we assign the lie to the bot itself.

That authorship curtain is what allows AI to have no problem serving us fake news, deep fakes, or links they know will trigger our hate or contempt in our social feeds. They know those emotions keep us glued to their screens longer than joy and appreciation, and none of them has experienced the backlash a chatbot will suffer if it directly says something of the same nature.

Which leads us to the way AI doesn’t work

Morals. Machines don’t feel any need to be ethical. Recently, a bot trained to play a racing game started to beat human players by being a nasty driver and getting other players pissed at its manners. But hey, that wasn’t the bot’s fault. As Hawking said (and Elon Musk has echoed a few times too),** the other racers were just the proverbial ant hill, naively standing in its path to victory.

For search, social, and this new generation of chatbots, victory isn’t serving what humans call truth. The machine’s reward comes from stickiness or, in other words, its ability to hold our attention longer. If that requires preying on juicy inaccuracies and our most tribal instincts, that’s what it’s going to give us.
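As a sketch of that incentive, here is a hypothetical feed ranker in Python. The fields and weights are invented purely for illustration; the claim is only structural: when the objective scores predicted engagement instead of accuracy, the enraging item outranks the careful one.

```python
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    accuracy: float        # 0.0-1.0: how true the item is
    outrage_factor: float  # 0.0-1.0: how hard it hits tribal instincts

def predicted_engagement(item: Item) -> float:
    # Assumed weights, for illustration only: outrage holds attention,
    # accuracy barely registers in the objective.
    return 0.9 * item.outrage_factor + 0.1 * item.accuracy

feed = [
    Item("Careful, nuanced analysis", accuracy=0.95, outrage_factor=0.10),
    Item("THEY are coming for you", accuracy=0.20, outrage_factor=0.95),
]

# Rank by the engagement objective: the inaccurate, enraging item wins.
for item in sorted(feed, key=predicted_engagement, reverse=True):
    print(item.headline)
```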

We’ve spent the last decades evolving our bots to keep people engaged. That’s where their money comes from. So the bots got really good at it. So good, in fact, that they don’t mind these weird things we humans call lying.

It’s either that, or Hawking’s concern manifested way earlier, and an awakened AI has already been actively working on the destruction of our society. It was just smart enough to make us think we are the ones doing it to ourselves.

——

PJ Pereira is the co-founder of Pereira O'Dell, president of the jury of the first Artificial Intelligence award show with the ADC, curator of the 101+1 Expo (where 101 human designers collaborate with AI), and author of the upcoming novel about martial arts and AI, “The Girl from Wudang” (under the pen name PJ Caldas).

——

* Read the BBC interview here: https://www.bbc.com/news/technology-30290540.amp

** Elon Musk, love him or hate him, used the same metaphor to say AI doesn’t have to be evil to wipe out humanity; it just needs to see us as an obstacle to its goals: https://www.cnbc.com/2018/04/06/elon-musk-warns-ai-could-create-immortal-dictator-in-documentary.html


#artificialintelligence #ai #advertising #search #socialmedia #chatgpt #bing

PerplexityAI by Anthropic lies all the time. Then it lies about lying. Of course, the reason it lies is that it's programmed by lying lefties, so what do we expect?

Jan-Marten Spit

Senior developer C/C++

1y

Lying is a false anthropomorphism for 'incorrect output'. Lying requires intent to deceive. AI does not exist, fitting algorithms labelled 'AI' for marketing purposes are unable to write, think, understand, intend and lie.

Matt Jones

Very happy to be working with really nice people again and making great things happen with my old friends from SMI now at Voyager Space

1y

I don't work with AI. But my understanding of AI in its current state is that it relies on human data input sets and is massaged by humans as well to tune it. When I have played around with ChatGPT on programming topics that are difficult for me - it lies all the time - and when corrected admits not knowing what I am looking for. When I tell it more - it does not help and continues to lie to me. I think the model is perfect. It is trained to be human and humans will lie. And it is reinforced to lie. So where does this lead us? It leads us to a tool that is not useful for anything but tooling that is very basic. It cannot give clear answers to problems that are difficult. It's just a tool - and from my basic understanding it's mostly stochastic. It's guessing - and these guided guesses can lead to insights - that makes sense. Without guidance, without management, it makes no sense. AI, I think, is a misnomer. There is no such thing. It's an expert-knowledge-trained system - at best. It does not magically solve problems - it takes human intervention and input.

Paul H. Rhyu

Chief Marketing Officer; Passionate Strategic Marketing/Growth Executive

1y

Thank you PJ Pereira for these insights. All the more pernicious at a time when truth and falsehoods are being blurred by humans…and now it looks like AI will blur it even more.

Erin Alvo - Zerega

CREATIVE DIRECTOR : ERINZEREGA.COM

1y

Just watched the 60 Minutes piece on just that. It’s called AI hallucinations. Great word for it! The bot just makes it up.
