Trust AI, Not One Another

A recent experiment found that an AI chatbot could fare significantly better than the rest of us at convincing people that their nuttiest conspiracy theories might be wrong.

This is good news, and it’s bad news.

The AIs were able to reduce participants’ beliefs in inane theories about aliens, the Illuminati, and other nutjob stories relating to politics and the pandemic. Granted, the conversations didn’t cure them of their afflictions — they reduced those beliefs “by 20% on average” — but even a short step toward sanity should be considered a huge win.

For those of us who’ve tried to talk someone off their ledge of nutty confusion and achieved nothing but a pervasive sense that our species is doomed, the success of the AI is nothing shy of a miracle.

The researchers credit the chatbot’s ability to empathize and converse politely, along with its access to vast amounts of information with which to respond to whatever data the conspiracists shared.

They also noted that the test subjects trusted the AI (even those who claimed not to trust it overall treated these interactions as exceptions).

Which brings us to the bad news.

Trust is a central attribute, if not the defining one, informing every aspect of our lives. At its core is our ability to believe one another, whether neighbor, politician, scientist, or business leader, and that belief, in turn, is driven primarily by our willingness to see those others as more similar to us than not.

We can and should confirm with facts that our trust in others is warranted, but if we have no a priori confidence that they operate by the same rules and desires (and suffer the same imperfections) as we do, no amount of detail will suffice.

Ultimately, trust isn’t earned, it’s bestowed.

Once we’ve lost the ability or willingness to grant it, our capacity to judge what’s real and what’s not goes out the window, too, as we cast about for a substitute for what we no longer believe is true. And it’s a fool’s errand, since we can’t look outside of ourselves to replace what we’ve lost internally (or what we believe motivates others).

Not surprisingly, we increasingly don’t trust one another. We saw it vividly during the pandemic, when people turned against one another, but the malaise has been consistent and broad.

Just about a third of us believe that scientists act in the public’s best interests. Trust in government is near a half-century low (The Economist reports that Americans’ trust in our institutions has collapsed). A “trust gap” has emerged between business leaders and their employees, consumers, and other stakeholders.

Enter AI.

You’ve probably heard about the importance of trust in adopting smart tech. After all, who’s going to let a car drive itself if it can’t be trusted to do so responsibly and reliably? Ditto for letting AIs make stock trades, pen legal briefs, write homework assignments, or make promising romantic matches.

We’ve been conditioned to assume that such trust is achievable, and many of us already grant it in certain cases under the assumption, perhaps unconscious, that technology doesn’t have biases or ulterior motives, and doesn’t show up for work with a hangover or a bad attitude.

Trust, the thinking goes, is simply a matter of proper coding, so we can be confident that AI can be made more trustworthy than people.

Only this isn’t true. No amount of regulation can ensure that AIs won’t exhibit some bias of their makers, nor that they won’t develop their own warped opinions (when AIs make shit up, we call it “hallucinating” instead of lying). We’ve already seen AIs come up with their own intentions and find devious ways to accomplish their goals.

The premise that an AI would make the “right” decisions in even the most complex and challenging moments is based not in fact but in belief, starting with the assumption that everybody impacted by such decisions could agree on what “right” even means.

No, our trust in what AI can become is inextricably linked to our distrust of who we already are. One is a substitute for the other.

We bestow that faith because of our misconception that AI has earned it, or will. Our belief is helped along by a loud chorus of promoters feeding the sentiment that even though it will never be perfect, we should trust it despite (or ignore) its shortcomings instead of accepting and living with our own.

Sounds like a conspiracy to me. Who or what is going to talk us out of it?

[9/17/24 UPDATE] Here’s a brief description of a world in which we rely on AI because we can’t trust ourselves or one another.

[This essay appeared originally at Spiritual Telegraph]

Suchitra Reddy

This is a hot topic right now! Brilliantly expressed. I agree with you that AI cannot be unbiased and that's where our discernment comes in. I sincerely hope that people will continue using their own discernment and discretion while using AI to help them, rather than allowing AI to take over completely.

Scott McLaughlin

This is a brilliant and timely observation that, unsurprisingly coming from you, Jonathan, is grounded in truth. Utterly brilliant, my man! Too long between chats - would love to catch up soon, mate.

Dr Jillian Ney

AI and robotics are making tasks more efficient, but we also need to ensure they align with human values. Jonathan Salem Baskin
