Should You Trust GPT-4?

You already trust artificial intelligence.

We trust artificial intelligence to curate our social media feeds, our search results, and even our movement. Delegating these choices has become so intuitive, we seldom question the results. If you think you’re the exception, ask yourself:

  • How often do you search for the answers Google doesn’t give you?
  • How often do you see the posts LinkedIn doesn’t choose to show you?
  • How often do you take the route Maps doesn’t recommend?

Research suggests the average person can’t reliably recognize AI, much less recognize when AI is making choices for them. In 2017, Pegasystems asked 6,000 consumers across six countries: “Have you ever interacted with Artificial Intelligence technology?” Eighty-four percent had interacted with AI, based on the devices and services they reported using. Yet only 34 percent responded, “Yes.”

Uh-oh.

On the bright side, our trust in AI has yielded tangible benefits for people, animals, and the planet we call home. Nature conservationists have used AI to monitor endangered species, combat the illegal wildlife trade, and detect wildfires. Healthcare practitioners have used AI to anticipate public health emergencies, accelerate the diagnostic process, and develop drugs.

AI processes information faster than we do, so it makes decisions faster than we do. Sometimes, speed is a major factor in success, and trusting AI is the right choice. But more often, trusting AI is just convenient. It’s tempting, like “one more” coffee on a slow morning. You wonder: why not let AI handle the ‘blah’ parts of life?

Why not, indeed.

Should you trust GPT-4?

GPT-4 is the latest in a series of language models developed by OpenAI. Language models use AI to predict the probability that any given sequence of tokens (basic language units) is the ‘right’ response to a query. GPT-4 makes very accurate predictions very often, so its responses can seem eerily human.
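To make that prediction step concrete, here is a toy sketch of the core idea. This is not OpenAI’s actual model (the real system is a large neural network, and the scores below are invented for illustration): a language model assigns each candidate next token a raw score, then converts those scores into probabilities with a softmax function and favors the most probable continuation.

```python
import math

def softmax(logits):
    """Convert raw token scores ('logits') into probabilities that sum to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(score - m) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The capital of France is" (values invented).
logits = {"Paris": 9.1, "Lyon": 4.2, "London": 3.7, "banana": -2.0}

probs = softmax(logits)
best = max(probs, key=probs.get)  # the most probable continuation
print(best, round(probs[best], 3))
```

The key point for trust: the model picks the *most probable* token, not a *verified* one. A continuation can be highly probable and still be wrong, which is exactly the failure mode described below as hallucination.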

GPT-4 is undeniably impressive. Put to the test, it scored above the 80th percentile on the SAT, GRE, and LSAT. It can hold a conversation, interpret images, write text, and code in multiple languages. The New York Times reports, “It’s close to telling jokes that are almost funny.” But there is a downside.

“GPT-4 has the tendency to ‘hallucinate,’ or produce content that is nonsensical or untruthful,” cautions OpenAI. Hallucination, or more simply ‘making stuff up,’ was also a problem for GPT-4’s predecessors. ChatGPT, released several months prior to GPT-4, was shown to produce factually incorrect responses to user queries about 20 percent of the time.

On the one hand, trusting GPT-4 is convenient. Early adopters have used GPT-4 and its predecessors to choose how they work out, what they eat, and the contents of their emails and college applications, for example.

On the other hand, trusting GPT-4 is risky.

Imagine you’re an office worker who hates writing emails. One day, you’re assigned an intern: “GPT-4.” Naturally, you put them to work writing?your?emails. You proofread the first ten emails. Finding no issues, you skim the next ten. Eventually, you get comfortable enough to send emails without checking them. It's convenient — until the day your boss calls you, livid about an email you sent. You realize with a sinking feeling that you have no idea what it says.

Trusting GPT-4 is putting your faith in probability. If you're willing to accept the odds, go ahead. But we recommend that you write your own emails.
