Don't Fall In Love With Your AI

You’re probably going to break up with your smart assistant because your future life partner has just arrived.

OpenAI’s new ChatGPT comes with a lifelike voice mode that can talk as naturally and fast as a human, throw out the occasional “umms” and “ahhs” for effect, and read people’s emotions from selfies.

The company says the new tech comes with “novel risks” that could negatively impact “healthy relationships” because users get emotionally attached to their AIs. According to The Hill:

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”

Where will that investigation take place? Your life. And, by the time there’s any conclusive evidence of benefit or harm, it’ll be too late to do anything about it.

This is cool and frightening stuff.

For I/O nerds like me, the challenge of interacting with machines is a never-ending search for easier, faster, and more accurate ways to get data into devices, process it into something usable, and then push it out so that folks can use it.

Having spent far too many hours waiting for furniture-sized computers to batch process my punchcards, the promise of using voice interaction to break down the barriers between man and machine is thrilling. The idea that a smart device could anticipate my needs and intentions is even more amazing.

It’s also totally scary.

The key word in OpenAI’s promising and threatening announcement (they do it all the time, BTW) is dependence, as The Hill quotes:

“[The tech can create] both a compelling product experience and the potential for over-reliance and dependence.”

Centuries of empirical data on drug use prove that making AI better and easier to use will get it used more often and make it harder to stop using. There’s no need for “continued investigation.” A ChatGPT that listens and talks like your new best friend has been designed to be addictive.

Dependence isn’t a bug, it’s a feature.

About the same time OpenAI announced its new talking AI, JPMorgan Chase rolled out a generative AI assistant to “tens of thousands of its employees”; “more than 60,000 employees” are already using it.

It’s a safe bet that JPMorgan Chase isn’t the only company embracing the tech, or the only one that will benefit from its most articulate versions.

Just think…an I/O that enables us to feed our AI friends more data and rely on them more often to do things for us until we can’t function without them…or until they have learned enough to function without us.

Falling in love with your AI may well break your heart.

[This essay appeared originally at Spiritual Telegraph]


