Just like us
alien intelligence, from this or other planets (AI generated image by the author)


why treating AI agents like humans is wrong but may actually be useful


alien intelligence

Are we alone in this incredibly vast universe? We haven't figured out the answer yet, have we?

This question has puzzled us for thousands of years. In a space so vast that we can hardly comprehend its size, how is it possible that we humans are the only intelligent creatures? Many people believe it's improbable, if not impossible, that we're alone. We just haven't figured out how to find other intelligent civilizations beyond our little rock.

Consider the amount of science fiction about alien life we've produced over the past 50 years and the attention these works have received, as evidenced by box office success. Here are a few examples:

  • Avatar (2009) - Approximately $3.957 billion
  • E.T. (1982) - Approximately $2.917 billion (adjusted for inflation)
  • Star Wars (1977) - Approximately $3.563 billion (adjusted for inflation)

The other direction in which the human fascination with alien intelligence has been explored over the past 50 years is building it: if we can't find strong enough evidence of alien intelligence, perhaps we can study our own and attempt to replicate it artificially.

It's fascinating how scientific discovery progresses non-linearly. For decades, researchers have conducted experiments to understand how our brains work and to build computing models that simulate learning and intelligence. Yet few outside research circles are aware of these efforts.

Then, a breakthrough happened. Researchers experimented with the transformers neural network architecture, training it on vast data sets, and stumbled upon a promising technique. Another smart group (OpenAI) packaged this solution in a user-friendly chat interface, leading to the AI gold rush we see today:

  • Microsoft: Combined investment in OpenAI, Mistral and AI infrastructure might total close to $70B by the end of 2024.
  • Meta: Plans to invest between $35B and $40B by the end of 2024.
  • Google and Amazon: Invested around $27B in AI in 2023 alone.

The motivation for these investments obviously goes well beyond the desire to build alien (artificial) intelligence, but the connection still stands.

The fascination doesn't stop with chatbots and copilots. Companies are solving computational efficiency and mechanical engineering problems to create intelligent robots. Imagine giving AI the one thing it lacks to become more human: interaction with the physical world. Welcome to the world of embodied artificial agents.

Soon, if you're looking for alien intelligence, you won't need to look at the sky for UFOs but in your basement for the robot doing your laundry!

Boston Dynamics has been working on these challenges for years, but their PR isn't as bold as Elon Musk's. Musk recently claimed Tesla could (one day) sell 20 billion Optimus robots. You have to admire his vision-setting skills: without a set date, why not? However, we might not need to wait that long. Tesla has about 4 million cars on the streets, many of which are upgradable to Full Self-Driving (FSD), effectively turning them into specialized (driving) AI agents much sooner than you may think.

anthropomorphizing

It's important to highlight that an AI agent performing tasks reserved for humans doesn't make it intelligent.

The question of what intelligence is goes much deeper than we can address in a casual article, but we should consider this: assuming we will interact with more AI agents daily, how should we treat them? What is the right way to address them?

You might find yourself anthropomorphizing AI: "I asked this and that and he or she answered this!" or asking the chatbot "What do you think about XYZ?"

Anthropomorphizing means projecting human-like qualities onto AI, even though it doesn't possess them. This can create several risks:

  1. False Sense of Attachment and Trust: Vulnerable groups like children might form bonds with AI, influencing them in ways that could impact their ability to form trusting relationships with humans.
  2. Manipulation: Malicious actors could exploit the trust built with AI agents, making it easier to manipulate individuals.
  3. Complex Responsibility and Accountability: In corporate environments, the boundary between trust and responsibility becomes difficult to navigate if we anthropomorphize AI agents.

This is a complex topic, better explored through dialogue than settled in a single article.

But what did I mean by "it may actually be useful" in the subtitle?

one useful thing

Ethan Mollick's newsletter, "One Useful Thing", offers practical recommendations for adopting AI. Mollick's latest book, Co-Intelligence: Living and Working with AI, suggests interacting with AI chatbots as you would with a colleague. This approach is an interesting "hack" because AI can be brilliant and very dumb at the same time, and it makes mistakes, just like humans.

Current AI systems can generate text, images, videos, audio, and music, but they aren't infallible. Assuming AI is more human-like than superintelligent helps lower, rather than increase, trust in these tools. Recognizing that AI can fail encourages caution in taking AI-generated content at face value.

Mollick also reminds us that AI chatbots are designed to please the user. They aim to make us happy rather than provide accurate information, often making things up rather than admitting uncertainty. Keeping this in mind helps avoid taking AI responses at face value.

If Seneca was right when he said "Errare humanum est" (to err is human), then I would say current AI is very human.
