Just like us
Carmelo Iaria
AI Strategy Consultant | Perplexity Business Fellow | Empowering Businesses with Agentic AI Systems
why treating AI agents like humans is wrong but may actually be useful
alien intelligence
Are we alone in this incredibly vast universe? We haven't figured out the answer yet, have we?
This question has puzzled us for thousands of years. In a space so vast that we can hardly comprehend its size, how is it possible that we humans are the only intelligent creatures? Many people believe it's improbable, if not impossible, that we're alone. We just haven't figured out how to find other intelligent civilizations beyond our little rock.
Consider the amount of fiction about alien life we've produced over the past 50 years and the attention these works have received, as evidenced by their box office success. A few examples:
- Avatar (2009) - Approximately $3.957 billion
- E.T. (1982) - Approximately $2.917 billion (adjusted for inflation)
- Star Wars (1977) - Approximately $3.563 billion (adjusted for inflation)
The other direction in which our fascination with alien intelligence has been explored over the past 50 years is building it: if we can't find strong enough evidence of alien intelligence, perhaps we can study our own and attempt to replicate it artificially.
It's fascinating how scientific discovery progresses non-linearly. For decades, researchers have conducted experiments to understand how our brains work and to build computing models that simulate learning and intelligence. Yet few outside research circles were aware of these efforts.
Then, a breakthrough happened. Researchers experimented with the transformer neural network architecture, training it on vast datasets, and stumbled upon a promising technique. Another smart group (OpenAI) packaged this solution in a user-friendly chat interface, setting off the AI gold rush we see today:
- Microsoft: Combined investment in OpenAI, Mistral and AI infrastructure might total close to $70B by the end of 2024.
- Meta: Plans to invest between $35B and $40B by the end of 2024.
- Google and Amazon: Invested around $27B in AI in 2023 alone.
The motivation for these investments obviously goes well beyond the desire to build alien (artificial) intelligence, but the connection still stands.
The fascination doesn't stop with chatbots and copilots. Companies are solving computational efficiency and mechanical engineering problems to create intelligent robots. Imagine giving AI the one thing it lacks to become more human: interaction with the physical world. Welcome to the world of embodied artificial agents.
Soon, if you're looking for alien intelligence, you won't need to scan the sky for UFOs; you can look in your basement, where a robot is doing your laundry!
Boston Dynamics has been working on these challenges for years, but its PR isn't as bold as Elon Musk's. Musk recently claimed Tesla could (one day) sell 20 billion Optimus robots. You have to admire his vision-setting skills: without a set date, why not? We might not need to wait that long, however. Tesla has about 4 million cars on the road, many of which can be upgraded to Full Self-Driving (FSD), effectively turning them into specialized (driving) AI agents much sooner than you might think.
anthropomorphizing
It's important to highlight that an AI agent performing tasks reserved for humans doesn't make it intelligent.
The question of what intelligence is goes much deeper than we can address in a casual article, but we should consider this: assuming we will interact with more AI agents daily, how should we treat them? What is the right way to address them?
You might find yourself anthropomorphizing AI: "I asked this and that and he or she answered this!" or asking the chatbot "What do you think about XYZ?"
Anthropomorphizing means projecting human-like qualities onto AI, even though it doesn't possess them. This can create several risks:
- False Sense of Attachment and Trust: Vulnerable groups like children might form bonds with AI, influencing them in ways that could impact their ability to form trusting relationships with humans.
- Manipulation: Malicious actors could exploit the trust built with AI agents, making it easier to manipulate individuals.
- Complex Responsibility and Accountability: In corporate environments, the boundary between trust and responsibility becomes difficult to navigate if we anthropomorphize AI agents.
This is a complex topic, better explored through dialogue and through the growing literature on anthropomorphism in AI.
But what did I mean by "may actually be useful" in the subtitle?
one useful thing
Ethan Mollick's newsletter, "One Useful Thing," offers practical recommendations for adopting AI. Mollick's latest book, "Co-Intelligence: Living and Working with AI," suggests interacting with AI chatbots as you would with a colleague. This approach is an interesting "hack" because AI can be brilliant and very dumb at the same time, and it makes mistakes, just like humans.
Current AI systems can generate text, images, videos, audio, and music, but they aren't infallible. Assuming AI is more human-like than superintelligent helps lower, rather than increase, trust in these tools. Recognizing that AI can fail encourages caution in taking AI-generated content at face value.
Mollick also reminds us that AI chatbots are designed to please the user. They aim to make us happy rather than to be accurate, often making things up instead of admitting uncertainty. Keeping this in mind helps us treat their answers with healthy skepticism.
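To make the "treat it like a colleague" advice concrete, here is a minimal sketch of how you might encode that stance when calling a chat model programmatically. It assumes the OpenAI Python client with an API key in your environment; the model name, the prompt wording, and the `ask_colleague` helper are illustrative choices of mine, not anything Mollick prescribes.

```python
# A minimal sketch: framing a chatbot as a fallible colleague.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system prompt sets the "colleague" framing and explicitly invites
# the model to admit uncertainty instead of trying to please us.
SYSTEM_PROMPT = (
    "You are a knowledgeable but fallible colleague. "
    "When you are unsure, say so explicitly and state your confidence. "
    "Prefer 'I don't know' over a plausible-sounding guess."
)

def ask_colleague(question: str) -> str:
    """Send one question with the colleague framing and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would work
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # As with a human colleague, verify the answer before relying on it.
    print(ask_colleague("What were Avatar's worldwide box office earnings?"))
```

None of this makes the model honest; it merely nudges the conversation toward expressed uncertainty. You still verify the output, exactly as you would with a human colleague.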
If Seneca was right when he said "Errare humanum est" (to err is human), then I would say current AI is very human.
See this post from Andrej Karpathy: https://eurekalabs.ai/