Icarus and the imitation game
(gut reactions to the GPT-4o release)
Icarus
Recall the myth of Icarus?
It's a classic tale from Greek mythology that serves as a warning against hubris and overambition. According to the myth, Icarus is the son of Daedalus, a talented craftsman who creates the Labyrinth on the island of Crete to contain the Minotaur. After angering King Minos, Daedalus and Icarus are imprisoned within the Labyrinth. To escape, Daedalus constructs wings for himself and Icarus, made of feathers and wax.
Before they take flight from the island, Daedalus warns Icarus not to fly too high, as the sun's heat would melt the wax, nor too low, as the sea's dampness would clog the feathers. Ignoring his father's caution, Icarus, thrilled by the flight and curious how high he can go, flies too close to the sun. The wax in his wings melts, and he falls into the sea and drowns.
The Imitation Game
What does the Imitation Game have to do with this?
The "Imitation Game" is a thought experiment introduced by Alan Turin in his 1950 paper "Computing Machinery and Intelligence," which explores the question of whether machines can think. The goal of the Imitation Game, later known as the Turing Test, was to determine if a computer could exhibit intelligent behavior indistinguishable from that of a human.
In the game, a human interrogator converses through text with both a human and a machine, trying to tell which is which. The test fundamentally challenges a machine's ability to imitate human responses under the conditions of the game. If the interrogator fails to reliably tell the machine from the human, the machine is considered to have passed the test, suggesting it possesses human-like intelligence.
While there are ongoing (and frankly sterile) debates on whether Generative Pretrained Transformers (GPTs) have passed the Turing test, we are already in a world where it is pretty hard to tell whether we are having an online conversation with a human or a machine.
Stay with me and I’ll connect the dots …
GPT-4o
To remind us of this (and to steal planetary attention away from Google's I/O event starting today), OpenAI yesterday released their latest model: GPT-4o.
The videos released alongside the launch are a great indication of the direction the technology is heading. If you've seen a few of them already, the message OpenAI clearly wants us to come away with is that AI is getting more human-like.
But there were also a number of messages that may not have been planned by the OpenAI PR machine, yet jumped out at me once the novelty flame had burned out, a few seconds after watching these videos.
1. Ethnocentric bias
I'm sure millions of Italians could not help but notice the American accent of the OpenAI bot as it translated the Italian conversation with OpenAI CTO Mira Murati.
And that's OK: I'm sure it won't be much of a challenge to find enough Italian voices for their training data (or maybe it was a design choice to avoid showing an AI more capable of speaking Italian than the CTO). That's not the point.
I chose the words "ethnocentric bias" to capture the essence of viewing others through the lens of one's own cultural norms and values, which can distort understanding and interactions in diverse settings, and of how these views get passed on to the behavior of these powerful alien intelligent bots.
What would GPT-4o look like if it came out of a smart group of people in Beijing?
2. Who gets to decide what AI we'll build?
Reflecting a bit more on the design choices made by a specific group of smart engineers to mold the behavior of these AI agents, my mind goes immediately to the words of Yuval Harari when he talks about the Trolley Problem.
The Trolley Problem is a modern formulation of an ethical dilemma of the kind philosophers have debated for centuries. In this thought experiment, a trolley is hurtling down a track towards five people who are tied up and unable to move. You have the power to pull a lever that will divert the trolley onto another track where there is only one person tied up. The dilemma revolves around whether it is more ethical to do nothing and allow the trolley to kill the five people or to pull the lever and actively cause it to kill one person.
Harari brings this up with pointed sarcasm, telling us that engineers at Silicon Valley companies are making this ethical decision today, quite literally when you think about the AI algorithms that go into autonomous cars. And CEOs at Silicon Valley companies have quarterly earnings to worry about; they do not have the patience of the philosophers who have debated such dilemmas for centuries.
The honest answer to the original question is: those with the financial resources to build these large and powerful foundation models. The real question is: are we OK with this?
Fortunately, some large players have hoisted the open-source flag in an attempt to resist the winner-takes-all scenario we are heading towards, but the game is far from over, and my guess is that the road will stay bumpy for a few more years.
3. AGI as the promised land
What's also clear from the GPT-4o release is that the current debate around AGI is not very useful.
In a recent fascinating interview on the All-In Podcast, OpenAI CEO Sam Altman made a comment that alluded to this strategy: he described an approach where OpenAI will keep incrementally improving their models, which are in fact getting better all the time.
While the media traps everyone's attention on "will GPT-5 be released this month or next?" or "are we getting to AGI in 2 years or 7.9?", I have the clear feeling that AGI is sneaking gradually into our lives, and nobody will ring a bell when it finally arrives.
It's a smart narrative that keeps eyeballs glued to social media platforms, but people seem to have forgotten that a company vision (building AGI that will benefit all humanity) is not an operational milestone you can measure. It is not the destination but the north star of the company, and as such it simply drives all efforts in that direction.
AGI is also portrayed as the promised land, a place where all human problems will be solved and we'll cure every possible disease. And that, too, is by design: you need a lot of faithful followers to support you on the journey and to justify all the bad things that might go wrong along the way.
We need a more focused and pragmatic debate about how we are going to use the AI that is already here and that will drive transformation in education, the labor market, and healthcare over the next decade or two. If we don't, we'll have to live with patterns of adoption that may bring unintended consequences.
4. Mastering the Startup Playbook
The other thing I can't help noticing is how well OpenAI is playing the Startup Playbook. This should not be surprising given Sam Altman's experience as an operator and startup founder and, more importantly, as president of Y Combinator.
One of the first things you learn when you talk to advisors about building a startup is that you need to talk to your customers: they are the only ones who can tell you what is worth building. Building a killer product is a discovery process you go through together with your customers, rather than the implementation of a brilliant idea that popped up in your brain.
Well, if there is one company that has taken this to the extreme, it is OpenAI.
Look at Sora. OpenAI built some demo videos and *announced* it, but has not *launched* it yet. Why? Because by announcing it they get an incredible amount of feedback (positive, negative, and everything in between), so they can now choose what is worth building: a ton of decisions even the smartest engineers can't make on their own without feedback from experts (watch this space for partnerships with large media companies).
Right now, as I'm typing this text and sharing my reaction to the launch, I feel like part of the gigantic live test OpenAI is running, with millions and millions of comments from people across the entire planet going straight into their data centers to be processed, so they can LEARN what we all think about their latest model. It's a free, planetary Mechanical Turk they can use to improve the next version, and the next, and so on.
I think this is one of their biggest competitive advantages (without diminishing the incredible engineering talent they have been able to assemble).
Where is Daedalus?
I'm an optimist. And as such I do believe in the tremendous good AI will do for humanity. I will leave that for another article.
But right now, my feeling is that we have a teenager (OpenAI) who, just like Icarus, is very excited about his ability to fly and wants to test how high he can go in the Imitation Game, creating human-like artificial systems that will blend into our lives.
Just like a teenager, Icarus feels invincible, optimistic that "we'll figure it out"; rather than pausing to consider the profound implications this technology will have on our lives, he is completely hypnotized by his ability to fly and wants to test his limits.
Does the enthusiastic son need a father who, like Daedalus, tells him that his wax wings will melt if he gets too close to the sun?
What do you think?