Sam Altman's vision of the "Age of Intelligence"
Philippe Delanghe
AI Integration Strategist | Business Transformation Expert | 30+ Years in Tech
Sam Altman just wrote a piece here. https://ia.samaltman.com
I’m always interested in what he has to say – beyond his role as OpenAI’s face and spokesperson. I like his very broad vision, and this latest post is very interesting.
On X he is often vilified for many reasons and accused of hypocrisy, the charge being led by Elon Musk and, to a lesser extent, Gary Marcus.
I also digress a lot. Let's start.
Digression #1
I need to vent a bit about Elon. I liked it better when he was a visionary entrepreneur trying to save the world, not a culture warrior. I wish he would understand that a young adult willing to change their sexual identity is not necessarily doing so because they’ve been brainwashed by the Radical Marxist Fascist Left, as his new orange buddy claims. It may be genetics, or an absent father, who knows. I have a very good friend who transitioned from man to woman at 45, after multiple suicide attempts. It should remain a personal, intimate issue, not a crusade involving the whole world.
It looks like when you cannot make peace with yourself, you go to war with the rest of the world. Especially when you feel humiliated.
The excellent book “Status Games” (Will Storr, highly recommended) tells the story of Lenin, deeply upset that his middle-class family was ostracized after his brother Alexander Ulyanov was hanged for trying to kill the Tsar, and deciding to get revenge. Lenin did not give a sh!t about the proletariat. He wanted blood and revenge, disguised as ideological virtue. Does that sound familiar? Free speech, everyone? Let’s eradicate the woke virus?
Back to Sam
Sam Altman always has this broad vision. I remember last year I used to introduce my AI seminars with his sentence “the marginal cost of intelligence is getting down to zero – what will you do with that?”. I also very much like “humanity has always been dealing with two constraints: energy and intelligence”. And we have been pushing those limits since, say, the invention of fire: a more powerful way to use external energy than just eating, which is the original design.
Digression #2
Another excellent book on the energy topic: "Burn", by Herman Pontzer. It’s more about human metabolism than anything else, but it shows how humans have been harnessing external calories/energy for their own comfort. Once you understand that one kilo of fat holds about the same amount of energy as one liter of gasoline, lots of things click. Take that external energy extracted from coal or oil, put it in the tank of your car, and go for miles and miles. You can also use your fat – you’ll lose weight, but you’ll go way slower than driving your car :-).
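For the curious, here is a quick back-of-envelope check of that fat-vs-gasoline claim, sketched in Python. The figures (9 kcal per gram of fat, ~34 MJ per liter of gasoline, a car burning 7 L/100 km) are my own approximate textbook values, not numbers from "Burn" itself.

```python
# Rough check: does 1 kg of body fat really match 1 liter of gasoline?
# All constants are approximate, standard reference values.
KCAL_PER_KG_FAT = 9000        # fat stores ~9 kcal per gram
JOULES_PER_KCAL = 4184
GASOLINE_MJ_PER_L = 34.2      # typical energy content of gasoline

fat_mj = KCAL_PER_KG_FAT * JOULES_PER_KCAL / 1e6   # energy in 1 kg of fat, in MJ
ratio = fat_mj / GASOLINE_MJ_PER_L                 # fat vs. one liter of gasoline

# How far that same energy takes an assumed car burning 7 L / 100 km:
km = fat_mj / (GASOLINE_MJ_PER_L * 7 / 100)

print(f"1 kg fat ≈ {fat_mj:.1f} MJ, about {ratio:.2f}x one liter of gasoline")
print(f"That energy drives the car roughly {km:.0f} km")
```

The two energy densities come out within about 10% of each other, which is why the "one kilo of fat ≈ one liter of gas" shorthand works so well.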
Back to Sam
So – Sam’s vision is right on. Through science and accumulated knowledge we’ve been able to build chips from sand and water and heat, and we can power them with electricity we generate from nature and make them do stuff for us – the latest being some form of thinking.
What I found interesting is that he’s doubling down on deep learning (like Ilya) – and he seems convinced that it’s obvious that more data and more compute will lead to AGI (which he here calls superintelligence).
The truth is – no one knows. Dario Amodei, the CEO of Anthropic and a former OpenAI employee, is quite clear on the fact that the scaling "laws" are empirical, like Moore’s law. They have held for a while, but that does not mean there is no asymptotic limit – people like Gary Marcus or Yann LeCun hold the opposite view.
What’s quite sure is that we’re getting the compute increase, which will require the energy generation increase. Sam has already invested in multiple energy companies (Helion, Exowatt, Oklo). And he is setting a time horizon for superintelligence – a few thousand days. Interesting that he counts in days; it makes it sound sooner than counting in years or decades.
So if Sam is right we will know soon enough.
I like the story he’s telling. Who would not?
But it’s really a “silver lining” pitch.
The world and humans today don't seem ready to me for superintelligence. We’re still fighting about imaginary stories that emerged thousands of years ago. And about power, of course: controlling women’s reproductive rights, or people’s sexual orientation. Or rebuilding ancient empires, for whatever reason. Or sticking to religious rules that were defined 1,500 years ago.
Disgression #3
I need to invoke Yuval Harari’s new book, "Nexus", which connects the dots that leave most of us (definitely me) scratching our heads – the how-can-human-stupidity-grow-faster-than-artificial-intelligence type of questions. Free speech vs. truth vs. propaganda.
Harari talks at length about “information management and societies”, and how some organizations have “auto-correction systems” (science) that enable social progress, while others don’t, because they decided once and for all that their story was the “truth” and could never change. Of course, when the core is supposed to come from God, it cannot be changed. Read the book – or listen to one of his interviews.
Temporary conclusion
The road to superintelligence-led abundance for all humans seems very tortuous right now, to the point that it may be just a messianic dream. Radical abundance is the promise of most religions – the caveat being it comes only after you're dead, and only if you've been virtuous. Can we really achieve it on earth, through technology?
It's true that some of the world's KPIs keep getting better. For example, the number of people living below the poverty level has been halved in 20 years. And lots of other things. Thank you, capitalism.
It's also true that much of the world hates what Western democracies represent – Russia, China, Iran. And then we internalize it and get manipulated – with the help of current AI. And their ideology is stronger than the desire for human progress – quite the opposite, actually, especially religions, by definition.
Then most democracies are not in very good shape, and voters seem to want to elect strongmen, “don’t worry, I will fix it for you” politicians – when we actually know from history that it never works, because the world is more complicated than changing the retirement age or finding a scapegoat, be it a successful businessperson, a Jew, a refugee, an immigrant.
Can technology supersede politics? Not really. States still have laws – and for good reason, I would say. Will we be able to bring AI into politics to help us make more rational decisions? I was reading the other day that ChatGPT is good at showing flat-earthers that they are wrong – but it can probably do the opposite too. An AI could bring rational solutions to the table, but will we listen to it?
I hope that Sam is right and that superintelligence will also help us fix the ongoing geopolitical crisis, before it destroys us.
I’ll try to focus on the Intelligence Age’s prosperity and abundance, and not on my fellow human brothers ready to elect wannabe dictators or kill each other over a piece of land. Not sure there is much I can do about it.
============
Under the hood:
Article lightly discussed with GPT-4o, which gave me some good advice about readability.
Midjourney image
President, strategies.ai. Co-founder & board member, Hub France IA. Member, OECD Working Group on AI Futures, Collège Numérique France 2030. Affiliated with ESSEC Metalab for data, technology and society.
Comment, 1 month ago:

You are making interesting points, as always. But you surf on Altman’s approximations, projections and faith…

First, I don’t think he understands intelligence. Human intelligence is embodied, as neuroscientists say, meaning it’s not isolated in our brain, separate from the rest of our body. Lots to pull from this.

Then, he resorts to a lot of magical thinking by ignoring fundamentals of how deep learning works. He has faith (I don’t have a better word for it) in many things that he doesn’t even mention. Like the amount of data that needs to be created, acquired, stored and managed for these specialized personal assistants to exist at scale while being individually relevant. And its environmental impact. Is he investing his money in research to lower the energy cost of learning and inference? We need orders-of-magnitude improvements here. If he isn’t investing, then he’s just… asking you to have faith in him.

Third, he focuses on a self-serving vision, one in which he sees a path to monetization at scale. I don’t blame him for this. But the evolution of deep learning and the emergence of a data-powered society is about much more than the chapel he wants you to enter. It’s a cathedral, a temple, too large for any single human to conceive.