Reading Notes: Dario Amodei essay on the future of AI
Dominique Lahaix
CEO | Social Data | Social Intelligence | NLP | Artificial Intelligence | LLM
Machines of Loving Grace
If you have a chance, I highly recommend reading Machines of Loving Grace by Dario Amodei, CEO of Anthropic and creator of Claude: https://darioamodei.com/machines-of-loving-grace.
It is actually very interesting… and very disturbing.
Amodei claims we are very close to reaching extremely advanced AI, possibly as early as 2026. This lines up with Sam Altman's forecast of superintelligence within a few thousand days: https://ia.samaltman.com/. Note that both avoid using the term AGI (Artificial General Intelligence).
Of course, OpenAI and Anthropic are actively fundraising, so it is part of their founders' job to sell their companies and tell the world how amazing their work is.
Still, Amodei has always been known for his conservative position on AI—his vision for Anthropic has focused on keeping AI manageable, understandable, and reliable. His current essay, though, hints at a much faster, more powerful transformation than he's suggested in the past.
What I really liked about the essay:
For the rest of the essay, and especially the parts about government, politics, and democracy, I am not so optimistic.
The assumption that progress is always good seems, to me, naive. Looking back at history (the Roman Empire, European colonialism, slavery, the Native American genocide, the Third Reich), when one nation holds a technological advantage, it has often started by using it to dominate or destroy its rivals.
In more recent tech history, we have already seen with social media how technological advances are exploited by bad actors. Platforms like Facebook (now Meta) have been implicated in scandals such as Cambridge Analytica, which used personal data to manipulate electoral outcomes. Twitter has repeatedly been criticized for enabling and amplifying divisive figures (starting with Donald Trump) and for spreading misinformation, contributing to political polarization. Meanwhile, TikTok and Instagram face ongoing lawsuits over privacy breaches, discrimination, and harmful effects on mental health.
Why should we expect the rise of AI to be different?
What will become of humans… and God?
As for the closing chapter: what will humans do when they are no longer needed in most of the production process, or in progress itself? I frankly don't know.
A few AI pioneers come to mind here, notably Claude Shannon's quip that he could visualize a time when we will be to robots what dogs are to humans.
So, we should start by treating our dogs kindly. Maybe the future for humans looks more like that of dogs, or even worse, that of cattle or horses. At best? Cats!
Last but not least, there is no mention of what would become of God if a superior AI intelligence is brought to “life”. All religions consider human life unique and sacred. How will they adapt to a world run by AIs?
Now, I do have one more question, and I frankly don't know whether it is naive or an “elephant in the room” question.
Why do VCs invest in OpenAI and Anthropic if the end game is to make intelligence a commodity?
If powerful AI brings effectively unlimited intelligence, the value of intelligence should drop toward zero (simple supply and demand).
Companies like Anthropic and OpenAI are nothing but “intelligence”: engineers with exceptional abilities, patents, know-how… So why would an investor back companies whose ultimate goal is to disrupt or destroy the value of the very IP and intelligence they are supposed to create?
Are they driven by pure philanthropy and the belief in a better world, or do they expect some kind of massive, shared prosperity?
I’d really love to see what that VC pitch looks like.