Accelerating to 2027?

The AI Debate is on Fire


I had not heard of Leopold Aschenbrenner until yesterday. I was meeting with Faraj Aalaei (a SignalRank board member) and my colleague Rob Hodgkinson when they began to talk about “Situational Awareness,” his essay on the future of AGI, and its likely speed of emergence.

So I had to read it, and it is this week’s essay of the week. He starts his 165-page epic with:

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them.

So, Leopold is not humble. He finds himself “amongst” the few people with situational awareness.

As a person prone to bigging up myself, I am not one to prematurely judge somebody’s view of self. So, I read all 165 pages.

He makes one central point: the growth of AI capability is accelerating. More is being done at lower cost, and if the trend continues, we will have superintelligence by 2027. At that point, billions of skilled bots will solve problems at a rate we cannot imagine, and they will work together, with little human input, to do so.

His case is developed by extrapolating from current trends. According to Leopold, all you have to believe in is straight lines: the consistent exponential growth in compute and capability that shows up as a straight line on a log-scale chart.
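To make “straight lines” concrete, here is a toy sketch of that kind of trendline extrapolation. The growth rate and numbers are invented for illustration, not taken from the essay:

```python
# A toy sketch of "straight line" (log-linear) extrapolation.
# Assumption: effective training compute grows ~0.5 orders of magnitude
# (OOMs) per year -- a straight line on a log-scale chart. The rate is
# hypothetical, chosen only to show how the projection works.

OOMS_PER_YEAR = 0.5   # hypothetical growth rate
BASE_YEAR = 2024      # normalize the 2024 frontier to 1x

for year in range(2024, 2028):
    multiplier = 10 ** (OOMS_PER_YEAR * (year - BASE_YEAR))
    print(f"{year}: ~{multiplier:,.0f}x {BASE_YEAR} effective compute")

# Prints roughly: 2024 ~1x, 2025 ~3x, 2026 ~10x, 2027 ~32x.
# The whole forecast rests on the line staying straight.
```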

He also has a secondary narrative related to safety, particularly the safety of models and their weights (the parameters through which they produce their results).

By safety, he does not mean the models will do bad things. He means that third parties, namely China, can steal the weights and reproduce the results. He sees the poor security surrounding today’s models as the core problem, and he deems governments unaware of the danger.

Although German-born, he argues in favor of a US-led effort to treat AGI as a weapon to defeat China, and he warns of dire consequences if the US does not win that race. He sees the “free world” as in danger unless it prevents others from reaching the level of capability he predicts, on the timeline he predicts.

At that point, I felt I was reading a manifesto for World War Three.


Here is how he summarizes his stance:

But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism. The core tenets are simple:

Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn’t some random community of coders writing an innocent open source software package; this isn’t fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it’ll be the most important thing we ever do.

America must lead. The torch of liberty will not survive Xi getting AGI first. (And, realistically, American leadership is the only path to safe AGI, too.) That means we can’t simply “pause”; it means we need to rapidly scale up US power production to build the AGI clusters in the US. But it also means amateur startup security delivering the nuclear secrets to the CCP won’t cut it anymore, and it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first.

We need to not screw it up. Recognizing the power of superintelligence also means recognizing its peril. There are very real safety risks; very real risks this all goes awry—whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we’re summoning is one we cannot yet fully control. These are manageable—but improvising won’t cut it. Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered.

As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty.


I persisted in reading it, and I think you should, too—not for the war-mongering element but for the core acceleration thesis.

My two cents: Leopold underestimates AI's impact in the long run and overestimates it in the short term, but he is directionally correct.

Anthropic released Claude 3.5 Sonnet today. It is far faster than the impressive Claude 3 generation (released a few months ago) and costs a fraction as much to train and run. It is also more capable. It accepts text and images, and a new feature called “Artifacts” allows it to run code, edit documents, and preview designs.


Claude 3.5 Opus is probably not far away.

“Situational Awareness” projects trends like this into the near future; Leopold’s views are extrapolations from that trajectory.

Contrast that paper with “ChatGPT is Bullshit,” a paper from the University of Glasgow in the UK. The three authors contest the accusation that ChatGPT hallucinates or lies. They argue that because it is a probabilistic word-finder, it spouts bullshit: it can be right, and it can be wrong, but it does not know the difference. It’s a bullshitter.
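To see what “probabilistic word-finder” means in practice, here is a minimal sketch of next-token sampling; the vocabulary and scores are invented for illustration, not taken from the paper:

```python
import math
import random

# A language model assigns a score (logit) to each candidate next word,
# converts the scores to probabilities, and samples one. Nothing in this
# process checks whether the finished sentence is true.

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "Lyon", "London", "Rome"]   # hypothetical candidates
logits = [4.0, 1.0, 0.5, 0.5]                 # hypothetical model scores

probs = softmax(logits)
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(f"The capital of France is {next_word}.")

# Usually "Paris", occasionally something false. The sampler tracks
# likelihood, not truth -- which is exactly the authors' point.
```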

Hilariously, they define three types of BS:

Bullshit (general): Any utterance produced where a speaker has indifference towards the truth of the utterance.

Hard bullshit: Bullshit produced with the intention to mislead the audience about the utterer’s agenda.

Soft bullshit: Bullshit produced without the intention to mislead the hearer regarding the utterer’s agenda.

They then conclude:

With this distinction in hand, we’re now in a position to consider a worry of the following sort: Is ChatGPT hard bullshitting, soft bullshitting, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft bullshitting. However, the question of whether these chatbots are hard bullshitting is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions.

This is closer to Gary Marcus’s point of view in his ‘AGI by 2027?’ response to Leopold, which is also included below.

I think the reality is somewhere between Leopold and Marcus. AI is capable of surprising things, given that it is only a probabilistic word-finder. And its ability to do so is becoming cheaper and faster. The number of times it is useful easily outweighs, for me, the times it is not. Most importantly, AI agents will work together to improve each other and learn faster.

However, Gary Marcus is right that reasoning and other essential decision-making capabilities do not follow logically from an LLM’s approach to knowledge. So, without additional or perhaps different elements, there will be limits to how far it can go. Gary probably underestimates what CAN be achieved with LLMs (indeed, who would have thought they could do what they already do), and Leopold probably overestimates how high the ceiling is and how fast it will be reached.

It will be fascinating to watch. I, for one, have no idea what to expect except the unexpected.

OpenAI co-founder Ilya Sutskever weighed in, too, with a new AI startup called Safe Superintelligence Inc. (SSI). The most important word here is superintelligence, the same word Leopold used. The next phase is focused on higher-than-human intelligence, which can be reproduced billions of times over to create superintelligence at scale.

The Expanding Universe of Generative Models piece below puts smart people in a room to discuss these developments. Yann LeCun, Nicholas Thompson, Kai-Fu Lee, Daphne Koller, Andrew Ng, and Aidan Gomez are the participants.

Matt Cartwright BEM 柯明龙

Operations, AI Sustainability, and Leadership | Founder and Host of the Preparing for AI Podcast | China specialist | Whisky expert | Kids baseball coach

5 months ago

Excellent read as always. I will be recommending this to all the friends who have an interest in AI but are not going to go anywhere near the full article, or even Zvi Mowshowitz's summary. I'm probably in agreement with you that it is somewhere between Aschenbrenner and Marcus. I have real issues with Aschenbrenner's naive view that the US would necessarily develop AI in an altruistic way. My concern is with big tech as much as, if not more than, the US government.

Roger Sanford

Ransomware and recovery are a global threat! Hcare as the solution.

5 months ago

Keith, you are perhaps the last bastion of a free press and independent thinking. “America carrying the torch of liberty” is a stretch in these fear-based, greed-driven times, IMHO. Initiatives like the United Nations AI ethics group (my friend and associate Brianna Brownell is part of it), efforts like Heidi Lorenzen’s “The Community Code”, Stanford’s work, and of course our own ethics-based AI for healthcare are all participants in giving Superintelligence a conscience! Xi and Putin (& Co.) would use Superintelligence to enslave, repress, and control. America must treat this NOT as an American dominance issue; that was the wrong conclusion of WWII. This MUST be a global agreement for the good of humanity and our evolution. This ongoing battle is a classic epoch of right vs wrong, heart vs logic, good vs evil, power over vs power through. Only this time, IMHO, there will be no “do-overs”, no “mulligans”, no “oops”. This time it’s for all the chips. Einstein famously said, “I don’t know what weapons will be used in World War III, but I know World War IV will be fought with sticks and stones.”

