Reactions to 'Situational Awareness'

Amy and I dropped Adam off at Union College for his sophomore year yesterday. Union recently renamed its sports teams from the "Dutchmen" to the "Garnet Chargers."

What the hell is a "Garnet Charger" -- a horse? Something you use to power your phone? I get that they wanted to pay homage to their location in Schenectady, the "electric city." But why not the "Dynamos"?

Before we headed to upstate New York, I finally finished "Situational Awareness" by Leopold Aschenbrenner, which everyone and his cat has asked me about. (Note the slightly less abrupt transition.)

My read: everybody has been talking about this paper for the wrong reasons -- forget about the predictions; focus on what companies and democratic countries need to do to manage GenAI risks.

First off, I want to hear whether and how Aschenbrenner used GenAI to write this paper. Clearly a ton of work went into it, and I want to know how much Aschenbrenner accelerated the process by using the tools at his disposal (and which ones!).

Lots of buzz about Aschenbrenner predicting that orders-of-magnitude increases in the scale of LLMs will lead to artificial general intelligence in the next decade. The brain is a big neural network, and in the next few years the world will build models with as many connections as exist in the human brain. Oh, and GenAI models will be able to help build the next generation of GenAI models. That's it.

(As a note, none of this detracts from GenAI as a devastatingly effective tool. Imagine the ability to build and run agent-based models of economic competition between companies and strategic competition between states; a toy sketch of what I mean follows.)
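To make that parenthetical concrete, here is a minimal, hypothetical sketch of the kind of agent-based model I have in mind: a few firms repeatedly investing in AI capability, with revenue flowing toward the more capable ones. Every class name, parameter, and update rule below is my own illustrative assumption, not anything from Aschenbrenner's essay.

```python
# Toy agent-based model of economic competition between firms.
# All rules and numbers are illustrative assumptions.
import random


class Firm:
    def __init__(self, name, capability, cash):
        self.name = name
        self.capability = capability  # abstract "AI capability" level
        self.cash = cash              # resources available to invest

    def invest(self):
        # Each round, spend a random fraction of cash; investment raises
        # capability with diminishing returns (square-root of spend).
        spend = self.cash * random.uniform(0.05, 0.20)
        self.cash -= spend
        self.capability += spend ** 0.5


def run_simulation(firms, rounds=20, market_revenue=100.0):
    for _ in range(rounds):
        for firm in firms:
            firm.invest()
        # Toy assumption: revenue is split in proportion to capability.
        total = sum(f.capability for f in firms)
        for firm in firms:
            firm.cash += market_revenue * firm.capability / total
    return sorted(firms, key=lambda f: f.capability, reverse=True)


if __name__ == "__main__":
    random.seed(0)
    firms = [Firm("A", 1.0, 50.0), Firm("B", 1.0, 50.0), Firm("C", 1.0, 50.0)]
    for f in run_simulation(firms):
        print(f"{f.name}: capability={f.capability:.1f}, cash={f.cash:.1f}")
```

Swap the toy update rule for real strategy logic and data and you have the skeleton of the kind of simulation the parenthetical imagines.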

In Jonathan Haidt’s book “The Happiness Hypothesis” (which everyone should read), he explains for a layperson how brain scientists currently understand the way different parts of the brain work together. I probably understood about a third of it, but I managed to take away that the brain, unlike a neural network, is not a monolith.

In particular, Haidt makes a distinction between conscious and subconscious processing, both of which are essential in getting yourself to the coffee shop to buy a cup of coffee. He doesn’t speak at all about GenAI or LLMs, but I wonder if they more closely resemble the incredibly fast pattern matching that our subconscious minds perform than the deliberate thinking we associate with our conscious minds.

Aschenbrenner doesn’t mention brain science at all in his missive. This may not be surprising. MIT Technology Review interviewed neuroscientist and entrepreneur Jeff Hawkins about the relationship between brain science and AI research. He argues:

The sections on AI challenges are the most compelling. Training next-generation models will require massive amounts of compute consuming massive amounts of power; training the models of 2030 could require 20 percent of current US generation capacity. Will democratic states create regulatory regimes that enable this type of power consumption, or will GenAI go “off-shore” to states with more hospitable regulatory regimes? Also, do GenAI leaders have the cybersecurity programs in place to prevent totalitarian states from exfiltrating precious intellectual property? Aschenbrenner suspects not, and asks why GenAI leaders don’t have cybersecurity programs like the ones at quantitative hedge funds. He also pushes for more public-private partnership here, which may be much harder to make work.
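For a sense of the arithmetic behind that power claim, here is a rough back-of-the-envelope sketch. Every number in it is an illustrative assumption of mine (cluster size, per-chip draw, average US generation), not a figure taken from the essay.

```python
# Back-of-the-envelope power math for a hypothetical 2030-scale training cluster.
# All three constants are placeholder assumptions, not sourced figures.
ACCELERATORS = 50_000_000          # hypothetical number of chips in the cluster
WATTS_PER_ACCELERATOR = 2_000      # chip plus cooling and networking overhead
US_AVG_GENERATION_GW = 500         # roughly 4,300 TWh/yr of US output averaged over the year

cluster_gw = ACCELERATORS * WATTS_PER_ACCELERATOR / 1e9
print(f"Cluster draw: {cluster_gw:.0f} GW")
print(f"Share of average US generation: {cluster_gw / US_AVG_GENERATION_GW:.0%}")
```

Under these assumptions the cluster draws about 100 GW, which is how you land in the neighborhood of 20 percent of what the US currently generates.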

Tom Hoffman

AI & Automation | Gemba | ex-Fremantle

1 month ago

The question of whether we're pursuing neural complexity or simply optimizing functional algorithms struck a chord with me. It's interesting to think about how we might need to rethink our entire approach to AI development, especially as we grapple with the regulatory and environmental implications. Perhaps it's not just the technology but the philosophy behind it that needs reimagining.

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

1 month ago

It's commendable that you're urging a shift in focus from speculative AGI discussions to tangible actions against GenAI risks. The rapid pace of development in this field can feel overwhelming, making it easy to get caught up in hypothetical scenarios. How are you seeing companies and governments specifically address the ethical implications of data bias in GenAI models?
