Why everyone seems to disagree on how to define Artificial General Intelligence

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m looking at the ongoing debate around the timeline for Artificial General Intelligence. Also, Stanford has a new transparency report card for AI developers, and Marc Andreessen goes off-leash in his latest blog entry.

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.


At the TED AI conference in SF, little consensus on AGI

We are just now seeing the first applications of generative AI, but lots of people in the AI field are already thinking about the next frontier: Artificial General Intelligence. AGI was certainly on the minds of many at the TED AI conference in San Francisco Tuesday. But I didn’t hear a lot of consensus about when AGI systems will arrive, or even about how we should define AGI in the first place.

The term AGI usually describes systems that can learn to accomplish any intellectual task that human beings can perform. Others say AGI refers to systems that can learn completely new tasks without the help of explicit instructions or examples in their training data. Ilya Sutskever, the chief scientist at OpenAI (whose goal is to eventually build AGI systems), gave a fairly conventional (if vague) definition on stage at the TED conference on Tuesday, saying that meeting the bar for AGI requires a system that can be taught to do anything a human can be taught to do. But OpenAI has used a less-demanding definition in the past, defining AGI as systems that surpass human capabilities in a majority of economically valuable tasks. One source at the event who spoke on the condition of anonymity told me that AI companies are beginning to manipulate the definition of the term in order to lower the bar for claiming AGI capabilities. The first company to achieve AGI under some definition would get lots of attention, and probably an increase in value.

Most of the AI industry believes that transformer models (like the one that powers ChatGPT) are the path to AGI, and that dramatic progress on such models has shortened the timeline for reaching that goal. Microsoft researchers say they’ve already seen “sparks” of AGI in GPT-4 (Microsoft reportedly holds a 49% stake in OpenAI’s for-profit arm). Anthropic CEO Dario Amodei says AGI will arrive in just two to three years. DeepMind cofounder Shane Legg predicts that there is a 50% chance AGI will arrive by 2028.

The definition matters because it could affect how quickly AI companies build safety features into their models to mitigate the potential harms of such systems, which are very real. Not only could powerful AGI be used by bad actors to harm others, but it seems possible that such systems could even grow and learn independently of human beings. Obviously, tech companies should be spending a lot of time and energy on safeguarding the models they’ve already built. And they are investing in safety (and certainly talking a lot about it). But a type of arms race is underway, and the economic carrot of building bigger and more performant models is overwhelming any idea of developing AI in slower, safer ways.


Stanford releases its transparency report card for AI titans

Earlier today, Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) released its inaugural Foundation Model Transparency Index (FMTI), plainly laying out the parameters for judging a model’s transparency. Co-developed by a multidisciplinary team from Stanford, MIT, and Princeton, the FMTI grades companies on their disclosure of 100 different aspects of their foundation models, including how the tech was built and how it’s used in actual applications.

HAI was among the first to warn about the dangers of large AI models, and to suggest the tech ought to be developed in the open, in full view of the AI research community and the public. But as AI becomes a big business, and competition to build the best models intensifies, transparency has suffered, says HAI’s Percy Liang, who directs the Stanford Center for Research on Foundation Models.

This initial version of the index, which Liang says will be updated on an ongoing basis, grades the 10 biggest model developers (OpenAI, Anthropic, Meta, et al.) and finds that, indeed, there’s lots of room for improvement. Meta (Llama 2) and Hugging Face (BLOOMZ) were the only companies that scored higher than 50% on transparency. Interestingly, Anthropic, an offshoot of OpenAI with a focus on safety and transparency, scored lower than OpenAI.


Marc Andreessen: The poster boy for Silicon Valley’s “naive optimism”

In his recent book, The Coming Wave, DeepMind cofounder Mustafa Suleyman describes the Silicon Valley “naive optimist” as someone who willingly ignores the possible ill effects of new technology (in this case, AI) and presses forward without giving much thought to building in safeguards. Think: someone who moves fast and breaks things (like children’s self-esteem, or democracy). Superinvestor Marc Andreessen’s latest screed, titled “The Techno-Optimist Manifesto,” seems to epitomize everything Suleyman warns against. Andreessen, whose net worth reportedly sits at around $1.8 billion, is a longtime investor in AI companies, and stands to reap major rewards if some of his bets pay off. Here are a few rich excerpts from Andreessen’s piece:

  • “We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone—we are literally making sand think.”

  • “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”

  • “Our enemy is the Precautionary Principle, which would have prevented virtually all progress since man first harnessed fire. . . . It is deeply immoral, and we must jettison it with extreme prejudice.”

Nowhere in Andreessen’s piece, as Axios’ Ryan Heath wisely points out, do the words “unintended consequences,” “global warming,” or “climate change” appear.

While some, including OpenAI’s Sam Altman, publicly call for regulations on AI development, I suspect that many Silicon Valley tech leaders agree with Andreessen. Many believe that AI will bring unprecedented wealth and abundance, and they can’t wait to realize those rewards. But, if Andreessen’s manifesto is any guide, there’s still a dearth of concern for the consequences.


