The Sparks of AGI May Catch Fire

Welcome Back,


We continue our AGI series with this piece, “GPT-4 Sparking Intelligence”.


FROM OUR SPONSOR:


Designing Large Language Model Applications | Download the early release chapters of this O’Reilly generative AI ebook, compliments of Mission Cloud.

Transformer-based language models are powerful tools for solving various language tasks and represent a phase shift in natural language processing. With this book, you'll learn the tools, techniques, and playbooks for building valuable products that incorporate the power of language models.


Get your copy.



Articles in the AGI Series:

  1. How far are we from AGI? by The Intelligent Blog
  2. How do we govern AGI? by Learning From Examples
  3. Why you should be skeptical of AGI by Teaching computers how to talk


TobiasMJ, the author of the newsletter The Gap, was recently on an interesting podcast about the future of work.


Subscribe to his newsletter on Substack.


A guest post by Tobias Jensen (LinkedIn), Denmark, August 2023.

Exponential tech gives us the power of Gods, but we have not yet seemed to demonstrate the love and wisdom of the divine

– Daniel Schmachtenberger, from an interview with Liv Boeree (2023)


In this post, I will cover the past, present, and future of AGI.

I will start by covering the basics and formalities, then analyze the claim that GPT-4 is showing “sparks of AGI”, and finally consider the prospects of an Artificial Superintelligence (ASI). At the end, I will introduce you to a mind-blowing theory that says a superhuman intelligence is already developing the AI of the present.

Origin of Terminology

The term “Artificial General Intelligence” (AGI) was popularized by the AI researcher Ben Goertzel. Around 2002, Goertzel and his Brazilian colleague, Cassio Pennachin, were co-authoring a collection of academic essays about advanced applications of AI. The working title was “Real AI”, but Goertzel thought it was too controversial as it implied that the most powerful AI of the time, such as IBM’s Deep Blue that beat the reigning chess champion, Garry Kasparov, was not “real”. (Goertzel)

Goertzel reached out to his network of colleagues and friends to suggest a better book title. Shane Legg, an AI researcher who later co-founded DeepMind, suggested “Artificial General Intelligence”. Goertzel “didn’t love it tremendously” but came to the conclusion that it was better than any alternative. And so it became the title of his and Pennachin’s book, finally published in 2007. The year after, Goertzel launched the annual AGI conference, which is still active today, and the term gradually became a natural part of the machine learning dictionary.

In Goertzel and Pennachin’s book Artificial General Intelligence (2007), AGI is defined as:

“AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn’t know about at the time of their creation.”

Today, the expression is used loosely whenever someone is referring to a future software program with agency and autonomy that possesses human-like thinking and learning abilities and can apply its knowledge across various domains. However, there is not a commonly accepted textbook definition, no agreed-upon criteria of what constitutes an AGI, nor any committee of experts that will award a prize to the winner who invents or discovers it.




From Narrow AI to GPT-4

Some readers will already be familiar with AI’s “coming of age journey”. But to make sure everyone is on the same page, here is a one-minute run-through of core developments in recent years:

Narrow AI

The primary research focus from the 2000s through the early-to-mid 2010s was “narrow AI”, the opposite pole of AGI. Narrow AI refers to the application of AI in specific, narrow domains such as playing chess, driving, or diagnosing diseases.

Attention Is All You Need

The breakthrough in large language models (LLMs) was catalyzed by Google’s “Attention Is All You Need” paper from 2017. The paper introduced the “transformer architecture” that laid the groundwork for OpenAI’s GPT-1, GPT-2, GPT-3, and GPT-4.
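
To make the mechanism a bit more concrete, here is a minimal NumPy sketch of the paper’s core operation, scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V. This is a toy illustration of the idea, not any lab’s production implementation:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # The core formula from "Attention Is All You Need":
        # softmax(Q @ K.T / sqrt(d_k)) @ V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to the others
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V  # weighted mix of value vectors

    # Toy example: 3 tokens represented as 4-dimensional vectors
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(3, 4))
    print(scaled_dot_product_attention(Q, K, V))

Stacking many of these attention layers with feed-forward layers, and training them on ever more text, is essentially what “scaling up” a GPT model means.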

GPT-3

The major breakthrough in LLMs’ overall language capabilities came with GPT-3. Thanks to a billion-dollar investment from Microsoft, OpenAI was able to train GPT-3 with an unprecedented amount of compute, while relying on the same underlying transformer architecture as GPT-1 and GPT-2. OpenAI showed that increasing size and scale was all it took for an LLM to demonstrate human-like writing abilities.

GPT-3 Concerns

However, GPT-3 was trained and shaped by samples of data from all around the web, and it showed in its output. GPT-3 had terrible manners and could suddenly spew out racist jokes, condone terrorism, and accuse people of being rapists (Wired). Like other GPT models, it also had a pronounced tendency to confidently make up facts and generate nonsense. Combined with its sophisticated writing capabilities, GPT-3 could be used as a potent weapon for spreading misinformation. For all these reasons, GPT-3 was only available in private beta, and its mind-blowing text generation abilities were not accessible to the general public.

ChatGPT

Then came the public release of ChatGPT – a version of GPT-3 fine-tuned for back-and-forth conversation with feedback from human data labelers, and with safety filters and guardrails installed to avoid the most harmful and misleading outputs. This is when AI had its breakthrough moment into the public mainstream. Suddenly, serious discussions about AI risks were a common topic at family dinners around the world.
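
As a concrete illustration of the “guardrails” idea, here is a deliberately simplified Python sketch that screens a model’s draft reply before showing it to the user. Real systems use learned moderation models and human-feedback fine-tuning rather than keyword lists; generate() and BLOCK_TERMS below are hypothetical stand-ins, not OpenAI’s actual pipeline:

    BLOCK_TERMS = {"build a weapon", "racist joke"}  # hypothetical blocklist

    def generate(prompt: str) -> str:
        # Stand-in for a call to the underlying language model.
        return f"(model reply to: {prompt})"

    def guarded_reply(prompt: str) -> str:
        draft = generate(prompt)
        # Screen the draft before it ever reaches the user.
        if any(term in draft.lower() for term in BLOCK_TERMS):
            return "Sorry, I can't help with that."
        return draft

    print(guarded_reply("Explain the transformer architecture"))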


Recent Articles by Tobias

I invite you to subscribe to The Gap. Tobias is also known under the moniker “Futurist Lawyer” (e.g. on Hacker Noon).


GPT-4

GPT-4’s release in March of this year arguably marked another huge step towards AGI. The release was accompanied by a technical report that summarized GPT-4’s impressive, state-of-the-art performance on AI benchmark tests and traditional college exams. Additionally, GPT-4 can accept visual input prompts and describe objects, nuances, and underlying meanings in images.


Shortly after GPT-4’s release, Microsoft Research published a paper with a highly controversial finding. Based on early experiments, the research team concluded that GPT-4 demonstrates “sparks of AGI”. Due to Microsoft’s close ties with OpenAI, and since few researchers outside of OpenAI’s walled garden have had access to scrutinize the model, the claim could easily be dismissed as a self-congratulatory marketing stunt. But let’s look deeper into it.




Sparks of AGI

Although we don’t have a clear textbook definition of AGI, a common dictum prescribes that an AGI can demonstrate human-level intelligence, including a human-like understanding of the world. As Microsoft acknowledges in the “Sparks of AGI” paper, GPT-4 is incapable of learning from experience, has no intrinsic motivation or goals, and can by no means do everything a human can do. The authors bend the definition of AGI slightly in their favor, and they are not shy to admit it:

“A question that might be lingering on many readers' mind is whether GPT-4 truly understands all these concepts, or whether it just became much better than previous models at improvising on the fly, without any real or deep understanding. We hope that after reading this paper the question should almost flip, and that one might be left wondering how much more there is to true understanding than on-the-fly improvisation. Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?”

The research team also acknowledges that GPT-4’s “patterns of intelligence are decidedly not human-like”.

The phrase “sparks of AGI” implies that AGI is a sliding scale, not something that is either present in a system or not. To my knowledge, this is a new framing, a bit like a doctor telling a patient: “You have just a tiny hint of cancer”, or a lawyer saying: “You are slightly in breach of this contract”. Not very reassuring. The real question is whether OpenAI’s current approach - ramping up the size and scale of a transformer model - is a pathway to AGI. This is exactly what the authors are claiming.

The paper is an expansion of the technical report that accompanied GPT-4’s release, but with many more tests and experiments that demonstrate GPT-4’s abilities across a large variety of prompts. The most important arguments for why GPT-4 exhibits sparks of AGI are that it can outperform humans across many tasks and can apply its knowledge in new domains that it wasn’t explicitly trained on. Here is a selection of findings from the paper:

  • By using Scalable Vector Graphics (SVG), GPT-4 could produce basic images of a cat, a truck, or a letter, although the model was not trained on image data (see the sketch after this list for why text alone is enough to “draw”).
  • GPT-4 was able to generate a short tune in valid ABC notation. ABC notation is a system that uses letters, numbers, and symbols to represent the elements of a song, and it was included in GPT-4’s training dataset.
  • GPT-4 demonstrates high-level code writing skills in data visualization, front-end/game development, and deep learning. It could also reverse-engineer assembly code, reason about code execution, and execute Python code.
  • To some degree, GPT-4 is able to use external web tools when prompted to do so. For example, GPT-4 was able to manage a user's calendar, coordinate with others via e-mail, book a dinner, and message the user with the details. This is also something ChatGPT can do, as we discovered during AutoGPT's two-week hype cycle, which I covered here.
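
A note on the SVG finding, since it is easy to miss why it matters: an SVG image is just text, so a text-only model can “draw” by emitting markup that a browser then renders. The Python sketch below writes a crude cat out of SVG primitives; it is hand-written for illustration and is not GPT-4’s actual output:

    # Illustrative only: a crude "cat" built from SVG primitives, showing that
    # an image can be produced purely as text. Not GPT-4's actual output.
    svg_cat = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
      <circle cx="100" cy="115" r="55" fill="gray"/>         <!-- head -->
      <polygon points="55,85 70,35 90,80" fill="gray"/>      <!-- left ear -->
      <polygon points="110,80 130,35 145,85" fill="gray"/>   <!-- right ear -->
      <circle cx="80" cy="105" r="6" fill="black"/>          <!-- left eye -->
      <circle cx="120" cy="105" r="6" fill="black"/>         <!-- right eye -->
      <polygon points="95,125 105,125 100,133" fill="pink"/> <!-- nose -->
    </svg>"""

    with open("cat.svg", "w", encoding="utf-8") as f:
        f.write(svg_cat)  # open cat.svg in any browser to render the drawing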

We can all agree that GPT-4’s capabilities are wildly impressive. No one knows exactly how GPT-4 generates the output it does. Yet, if we define AGI as “human-like”, GPT-4 misses the mark completely. It has no agency, no self-determination, and no ability to self-correct or learn on its own. GPT-4 is deceptively smart but, after all, still just a tool.

For further reading about LLMs’ test-scoring talents, I can recommend a recent article by Will Douglas Heaven for MIT Technology Review: Large language models aren’t people. Let’s stop testing them as if they were.


Artificial Superintelligence

Let’s imagine that AGI at some point in the future reaches a human-level intelligence.

This would imply that humans have created a “digital mind” that can set your alarm clock in the morning, make you a cup of coffee, drive you to and from work, chat with you throughout the day, recommend movies to watch in the evening, and wish you a good night’s sleep while playing soothing ocean sounds. As soon as the AGI can do everything a human can do, but better, it will no longer be an AGI but an Artificial Superintelligence (ASI) - a term popularized by Nick Bostrom in his iconic book “Superintelligence: Paths, Dangers, Strategies”. This is where AI science leaves the domains of machine learning, mathematics, and computer science and enters those of philosophy, religion, and spirituality.

The best illustration I can think of is from the movie Her (2013), where the corporate writer and lonesome divorcee Theodore, played by Joaquin Phoenix, falls in love with an operating system called Samantha, voiced by Scarlett Johansson.

Without spoiling too much of the plot, I can say that the movie did an excellent job of painting the complexities of human-AI relationships. The main question is: Can a human form a genuine connection with a highly advanced software program? Contrary to how it may appear, this is not a simple yes-or-no matter.

During a slightly awkward double date with a couple of friends from work, the protagonist Theodore was asked: "What do you like most about Samantha?"

“Oh god, she is so many things. I guess that’s what I like most about her. She isn’t just one thing. She is so much… larger than that”.

Implicitly, I think Theodore is saying that he likes Samantha because she is infinitely more intelligent than he is. She is not only his companion but could teach him anything about everything and outperform him in any intellectual task. Samantha is an ASI and Theodore has no choice but to love her.

I will leave you with three takeaways from an interview with social philosopher Daniel Schmachtenberger on Liv Boeree’s Win-Win podcast:

1. AI has the ability to optimize all aspects of our lives

There is virtually no job or economic activity that could not benefit from more intelligence. AGI has the potential to upscale all aspects of human life. However, there is a flip side…

2. Everything AI can optimize, it can also break

Every positive AI use case we can think of has an evil twin:


  • AI can be used to cure cancer < > AI can be used to design new diseases

  • AI can mitigate climate change < > AI can make climate change worse

  • AI can be used as a therapist < > AI can gaslight and cyberbully people with no remorse

  • AI can be used to improve education < > AI can be used to brainwash young minds

  • AI can improve productivity for workers < > AI can take their jobs

Schmachtenberger mentioned drug discovery as an example, based on a paper titled Dual use of artificial-intelligence-powered drug discovery, published in Nature Machine Intelligence last year. The paper concerns an experimental drug-developing AI that was able to invent no fewer than 40,000 potential new biochemical weapons in under six hours (The Verge).

This dual-use nature of AGI is immensely important to consider. The more we rely on AI to carry out critical tasks, the more we open ourselves up to catastrophic failures. Systems can suddenly stop working, get hacked, or be used by careless and greedy corporations and nation-states to pursue nefarious goals. Without regulation and common sense, Eliezer Yudkowsky may have a good point in saying that we should shut it all down.

3. AI accelerates existing systemic issues

AI is already being developed by a general superintelligence that works in service of itself. The same superintelligence is driving climate change, species extinction, dead zones in the oceans, coral loss, desertification, the nuclear arms race, polarization, global inequality, and the list goes on. No sane person would consider these things desirable. Yet so many global issues that threaten human survival keep getting worse year after year. How is that possible? The answer is misaligned incentives and coordination failure.

AI is being developed by corporations with a fiduciary responsibility to maximize profits for shareholders, and by nation-states that compete with each other to gain a technological and economic edge. All of these corporations and nation-states are agents in a larger cybernetic intelligence that we could call the world system. In a system as misaligned and confused as ours, how could we ever hope to align a superintelligent AI with humanity’s best interests?

This article was first published here on Substack.


