AI's quest for general intelligence

In my daily hunt for interesting articles, there is always at least one that touches on either Artificial General Intelligence or consciousness.

AGI is the ultimate goal of machine learning and AI research.

But what is the measure of a generally intelligent machine?

I will be presenting the results of my research in this article.

If you stick around till the end, you can get a copy of my latest book: Exploring consciousness - a guide for AI students ($35 at Amazon)

Before we start!

If you like this topic and you want to support me:


  1. Comment on the article; LinkedIn appreciates that and it will really help spread the word
  2. Connect with me on LinkedIn
  3. Subscribe to TechTonic Shifts to get your daily dose of tech


What is Artificial General Intelligence?

As expected for such a complex and impactful topic, definitions vary:

  • I would define AGI as an intelligence that is not specialized in any particular task, as has historically been the case with AI. Most AIs today are focused on one problem, and they’re extremely good at solving that problem—often better than humans. As an example, an AI beat human experts at chess more than 20 years ago, but that AI could not read a book, plan its day or do anything else that humans can. Other AIs exist for assessing bank loans, diagnosing diseases, forecasting natural disasters and so on. In contrast, artificial general intelligence could do all of these things.
  • ChatGPT defines AGI as “highly autonomous systems that have the ability to outperform humans at nearly any economically valuable work. AGI is often contrasted with narrow or specialized AI, which is designed to perform specific tasks or solve particular problems but lacks the broad cognitive abilities associated with human intelligence. The key characteristic of AGI is its capacity for generalization and adaptation across a wide range of tasks and domains.”



Richard Feynman was a well-known physicist (article: Richard Feynman's notebook method in a modern age with Obsidian and Zeta Alpha). He is often quoted as saying: "I imagine a day when machines don't just calculate equations, but debate the meaning of life over a glass of wine. When that day comes, we'll know they've caught up to us—at least in spirit".

And in 1970 computer scientist Marvin Minsky predicted that soon-to-be-developed machines would “read Shakespeare, grease a car, play office politics, tell a joke, have a fight”. Years later the “coffee test”, often attributed to Apple co-founder Steve Wozniak, proposed that AGI will be achieved when a machine can enter a stranger’s home and make a pot of coffee.

Albert Einstein, in all of his wisdom, is said to have been a bit wary of thinking machines: "I do not fear the machines that calculate faster than us. I fear the machines that can understand our jokes and still choose to stay silent".

Given the recent OpenAI news, it is particularly opportune that OpenAI’s chief scientist, Ilya Sutskever, presented his perspective on AGI just a few weeks ago at TED AI. Here are some takeaways:

  • He described a key tenet of AGI as being potentially smarter than humans in anything and everything, with all of human knowledge to back it up.
  • He also described AGI as having the ability to teach itself, thereby creating new, even potentially smarter AGIs.

Sutskever's TED talk on AGI:

As we can see, AGI spans many dimensions. The ability to perform generalized tasks implies that AGI will affect the job market far more than the AIs that preceded it. For example, an AI that can read an X-ray and detect disease can assist doctors in their work.

However, an AGI that can read the X-ray, understand the patient’s personal history, make a recommendation and explain that recommendation to the patient with a kind bedside manner could conceivably replace the doctor entirely. The potential benefits and risks to world economies and jobs are massive. Add to those the ability for AGIs to learn and produce new AGIs, and the risk becomes existential. It is not clear how humanity would control such an AGI or what decisions it would make for itself.


Few people are in agreement

Experts in computer and cognitive science, and others in policy and ethics, often have their own distinct understanding of the concept (and different opinions about its implications or plausibility). Without a consensus it is difficult to interpret announcements about AGI or claims about risks or benefits.

And meanwhile, the term AGI is popping up like happy mushrooms on a dead piece of wood.

I see it mentioned with increasing frequency in press releases, interviews and computer science papers.

Microsoft researchers declared last year that GPT-4 shows “sparks of AGI”, and at the end of May this year OpenAI confirmed that it is training its next-generation machine-learning model, one that would deliver the “next level of capabilities” on their “path to AGI”.

Meanwhile, renowned futurist Ray Kurzweil predicted decades ago that AGI would arrive before 2030. SoftBank CEO Masayoshi Son kicked up the AI storm further, claiming that AGI will become a reality by 2030 as well.

He backed up his claim by saying AGI will be ten times more capable and powerful than human intelligence. How realistic that is, is debatable; nobody has a perfect prophecy about the capabilities of AGI. Will AGI solve real-world problems, or create real problems for the new world? Will it shape up as contemplated, or remain a futuristic fantasy?

But to know how to talk about AGI, test for AGI and manage the possibility of AGI, we’ll have to get a better grip on what it actually describes.

Joanna Bryson, an ethics and technology professor at the Hertie School in Germany who was immersed in AI research at the time, viewed the term "AGI" as a pejorative, one that created an arbitrary and potentially harmful divide within the AI research community.

As Bryson sees it, the rise of "AGI" split computer scientists into two camps: those considered to be doing the "meaningful" work of pursuing human-like capabilities, and everyone else, seen as "spinning their wheels" on "limited" and "frivolous" aims (like ChatGPT, Midjourney, Suno, Deepnudes, etc.).

But the twist was that many of those "narrow" goals, like teaching computers to master games, chat with us, make music, or automate processes, ended up making significant contributions to machine intelligence.

And that laid the groundwork for leaps forward.


The path to AGI and Artificial Super Intelligence

Even when the definition is simplified to "a machine matching or exceeding human intelligence", we are left grappling with another elusive concept: intelligence itself.

Gary Lupyan, a cognitive neuroscientist and psychology professor, finds it very tricky to define or quantify intelligence, and especially "general intelligence."

Lupyan points out AI researchers' "overconfidence" when discussing intelligence and measuring it in machines.

Intelligence is a multifaceted, deeply human trait that we are still working to understand in ourselves, let alone in artificial creations. The overconfidence, of course, stems from the magic that AGI promises us.

But in our eagerness, we are probably underestimating the magnitude of the challenge.

As you dig deeper into the concepts underpinning AGI, like intelligence and generalizable cognition, you will notice that everything becomes more complex and uncertain. Perhaps this should leave us with humility, and the recognition that the path to AGI is going to be long and winding.

I think we may need to temper our confidence and be prepared to grapple with the profound intricacies of the human mind.

There is even more skepticism around general intelligence.

Alison Gopnik is a psychology professor, and she echoes Lupyan's criticism. She argues that "there's no such thing as general intelligence, artificial or natural". Different problems require different cognitive abilities, and no single form of intelligence is universally applicable.

She states that the notion of one-size-fits-all intelligence may be fundamentally misguided.

She suggests that the path forward may lie not in the pursuit of a singular, all-powerful AGI, but in the cultivation of a rich ecosystem of specialized intelligences, each contributing its unique strengths to the tapestry of human knowledge and accomplishment.

And that ties in neatly with a recent (August 31st, '24) interview with Dario Amodei of Anthropic (the company behind Claude).


The future of AGI according to Anthropic and OpenAI

According to Anthropic CEO Dario Amodei, the future of AGI may resemble more of a corporate takeover than a robot uprising.

He gave a recent interview on the Econ 102 podcast where he discussed the company's efforts to develop a structure for completing tasks via a network of AI models.

He described it as "big models orchestrating small models": larger models creating up to hundreds of smaller, faster, and more efficient models to perform specific tasks.
Amodei's vision of the future of AI looks like a typical corporate infrastructure, with Claude (Anthropic's model) at the top.

Below that would be several foundational models trained in broad domains like math, programming, analytics, emotions, sentiment analysis, and so on. Further down the hierarchy we would find even more specialized models dedicated to specific tasks. And at the bottom we would find entry-level, one-off models designed for short-term use.
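
To make that concrete, here is a minimal Python sketch of what "big models orchestrating small models" could look like. Fair warning: the class names, the routing logic and the example domains are my own illustrative inventions, not Anthropic's actual architecture.

```python
# Hypothetical sketch of "big models orchestrating small models".
# This does NOT reflect Anthropic's real implementation; it only
# illustrates the corporate-style hierarchy described above.

from dataclasses import dataclass, field


@dataclass
class SpecialistModel:
    """A small, fast model dedicated to one narrow task domain."""
    domain: str

    def run(self, task: str) -> str:
        # A real system would invoke a fine-tuned model here.
        return f"[{self.domain} specialist] handled: {task}"


@dataclass
class Orchestrator:
    """The big model at the top: it decomposes work and delegates."""
    specialists: dict = field(default_factory=dict)

    def route(self, task: str, domain: str) -> str:
        # Spawn an entry-level, one-off specialist when none exists,
        # mirroring the bottom layer of the hierarchy described above.
        if domain not in self.specialists:
            self.specialists[domain] = SpecialistModel(domain)
        return self.specialists[domain].run(task)


claude_like = Orchestrator()
print(claude_like.route("verify this proof", "math"))
print(claude_like.route("summarize this contract", "legal"))
```

Just a toy, of course, but it shows why the "corporate takeover" metaphor fits: the top model manages, the bottom models do the piece work.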

Amodei's take on AGI is a bit more cautious than Altman's (read: The AGI hype = Bored Apes).

He talks about the importance of AI safety and algorithmic transparency.

Personally, I think that is a whole lot of baloney.

Because AI is a black box we can't seem to open.

If we can't even understand how our current NNs work, how would we be able to grasp how AGI works?

He believes that as we move closer to developing AGI, we need to let these systems act in a way that aligns with our human values to prevent potential risks.

Now, that's quadruple baloney. Humans do not inherently have any values; morals and values are social constructs.

And an AI does not have any friends.

What it boils down to is that Amodei's focus is on making AI more interpretable and controllable. He wants rigorous research to understand the behavior of AI (read the black box article) and to build systems that are reliably aligned with human goals, because the risks of an unaligned AGI could be significant.

But we need to agree upon what defines AGI first before we can speak of its risks.

OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work". Sam Altman suggests AGI is a continuum of progress rather than a single moment.


Is AGI the same as consciousness?

When I talk about AI and AGI, people sometimes ask me, "Is that the same as being conscious?"

Spoiler alert: Not really.

[Skip to the next topic....]

Marc Wittmann (a psychologist at the Institute for Frontier Areas of Psychology and Mental Health in Freiburg) believes AI can't be conscious because it's not alive.

You can unplug a computer, store it for a hundred years, and turn it back on like nothing happened. Humans, on the other hand, are always changing.

This brings me to sentience: the ability to experience and be aware of sensations.

Wittmann thinks that consciousness is about experiencing time and change, not just running algorithms.

HAL 9000 from 2001: A Space Odyssey might seem conscious, but without biological processes, it's just a good actor.

Yet on the opposite end are futurists like Ray Kurzweil, who think that AI will reach human-level intelligence by 2029 and consciousness soon after. Kurzweil believes that enough sophistication will lead to consciousness.

I think this is like saying that a fast enough plane will turn into a spaceship. The problem is the fundamental difference between machines and living organisms.


Multiple levels of consciousness

If an AI says, "I think, therefore I exist", should I take that seriously?

The AI of course borrowed it from Descartes' famous "I think, therefore I am". But does an AI saying that it is conscious mean it actually is?

AI consciousness is a complex, multi-layered issue.

It is really hard to pin down what consciousness actually is.

When we say a machine is "intelligent", we might mean that it can solve problems, and learn, or even make decisions.

But consciousness?

That's a whole different ball game.

Let's do a Gedankenexperiment [I just like this term.....]

You are at a party, and there is a person who knows everything about everything. 

Now, that same person starts reflecting on their own existence and the meaning of life. You probably have a friend who does that on occasion (with or without alcohol).

Now we're talking consciousness. 

Intelligence is about learning and applying knowledge; consciousness involves awareness, experience, and self-reflection. AGI could theoretically ace a math test, but it wouldn't have a panic attack about the results.        

Melanie Mitchell says AGI is more of a long-term ambition than a clear, present reality.

Experts like Joanna Bryson think that AGI hype creates a weird divide between "serious" AI work and "AI lite". Gary Lupyan, a cognitive neuroscientist, claims that defining "intelligence" is like nailing Jell-O to a wall. He thinks AI researchers are too confident about measuring machine intelligence, and the concept of "general intelligence" is slippery. Google DeepMind tried to bring order with six levels of AI intelligence, but even this leaves many questions unanswered.

Google DeepMind's six levels of AI intelligence:

1. No AI: Systems that do not exhibit any artificial intelligence capabilities. These are purely mechanical or rule-based systems with no learning or adaptation.

2. Emerging AI: Systems that have basic AI functionalities, such as simple pattern recognition or data classification and are limited to very narrow and specific tasks.

3. Competent AI: AI systems that perform at least as well as the median skilled adult (50th percentile) in certain tasks. That would be the level of AI we were at in 2023.

4. Expert AI: AI systems that match or surpass skilled human performance in specialized domains. They lack generalization beyond those fields. That is the level we are at in 2024.

5. Virtuoso AI: AI systems that demonstrate exceptional intelligence across a wide range of tasks, beyond expert-level performance in specific fields. Think IQ 150+, PhD level in every domain. That is what most people think of when referring to AGI.

6. Superhuman AGI: Highly autonomous systems that outperform humans in almost all economically valuable work and demonstrate abilities that are waaaay beyond the average human in both specific and general tasks.

In their framework, Morris and colleagues at Google DeepMind focused on practical demonstrations of AI capabilities rather than underlying mechanisms.

According to their proposal, large language models like ChatGPT and Gemini qualify as "emerging AGI", because they are performing "equal to or somewhat better than an unskilled human" at a "wide range of nonphysical tasks, including metacognitive tasks like learning new skills."

DeepMind's 6 levels of AI capabilities
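
For the programmers among us, the whole ladder fits in a few lines of code. A caveat: the level names follow DeepMind's framework, but the percentile thresholds below are my own reading of the Morris et al. paper (performance as a percentile of skilled adults outperformed), so treat them as approximate rather than official.

```python
# DeepMind's six capability levels as a small enum plus a toy
# classifier. The thresholds are my approximate reading of the
# Morris et al. "Levels of AGI" paper, not official values.

from enum import Enum


class AILevel(Enum):
    NO_AI = 0
    EMERGING = 1
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5


def classify(percentile: float) -> AILevel:
    """Map performance (percentile of skilled adults outperformed) to a level."""
    if percentile >= 100:
        return AILevel.SUPERHUMAN  # outperforms every human
    if percentile >= 99:
        return AILevel.VIRTUOSO
    if percentile >= 90:
        return AILevel.EXPERT
    if percentile >= 50:
        return AILevel.COMPETENT
    if percentile > 0:
        return AILevel.EMERGING    # roughly unskilled-human level
    return AILevel.NO_AI


print(classify(55))    # AILevel.COMPETENT
print(classify(99.5))  # AILevel.VIRTUOSO
```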

Buuuuuut, this definition too leaves a lot of room for unresolved questions: which exact tasks to use for evaluation, how to distinguish "narrow" from "general" systems, and how to establish human skill-level benchmarks.

Etc. Yadayadayada

And that is because both intelligence AND consciousness cannot really be defined; they are inherent properties of life.

And we do not understand that either.

So Morris wisely acknowledges that determining the correct tasks for comparing machine and human skills remains "an active area of research".

Human tests like the SAT or the bar exam (which ChatGPT aced) do not distinguish between an AI that just regurgitates training data and one that shows genuine learning and adaptability.

Performance on specific benchmarks does not reflect the ability to apply knowledge and skills in real-world contexts.
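
One way to make that criticism concrete is a paraphrase probe: score the model on benchmark items as published, then on lightly reworded versions that test the same skill. A large drop suggests regurgitated training data rather than transferable ability. Below is a toy sketch of the idea; model_answer is a hypothetical placeholder for whatever model you would actually query.

```python
# Toy paraphrase probe. If accuracy drops sharply on reworded items,
# the original benchmark score likely reflected memorization rather
# than genuine skill. `model_answer` is a hypothetical stand-in for
# a real model API call.

def model_answer(question: str) -> str:
    # Placeholder: swap in a real model call here.
    return "144"


def accuracy(items: list[tuple[str, str]]) -> float:
    correct = sum(model_answer(q).strip() == a for q, a in items)
    return correct / len(items)


original = [("What is 12 * 12?", "144")]  # item as published
paraphrased = [("Twelve multiplied by twelve equals what?", "144")]  # same skill, new wording

gap = accuracy(original) - accuracy(paraphrased)
print(f"contamination signal: {gap:+.2f}")  # large positive gap = regurgitation
```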


Is consciousness a fundamental part of the universe?

Brian Josephson, a Nobel Prize-winning physicist, suggests that consciousness is a fundamental part of the universe itself, and not just a side effect of neurons firing. He is not talking about "finding inner peace", but rather about consciousness being as essential as gravity or electromagnetism.

And I quite agree with him, but that is worthy of a whole article in itself.

This aligns well with the concept of panpsychism. Panpsychism says that consciousness is a basic property of all matter. Even your morning coffee might have a bit of awareness (though probably more concerned with getting cold than existential questions).

It is a radical idea that challenges conventional physics and makes us rethink what we know about reality.


Will AGI Skynet us?

For those not following The Terminator franchise, Skynet is a fictional, human-created, machine network that becomes self-aware and decides to destroy humanity. I don’t think this is cause for major concern.

While certain parts of the AGI definition (particularly the idea of AGIs creating future AGIs) are heading in this direction, and while movies like The Terminator show a certain view of the future, history has shown us that harm caused by technology is usually caused by intentional or accidental human misuse of the technology.

AGI may eventually reach some form of consciousness that is independent of humans, but it seems far more likely that human-directed AI-powered weapons, misinformation, job displacement, environmental disruption, etc. will threaten our well-being before that.


What can I do?

I believe the only thing each of us can do is to be informed, be AI-literate and exercise our rights, opinions and best judgement. The technology is transformative. What is not clear is who will decide how it will transform.

Along these lines, less than a month ago, U.S. President Joe Biden issued an executive order on AI, addressing a wide range of near-term AI concerns from individual privacy to responsible AI development to job displacement and necessary upskilling. While not targeted directly at AGI, these orders and similar legislation can direct responsible AI development in the short term—prior to AGI—and hopefully continuing through to AGI.

It is also worth noting that AGI is unlikely to be a binary event—one day not there and the next day there. ChatGPT appeared to many people as if it came from nowhere, but it did not. It was preceded in 2019 and 2020 by GPT-2 and GPT-3. Both were very powerful but harder to use and far less well known. While ChatGPT (GPT-3.5 and beyond) represented major advances, the trend was already in place.

Similarly, we will see AGI coming. For example, a Microsoft research team recently reported that GPT-4 has shown signs of “human reasoning,” a step toward AGI. As expected, these reports are often disputed, with others claiming that such observations are more indicative of imperfect testing methodologies than of actual AGI.

The real question is: What will we do about AGI before it arrives?


So, where does this leave us?

So, where are we now in the great AGI and consciousness debate? We’ve got scientists, philosophers, futurists, and physicists all weighing in, each with their own theories, predictions, and caveats. Some say AGI will change everything; others argue that without true consciousness, it’s just another tool—albeit a very sophisticated one. The truth is, we’re still in the early days of understanding both AGI and consciousness. What’s clear is that we need a multidisciplinary approach, bringing together the best minds in philosophy, science, and ethics to navigate the uncertain waters ahead.

In the end, AGI might not just be about creating machines that think like us but about understanding ourselves in the process. After all, maybe the real AGI was the friends we made along the way.


If you have come this far, you are a die-hard geek like I am. And I thank you from the bottom of my heart for sticking around with me for so long.

And I have something for you:


Solve this little math puzzle, and get a copy of my latest book ($35, Amazon)

"Exploring consciousness: a guide for AI students"


Here's another equation:

What is: 7×4?10+6=?

Solve it, and post your answer in the comments below and I'll contact you!


Thank you for sticking around!

Signing off - Marco


Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee.

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.


