Will AI Match Human Intelligence? It's Not a Matter of If, But When.
Concept and design by Ryan David Rhea. Image by Playground.com

"The brain is the most complex thing we have yet discovered in our universe. It has been called an enchanted loom, where millions of flashing shuttles weave a dissolving pattern." -- Carl Sagan

Many people believe that human intelligence is so profound and uniquely special that it will never, could never, be matched by future artificial intelligence technology. They often point to consciousness and self-awareness, true excellence and originality in creativity, and the beauty imparted by the human "touch" in music, art, and writing as particular examples of things no machine could ever fully achieve on its own. They wax poetic about our capacity for emotional intelligence and the oft-used term "common sense," firmly believing that AI will almost certainly continue to fall short in these areas, amongst others. Are they right? Is the most complex, intelligent pattern-recognition and reasoning "machine" in the known universe, the human mind, simply untouchable in all its glory?

I would argue that the view that our brains will never be equaled or bested by machine intelligence deeply underestimates the incredibly fast progress of AI today and, at the same time, utterly fails to recognize our inherent human cognitive limitations. While narrow AI today is focused on very specific tasks, the computing power and sophistication of AI systems are growing exponentially, following a trend similar to, but faster than, Moore's Law, sometimes referred to as the "AI Compute Trend" or the "AI Scaling Law". What seems impossible for machines today will be trivial in just a few years' time. We've all seen the incredible leap in quality of AI-generated images and videos over a matter of months, and there is no doubt this progress will only accelerate in the coming months and years as language models and chipsets keep improving in power and capability with each compute doubling (which, at the time of this writing, happens roughly every 3.5 months!).
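To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python of what that doubling rate implies. The 3.5-month figure is simply the one cited above, taken as an assumption rather than a law of nature.

```python
# Back-of-the-envelope: cumulative growth if AI training compute
# doubles every 3.5 months (the figure cited above; an assumption).

DOUBLING_MONTHS = 3.5

def growth_factor(months: float) -> float:
    """Total multiplicative growth in compute after `months` months."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 2, 5):
    print(f"{years} year(s): ~{growth_factor(12 * years):,.0f}x the compute")
```

At that rate, compute grows roughly 11-fold per year and roughly 145,000-fold over five years, which is exactly why capabilities that look impossible today can look trivial a few years from now.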

Additionally, we must set aside our hubris, honestly evaluate human intelligence, and recognize just how far from perfect our own brains actually are. Humans are prone to cognitive biases, emotional manipulation, and irrational thinking in ways that future, well-designed AI may largely avoid. Without a doubt, our biological brains have inherent constraints. Yuval Noah Harari, in his book 'Homo Deus', suggests that the human mind is really nothing more than a series of layered algorithms, from simple to complex, capable of producing very complex outputs.

He also writes, "The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking. The current scientific answer to this pipe dream can be summarised in three simple principles:

1. Organisms are algorithms. Every animal – including Homo sapiens – is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution.

2. Algorithmic calculations are not affected by the materials from which the calculator is built. Whether an abacus is made of wood, iron or plastic, two beads plus two beads equals four beads.

3. Hence there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?"

Taking this into consideration when evaluating the trajectory of AI technology, what does this portend for our understanding of consciousness itself? Moreover, how might that play out with ever more advanced LLMs, AGI, and/or future biological, organoid neural nets?

With the inevitability of Artificial General Intelligence (AGI) on the horizon, we must consider a future where machine intelligence meets and surpasses the full range of human cognitive capabilities (including all the touchy-feely ones). This is not to discredit or downplay human intelligence but to suggest that AGI will be the next leap forward in superintelligence on the planet (barring a sudden visit from distant aliens). In an article recently featured by Futurism, computer scientist and SingularityNET founder Ben Goertzel, well known for his work on the humanoid robot "Sophia", made a bold prediction: he believes there is an outside chance of AGI being achieved within the next five years, possibly even by 2027. This is a significantly shorter timeline than many other experts in the field have predicted.

By combining different approaches, such as deep learning, evolutionary learning, and probabilistic reasoning, among others, it may well be possible to create an AI system that can learn and reason in a more general and flexible manner, akin to human intelligence. While we certainly must acknowledge the challenges and uncertainties involved in creating AGI, predictions like these serve as a reminder of the rapid advancements in the field and the potential for transformative breakthroughs in the near future.
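As a deliberately tiny illustration of that hybrid idea, the Python sketch below uses evolutionary search (selection plus mutation, no gradients) to train the weights of a small neural network on the classic XOR problem. Every name, size, and constant here is illustrative; this is a toy, not a blueprint for any real AGI system.

```python
import math
import random

random.seed(0)  # make this toy run reproducible

# The four XOR cases: inputs and target outputs.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(weights, x):
    """A 2-2-1 network: two tanh hidden units, one sigmoid output (9 weights)."""
    w = iter(weights)
    hidden = [math.tanh(next(w) * x[0] + next(w) * x[1] + next(w)) for _ in range(2)]
    out = next(w) * hidden[0] + next(w) * hidden[1] + next(w)
    return 1 / (1 + math.exp(-out))

def loss(weights):
    """Squared error over the four XOR cases."""
    return sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=300, sigma=0.5):
    """Evolutionary learning: selection plus mutation over weight vectors."""
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)               # selection: best genomes first
        parents = pop[: pop_size // 5]   # keep the top 20% unchanged
        children = [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]                                # mutation: jitter a parent's weights
        pop = parents + children
    return min(pop, key=loss)

best = evolve()
for x, y in XOR:
    print(x, "->", round(forward(best, x), 2), "(target:", y, ")")
```

The point is not XOR itself but the division of labor: the network supplies a flexible function approximator, while evolution supplies a learning method that needs no gradients. Richer combinations of such components, at vastly greater scale, are one plausible path toward more general systems.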

I believe this will ultimately lead to a race for enhanced biological intelligence so that we can keep up with the machines. I see very little advantage to uploading copies of our minds into the machines themselves (à la Kurzweil's classic 'The Age of Spiritual Machines') versus creating that "machine" from ourselves: hardware implants, biologics like organoids, gene augmentations with technologies like CRISPR, and other biological and technical enhancements we can scarcely imagine today. I don't think brain-computer interface (BCI) hardware like Musk's Neuralink will be the only way forward; there are too many obstacles to, and inherent dangers in, routinely implanting hardware into healthy people's brains for a Neuralink-style technology to work as the end solution all on its own. For now, our untouched, organic brains are still the most advanced and energy-efficient computers we know of, so why not start there (organically) instead of in the server room or via hardware in the brain? Imagine someday using CRISPR and organoids to create organic computers (wetware) within human bodies based on the host's own DNA… using DNA processors, DNA storage (already capable, in principle, of storing vast amounts of data), and so on. What if someday we could harness all the cells in our bodies (like a cellular Dyson sphere) to create a supercomputer directly connected to our brains?
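On that DNA-storage aside: the density figure often quoted in the literature is easy to re-derive. The short Python sketch below computes the theoretical ceiling at 2 bits per nucleotide; real demonstrations to date store far less, so treat this as an upper bound, not an engineering spec.

```python
# Back-of-the-envelope: theoretical information density of DNA storage.
AVOGADRO = 6.022e23        # molecules per mole
NT_MOLAR_MASS_G = 330.0    # approx. grams per mole of a single-stranded nucleotide
BITS_PER_NT = 2            # each base (A, C, G, or T) encodes 2 bits

nucleotides_per_gram = AVOGADRO / NT_MOLAR_MASS_G
bytes_per_gram = nucleotides_per_gram * BITS_PER_NT / 8
print(f"~{bytes_per_gram / 1e18:.0f} exabytes per gram (theoretical ceiling)")
```

With these rounded constants the script prints about 456 exabytes per gram, in line with the ~455 EB/g figure commonly cited as DNA's theoretical storage density.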

Current resistance to germline DNA editing, cognitive augmentation, hardware augments, and general transhumanism will largely disappear once the AGI threat arrives and truly reveals itself (the equivalent incentive, perhaps, of that of giant UFOs suddenly appearing out of nowhere, parked over major cities).

But until we can ensure human parity with the machine intelligence that's coming, we should devote our efforts to ensuring that AGI systems are developed in alignment with our values and interests. Closing our eyes to this approaching reality is not an option, yet many skeptics cry "fanboyism" and dismiss the whole field as industry hype, as if AI itself were just a bubble or a money-grabbing farce not to be taken seriously. Certainly there are profiteers, apologists, and plenty of people hyping this technology. But the people who believe AI is somehow a silly pipe dream are in for some large and disappointing surprises in the years to come. AGI is coming sooner or later, and we won't be prepared to meet it cognitively at its own level, so we must do everything we can to ensure we are as safe from it as possible. Even under the best of conditions, however, this incredibly important task of ensuring our safety from an emerging, superintelligent AGI may not even be achievable.

Mustafa Suleyman, DeepMind co-founder and prescient tech soothsayer, writes in his book 'The Coming Wave', "Humans dominate our environment because of our intelligence. A more intelligent entity could, it follows, dominate us. The AI researcher Stuart Russell calls it the 'gorilla problem': gorillas are physically stronger and tougher than any human being, but it is they who are endangered or living in zoos; they who are contained. We, with our puny muscles but big brains, do the containment."

He goes on to write, "By creating something smarter than us, we could put ourselves in the position of our primate cousins. With a long-term view in mind, those focusing on AGI scenarios are right to be concerned. Indeed, there is a strong case that by definition a superintelligence would be fully impossible to control or contain. An 'intelligence explosion' is the point at which an AI can improve itself again and again, recursively making itself better in ever faster and more effective ways. Here is the definitive uncontained and uncontainable technology. The blunt truth is that nobody knows when, if, or exactly how AIs might slip beyond us and what happens next; nobody knows when or if they will become fully autonomous or how to make them behave with awareness of and alignment with our values, assuming we can settle on those values in the first place."

I believe a benevolent, democratically accessible, safeguarded AGI has immense potential to help solve currently intractable problems and push forward human knowledge and capabilities (including the aforementioned advancements in biologics to help equalize us with the machines by means of our own native, biological superintelligence). But realizing that cognitive potential in ourselves, while mitigating the risks of artificial intelligence in the meantime, is still a long way off and will require dedicated work and collaboration between human and machine intelligence. The era of AGI is coming faster than many anticipate. Will we be able to mitigate the risks successfully? Will we be ready? Can we be ready?
