The World is not Ready for AGI

While AI intrigues and scares us, a case could be made that in 2021, it's not very 'intelligent' or 'artificial' at all. The holy grail of AI is a system that can learn on its own, independent of context.

While human-like artificial general intelligence may not be imminent, substantial advances may be possible in the coming years. Some scientists at DeepMind, the AI lab owned by Google, think AGI is achievable with reinforcement learning alone.

If you enjoy articles like these, sign up to the Last Futurist, my speculative blog where we explore AI, stocks, tech innovation and breaking news. Simply enter your email on the top right side of the home page. We are also seeking more columnists, so if you'd like to write, pitch me here.

Waiting for the Singularity (or the world to change)

We live in a world of algorithms and weak AI, but that won't be forever. Some AI researchers think by around the year 2060 we'll reach a singularity where AI will become smarter than we can imagine today.

At the Last Futurist we are not sure whether this pattern repeats on other planets that develop sentience, but in a remarkable twist of fate, the arrival of mature AI and seriously disruptive climate change may overlap considerably. Will a mature AI be able to help us deal with climate disruption?

In their decades-long chase to create artificial intelligence, computer scientists have designed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. Deep learning and Big Data are among the latest of these approaches, and their advocates argue that they will be able to realize AGI.

It's not entirely clear if AGI is possible within our lifetimes.

Given how technology improves in exponential waves, and with supercomputers now training transformer models, one wonders whether deep learning and reinforcement learning can make the breakthrough, or whether it will require something else entirely.

Is Reward Enough?

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at UK-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but from sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from the evolution of natural intelligence as well as from recent achievements in artificial intelligence.

Some scientists believe that assembling multiple narrow AI modules will produce more broadly intelligent systems. Could reinforcement learning alone be enough?

Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. It will be interesting to see how machine intelligence evolves, and how quickly it reaches some state of general intelligence, or at least something less narrow than today's weak AI.

Finally, the researchers argue that the “most general and scalable” way to maximize reward is through agents that learn through interaction with their environment. Hopefully DeepMind is doing this work safely; some of its papers lead to surprising conclusions.

In the paper, the AI researchers provide some high-level examples of how “intelligence and associated abilities will implicitly arise in the service of maximizing one of many possible reward signals, corresponding to the many pragmatic goals towards which natural or artificial intelligence may be directed.”
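To make the principle concrete, below is a minimal, hypothetical sketch of reward maximization through interaction: a tabular Q-learning agent on a toy five-cell corridor. The environment, reward values, and hyperparameters are illustrative assumptions of my own, not anything taken from the DeepMind paper; the point is only to show the loop of acting, observing a reward, and updating value estimates.

```python
# Illustrative sketch only: tabular Q-learning on a hypothetical five-cell
# corridor. Environment, rewards and hyperparameters are assumptions chosen
# for clarity, not anything from DeepMind's work.
import random

N_STATES = 5            # corridor cells 0..4; the reward sits at the right end
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated discounted return for every (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

def step(state, action):
    """Toy dynamics: move within the corridor; reward 1 only at the far right."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current value estimates, sometimes explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: move the estimate toward reward + discounted best future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The learned greedy policy steps right from every non-terminal cell,
# purely because that is what maximizes the reward signal.
print({s: greedy(s) for s in range(N_STATES - 1)})
```

Nothing in this toy was told what "good" behaviour looks like; competent behaviour falls out of chasing the reward signal, which is the intuition the DeepMind authors are scaling up.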

Artificial General Intelligence is a Pandora's Box

Inventing AGI could be a Pandora's box that humanity might not be ready for.

We live in a world where the legal and regulatory framework isn't even well adapted to the internet, never mind weak AI or AGI. Hopefully AGI is pushed to the later part of the 21st century, if it arrives at all. At the Last Futurist we fear humanity is not ethically or socially ready for anything approaching general intelligence in our AI development.

• Recently Grimes (Canadian musician Claire Elise Boucher, Elon Musk's partner) joked in a TikTok video that "AI is actually the fastest path to communism." It's not entirely a joke, since China is the best bet to invent AGI and then globally export its AI-governed version of a social credit system as a more 'advanced' form of surveillance capitalism.

Given society's deeply entrenched wealth inequality and lack of laws regulating AI ethics, algorithms, the internet, biotechnology, CRISPR, gain-of-function research and so forth, it's doubtful the Earth is ready for AGI.

For our safety and to minimize extinction events, we must hope that AGI is far off in our distant future and not something that might spontaneously occur soon.

Nicholas Stuart

"" Love me or hate me, both are in my favor. If you love me, I will always be in your heart. If you hate me, I will always be in your mind." William Shakespeare.

3y

Diana Pederson, thank you. Speaking as an average human being who is sometimes bamboozled by a television remote, and left more confused by the dialogue carried on between my cellular and my desktop systems, I say slow down and rethink what we are stepping into. Don't get me wrong, I like robots, androids, AI and such, but when these creations of ours can outthink you in a game of chess it gets aggravating, and I begin to feel like that ape man in 2001: A Space Odyssey beating the keyboard with a bone. Truthfully, I see the greatest advancement for AI being in medical applications, as we are generally an aging (and in many cases obese) population in all high-tech countries: China*, America, Japan, Europe and the Commonwealth countries. *China more aging than obese.

Diana Pederson

Founder and CEO at Dragonfly MedTech

3y

What are we looking to solve with AGI/AI? It too often seems to be an end in itself because it is cool. Once made, it is available to everyone - those who have good intentions/use, and those who have bad intentions. That is where it gets creepy. There have been a number of interesting movies that explore the implications of how AI and robotics may be applied and their adverse impacts on society.

Paul Franzen

A resourceful and performance-driven executive with a solid history of transforming and driving engineering and operational excellence for business growth, with optimal efficiencies across the organization.

3y

Interested

Robert Williams

[email protected] Secure your enterprise by hiring the best talent. #Cybersecurity #Consultancy #Recruitment #Training #DataScience #SoftwareEngineering #Agile #Scrum #BigData #ETL #AI #Telecoms

3y

The Singularity by 2060? I was reading Ray Kurzweil's predictions and he has the Singularity at 2045. I know it's all a bit academic, but is nobody questioning the assumption that there will be a Singularity at all?

Michael Valliant

Test and measurement automation developer with strong product bring-up experience in both R&D and production environments.

3y

When the AI takes over the garbage man's job, there will still be a guy on that truck, or maybe in a comfortable office, to hit the E-stop. The AI future looks more like the Jetsons than Terminator. When the singularity comes, you will recognize it when you see the look on your grandkids' faces as you go on about shopping carts and steering wheels you had to touch with your hands...
