AI Hype Or Reality: The Singularity – Will AI Surpass Human Intelligence?

Thank you for reading my latest article AI Hype Or Reality: The Singularity – Will AI Surpass Human Intelligence? Here at LinkedIn and at Forbes I regularly write about management and technology trends.

To read my future articles simply join my network by clicking 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Podcast or YouTube.


Artificial intelligence (AI) is going to change the world – but there’s still a lot of hype and hot air around it!

Understanding what’s fact, what’s fiction, and what’s marketing spiel is essential if you want to take advantage of it.

One claim that's being made with increasing frequency is that machine intelligence will, at some point – perhaps soon – surpass human intelligence.

This is known as the “singularity” and is important for many reasons – chief among them being that it will mark the first point in human history that we will share the planet with entities that are smarter than us!

But is it really likely to happen? Or is it just a concept created by technologists and futurists to sell us a vision of the future where they end up making a lot of money? Let’s take a look!


What Is The Singularity And How Far Off Is It?

The concept of the singularity was popularized by science fiction author Vernor Vinge in the 1990s. It stems from the idea that once machines are able to learn for themselves, they will inevitably, at some point, surpass humans in every way that we are able to benchmark intelligence.

There are many facets of what we call “intelligence”. Two of them are memory and the ability to calculate, and machines have long outpaced us at both of these. Now, with the emergence of new forms of AI, they are becoming creative, communicative, capable of advanced language skills, and even reasoning, problem solving and emotional intelligence.

Proponents of the singularity theory point out that machine intelligence has the potential to accelerate exponentially. As AI gets smarter, it will be able to design even smarter AI by itself, with no need for input from us.
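The “exponential acceleration” argument is easy to see in a toy model (this is purely illustrative and not from the article: the gain rate and the number of generations are made-up assumptions). If each generation of AI improves the next by a factor proportional to its own capability, growth is geometric:

```python
# Toy model of recursive self-improvement: each generation's capability
# grows by a fixed proportion (gain_rate) of the current capability,
# producing geometric (exponential) growth. All numbers are illustrative.

def self_improvement(initial: float, gain_rate: float, generations: int) -> list[float]:
    """Return capability levels for `generations` rounds of self-improvement."""
    levels = [initial]
    for _ in range(generations):
        levels.append(levels[-1] * (1 + gain_rate))
    return levels

# With a hypothetical 50% gain per generation, capability multiplies
# by 1.5 each round: 1.0 -> 1.5 -> 2.25 -> 3.375 -> ...
trajectory = self_improvement(initial=1.0, gain_rate=0.5, generations=5)
```

The point of the sketch is simply that even a modest constant gain per generation compounds quickly, which is why proponents argue the transition could be abrupt rather than gradual.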

They also point out that it could have highly unpredictable consequences. Can we trust it to always have our best interests at heart? Will it develop feelings of superiority or self-preservation that will make it a risk to us? Many very intelligent people think this is a real danger!

On the other hand, a more optimistic view is that it could usher in an era of unprecedented technological advancement, as super-smart computers come up with solutions to every problem facing the world, from the environmental crisis to curing cancer. Some believe that it may be able to make us immortal .

The singularity may not arrive tomorrow, but it may not be that far off, either. Renowned futurist Ray Kurzweil predicts that it may arrive between 2029 and 2045, based on the current rate of progress being made by AI research and factoring in concepts like Moore's Law.
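To give a feel for what a Moore's-Law-style assumption implies for those dates, here is a back-of-the-envelope sketch (my own illustration, not Kurzweil's calculation; the 2025 baseline and the two-year doubling period are assumptions):

```python
# If effective compute doubles roughly every two years (a Moore's-Law-style
# assumption), how much more compute is available by Kurzweil's 2029 and
# 2045 dates, relative to a 2025 baseline? Purely illustrative arithmetic.

def compute_multiple(start_year: int, end_year: int, doubling_years: float = 2.0) -> float:
    """Growth factor assuming one doubling every `doubling_years` years."""
    return 2 ** ((end_year - start_year) / doubling_years)

for target in (2029, 2045):
    print(f"{target}: ~{compute_multiple(2025, target):,.0f}x the compute of 2025")
```

Under these assumptions, 2029 offers only about 4x today's compute while 2045 offers about 1,000x, which is one reason forecasters who tie intelligence to raw compute arrive at such widely spread dates.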

Others, including Rodney Brooks, founder of iRobot and former head of computer science and AI at MIT, think this is unlikely. Their argument is that the computing power needed to realize super-human intelligence is still centuries away.

And some, for example the psychologist Steven Pinker, doubt it will ever happen, arguing that the fact we can conceptualize it happening is no evidence that it ever can or will become a possibility!


The Last Mile?

Given the recent advances we’ve seen in AI – most notably the emergence of generative AI tools like ChatGPT and DALL-E 2 – it might seem to many of us that we are not very far from the singularity at all.

But even if this is the case, there are still some very challenging hurdles that need to be overcome before it can become a reality.

It’s true that current systems have capabilities that just a few decades ago would only have existed in science fiction. For example, engaging in conversations, writing poetry, outperforming humans at complex games like Go and accurately describing what they’re able to “see” thanks to computer vision.

But today’s AI is still “narrow” – usually designed for one particular task or set of tasks. A major milestone on the road to super-human intelligence will be the development of artificial general intelligence (AGI), which is capable of taking what it learns about one task and applying it to learn how to do many different tasks, much like we can.

There are also technical challenges remaining. Computational resources far beyond those available today will be needed to train machine learning algorithms that can “think” in as broad a range of ways as human beings can.

Humans are also hugely more efficient when it comes to processing data. Machines need huge volumes of information to learn even relatively simple jobs, whereas we can pick up the basics of many tasks simply by watching them being performed once or twice, thanks to our ability to think “generally” and apply a method of thinking that we refer to as "common sense." This comes from the implicit knowledge we have of the world and how it works.

Of course, it’s probably foolish to think that these last-mile hurdles are in any way insurmountable, particularly given what’s been achieved so far. But it’s far from certain that they will be cracked any time soon!


So, Should We Be Preparing For The Singularity?

Though the timelines may be uncertain, it seems prudent that we should be making preparations, given the potentially seismic consequences.

One of the most obvious guardrails that should be in place is taking action to ensure that AI will always act in alignment with human values. This will involve making sure that it understands concepts such as respect for the sanctity of life, freedom, tolerance and diversity. Limiting the ability of AI to cause harm through bias, unethical decision-making, or rampant profiteering on the part of businesses or governments that deploy it is an essential step.

At the same time, measures should be put in place to mitigate the societal harm that could be caused by factors such as job losses to AI. This might involve encouraging companies that are replacing human staff with machines to invest in reskilling and retraining staff for alternative roles, and exploring economic policies such as universal basic income.

While it’s far from certain when, or indeed if, the singularity will occur, the stakes are high enough that we should be treating it as a very real possibility. By keeping alert to the risks and ensuring safety, transparency and accountability are central to AI implementation strategies, we have the best chance of ensuring AI evolves in a way that will benefit humanity rather than harm it.



About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.

He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world. Bernard’s latest book is ‘Generative AI in Practice’.


Ann M. Murphy, PMP

Business Analyst

3 months ago

When AI can think for itself and have free will - That is when we should start to worry.

Syed Izhar Hussain

Director ICT: Digital Transformation: Agile Project Management: ICT Infrastructure: ERP Development and Deployment: Product Development: SAP: ITIL

3 months ago

Bernard Marr Unbiased Truth and Justice, will we be able to get it through AI?

Mohammad Taleghani

Associate professor of Industrial Management Department, Rasht Branch, Islamic Azad University(IAU) , Rasht, Iran

3 months ago

Humans tend to be superior to AI in contexts and at tasks that require empathy. Human intelligence encompasses the ability to understand and relate to the feelings of fellow humans, a capacity that AI systems struggle to emulate.


Gregory Rowe

Technical Advisor, Business Architect, Governance, CTO, Board Member | ServiceNow, BMC, ITIL, Agile, ERP | Collaborating between technical teams and leadership

4 months ago

Human to HAL: "Am I unique?"
