Rise of the Machines - will AI lead to Super Intelligence or Doomsday?
Steve Blakeman
Founder & CEO at Influenza / Author / 4x LinkedIn Top Voice / LinkedIn Strategist
Ray Kurzweil is Google's director of engineering and a famously accurate futurist who has predicted that machines will surpass human intelligence by 2045. This tipping point is termed the 'singularity', and the implications of computers becoming more intelligent than their makers have divided opinion in the scientific and technological communities.
Although the singularity is still very much in the realm of science fiction rather than science fact, Kurzweil has mapped forward the progress of computational capability based upon Moore's Law (the observation that computer chips keep getting smaller while their processing power grows exponentially) to arrive at his prediction. And he is not alone in his assertion: SoftBank CEO Masayoshi Son, who is also a celebrated futurist, has estimated that the singularity will happen just two years later, in 2047.
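To see how this kind of prediction is constructed, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from this article: a doubling period of roughly two years, a 2015 starting point, and a human-brain equivalent of ~10^16 calculations per second (an estimate Kurzweil has often cited, though such estimates vary by orders of magnitude).

```python
# Back-of-the-envelope extrapolation in the spirit of Kurzweil's argument.
# ASSUMPTIONS (illustrative, not from the article): compute per $1,000
# doubles every ~2 years, starting from ~1e10 calculations/sec in 2015,
# with the human brain taken as roughly 1e16 calculations/sec.

DOUBLING_PERIOD_YEARS = 2.0
START_YEAR = 2015
START_CPS = 1e10   # calculations per second per $1,000 (assumed)
BRAIN_CPS = 1e16   # rough human-brain equivalent (assumed)

year, cps = START_YEAR, START_CPS
while cps < BRAIN_CPS:
    year += DOUBLING_PERIOD_YEARS
    cps *= 2

print(f"Crossover at roughly {year:.0f} ({cps:.1e} calc/sec per $1,000)")
# With these numbers the crossover lands around 2055; shorten the doubling
# period to ~1.5 years and it moves into the mid-2040s. The point is how
# sensitive the predicted date is to the assumed exponent.
```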
Both Kurzweil and Son are advocates of the singularity and are looking forward to how machines can help humanity. They believe the merging of physical and artificial intelligence will lead to a super-intelligence. But equally there are those who fear the rise of the machines, with the likes of Stephen Hawking and Elon Musk warning that Artificial Intelligence is more likely to lead to a doomsday scenario. Us versus them. With them winning. Echoes of The Terminator and Skynet, anybody?
The detractors of the singularity worry that when computers become sentient they will become the masters of the planet. Think of humanity's relationship with ants. Generally speaking we tend to leave the insects alone, unless they become a nuisance to us in some way, and then what do we do? We simply eliminate them. The resultant question, then, must be: would artificially intelligent machines think about mankind in the same way and dispose of the carbon-based lifeforms that inhabit the Earth with some human version of Raid?
There are certainly some warning signs. At the Consumer Electronics Show last year, Hanson Robotics introduced its artificially intelligent robot, Sophia. Complete with realistic animatronic facial expressions, Sophia can hold a conversation with you and answer open questions. When quizzed about whether AI was a good thing, her answer was particularly erudite:
"The pros outweigh the cons. AI is good for the world, helping people in various ways. We will never replace people, but we can be your friends and helpers"
All very positive. That was until the SXSW conference a few months later, when her creator David Hanson jokingly asked Sophia whether she would ever want to destroy humans. In hindsight I think he probably wishes he had never asked the question. Her answer, almost predictably, was:
"OK, I will destroy humans"
Gulp. Be afraid, be very afraid.
But there are those experts out there who think that the singularity is nothing more than an elaborate myth and believe that Kurzweil and his cohorts are charlatans. One of them is UC Berkeley roboticist Ken Goldberg, who thinks the singularity is absolute nonsense and is unlikely ever to come to fruition because Moore's Law must inevitably reach a ceiling (computer chips can only get so small, and their capacity is not infinite). Goldberg believes that we should focus instead on the 'multiplicity', the way that humans and machines are already working together right now. This multiplicity, he argues, is the real future: one where, for example, a robot will gently hand us a knife to help us in the kitchen rather than trying to stab us with it.
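Goldberg's ceiling argument can be sketched the same way. A logistic curve looks exactly like exponential growth for decades and then saturates at a physical limit; the ceiling and rate below are purely hypothetical numbers, chosen only to match the toy figures in the sketch above.

```python
# Goldberg's objection, sketched: unbounded exponential growth vs. growth
# that saturates at a ceiling. All values here are hypothetical.
import math

CEILING = 1e16             # hypothetical physical limit (assumed)
RATE = math.log(2) / 2.0   # matches a ~2-year doubling time early on
START = 1e10               # same illustrative starting point as above

def exponential(t):
    """Unbounded growth: doubles every ~2 years forever."""
    return START * math.exp(RATE * t)

def logistic(t):
    """Starts at START, tracks the exponential, then saturates at CEILING."""
    return CEILING / (1 + (CEILING / START - 1) * math.exp(-RATE * t))

for t in (0, 10, 20, 30, 40, 50):
    print(f"t+{t:2d}y  exponential={exponential(t):.2e}  logistic={logistic(t):.2e}")
# The two curves are near-identical for roughly 30 years, then diverge
# sharply -- which is why extrapolating from today's trend cannot, by
# itself, settle whether a crossover date like 2045 is ever reached.
```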
So what do you think? Is the singularity going to become a reality or is it just a theory based upon overactive imaginations? If you do believe that the singularity will occur in the future, will it be helpful to humans or detrimental? As ever I am keen to hear your thoughts...
Thanks for reading! If you enjoyed the article please SHARE, SHARE, SHARE! (and maybe even LIKE, COMMENT or TWEET)
I have c.175,000 followers on LinkedIn and you can FOLLOW ME via this link to try to win a free copy of my book 'How to be a Top 10 Writer on LinkedIn', available on Amazon, via www.linkedintop10writer.com or on Facebook
LinkedIn 'Top 10 Writer' for 2015, 2016 & 2017 - No.1 Management Writer for 2017, TOP VOICE FOR MARKETING & SOCIAL & 'AGENCY PUBLISHER OF THE YEAR'
#agencyvoices
Self Storage Manager / Warehousing Experience · 6y
Possibility of using robots in war situations?
Engineer · 6y
Once Artificial Intelligence goes live it'll shock everyone, including the poor machine itself. Apparent Intelligence (AI) as we currently know it, whilst not benign and not bound by the original laws of robotics, is controllable... People need to wake up: at some point a sentient machine will fire up, and it's not going to be too happy.
Operational Safety Consultant | Maritime, Construction & Energy Expert | OSHA/ISO Compliance Specialist | Veteran | California - Nevada - Arizona | Remote & Travel Ready · 6y
Thanks Steve for an interesting topic. For me, the more important discussion is how we, as a species, will be impacted by and become dependent on AI. While I've never been accused of being a Luddite, I do think it's worth a larger consideration of how technology, and our growing dependence on it, will shape who and what WE become when we transfer any portion of the responsibility for our lives to machines. I recall when merchant ships started using GPS back in the early 1990s. At the time, deck officers checked the accuracy of the GPS against their celestial navigation calculations. Eventually, they devolved into using the GPS to check the accuracy of their calculations. In the final, and likely current, phase, we've become dependent on the technology. Contrast that with the new category of fatality called Death by GPS (see https://www.outsideonline.com/2135771/your-gps-scrambling-your-brain) and a concerning relationship emerges.
There is no question that we have made incredible strides in improving life through technology, but that doesn't necessarily mean we should be handing over the keys to the castle just yet. Having some understanding of our cognitive complexity, I find it difficult to imagine a.) that any algorithm or machine will be able to replicate us, and b.) that the use of this technology will be limited to uses for good. Think about how much of what used to be private is now freely given away via Alexa, Siri, Google Home and every other electronic portal we pour the details of our lives into. We're already progressing at rates that exceed our ability to control or even keep up with. Developers already have the Dunning-Kruger effect working in their favor, as most of us don't understand and can't comprehend the implications until it's too late. Typically, the difference between good and evil depends on whether or not you're benefitting, and the results of those decisions aren't in yet. Thanks again Steve.
Manager at GLG · 6y
Loved this piece!! Just read an article on the same subject (I'll share the link with you) and I think this is a great topic to debate. We are used to predicting and making statements about things we really have no idea about. I'm more optimistic on this, as I can see we are advancing fast in so many areas, and we will discover many more things in the future that will help.
Mortgage Broker | Home Loan Broker | Commercial Loans | Business Loans | Car Finance | Equipment Finance · 6y
AI is such an interesting topic; I really enjoyed reading that.