End of the Human “Race”?
Go to your favorite blog or social site, pick up a newspaper, or watch the nightly news, and invariably you’ll see another piece about how an algorithm, a machine, or a robot has taken (or inevitably will take) yet another job.
This is due to the advance of sophisticated technologies such as robotics and artificial intelligence (A.I.). While A.I. is still very much in its infancy, a great deal of research and investment is being poured into the field.
Recently, a number of industry experts have weighed in with less than favorable predictions for our future.
For example, Tesla chief executive Elon Musk recently warned that artificial intelligence could be our biggest existential threat: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”
Stephen Hawking warns artificial intelligence could even end mankind: “[A.I.] could take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Even Bill Gates has apparently said that humans should be worried about the threat posed by artificial intelligence: “I am in the camp that is concerned about super intelligence.”
These industry veterans are not alone. Many others have weighed in too.
It is clear that the race against the machine is well underway.
Do we have reason to be concerned, or are we simply anthropomorphizing technology? Are we projecting fears dreamt up by Hollywood, or are they legitimate?
I do not share these views. On the contrary, I see a very bright future as machines become ever more “intelligent”.
While the human race is always at risk, I do not believe our demise will be at the hand of intelligent machines. I believe that artificial intelligence is part of the solution, not the problem.
I’m not convinced that our future will turn out as some have suggested. For consideration, let me submit a few thoughts on why the future of A.I. may not be as dire as some predict.
Throughout all of human history, we have been subject to the laws that govern evolution. Significant changes have typically taken millions of years, but as biology and technology become ever more intertwined, those laws no longer restrict us. Instead, we are now subject to the exponential laws of technology. Technology is enabling humans to do in a few years what the normal evolutionary process takes millions of years to accomplish.
Humans are crossing the threshold of discovery and are entering an era of self-evolution. We are rapidly learning how to repair, replace, and augment that which defines us.
Machines and technology will continue to grow at exponential rates (this has always been the case by the way), but now so will humans. No longer will we be limited by “slow” biological evolution.
I believe we underestimate how much we will augment ourselves with technology in the coming years.
It’s reasonable to be skeptical, yet it has already begun. Artificial hearts, artificial retinas, and cochlear implants, to name just a few examples, are widely accepted and increasingly mainstream. Nascent wearables have begun to usher in a new era of human augmentation, but this is just the beginning.
Over the coming years we will continue to augment and enhance ourselves with more and more sophisticated technology. Capabilities that seem impossible today could be commonplace tomorrow.
It is this augmentation and enhancement that closes the gap between humans and artificially intelligent machines.
I do not believe, however, that these changes make us less human. Ever since we picked up the first stick to extend our reach or amplify our strength, we have used tools and technology to augment ourselves. It is in our nature. We have always used technology to improve who we are. Why would the future be any different?
Our future is not one of pure biology; our future is one where technology and biology merge.
If intelligent machines were logical, why would they suddenly decide to annihilate the human race? I don’t think they will. I think this is a human emotion we are projecting onto technology. There is no evidence that machines would see us as a threat rather than an ally; we are simply biased by our own beliefs. Many assume that because computers will become intelligent, they will “think” just like we do. It is a mistake to expect that intelligent machines will think the same way as humans. They may in fact offer new insights and teach us how to think differently, not the other way around.
While I believe the end of the human race won’t come at the hands of humanoid-like robots that suddenly decide humans should be “deleted,” I’m also not complacent. I understand the risks of technology, and it’s important that we don’t allow our worst fears to become a reality, but excessive fear mongering is counterproductive. Instead, we should spend more time determining how A.I. can improve people’s lives and how it can help with the numerous challenges we face. There are lots of important problems to solve, and A.I. is on the cusp of helping humans address our challenges faster than ever before.
For all the alleged concerns about superhuman intelligence, there is zero evidence that our future is pre-ordained. To borrow, somewhat ironically, a line from a popular movie franchise (Terminator 2): “The future’s not set. There’s no fate but what we make for ourselves.”
So catch your breath, tighten your shoes, and rehydrate…the race is far from over.
So, what are your thoughts? Are we on a dangerous path, or do you see a future where humans and intelligent machines exist in harmony? Weigh in in the comments.