Will Artificial Intelligence Kill Us?
R. Shawn McBride
The Planning Done Right Guy(TM) | Business nerd | Host of the Future Done Right(TM) Show | Business Ownership Attorney
In 2016 Microsoft released “Tay,” a chatbot that, within hours, went from innocent conversationalist to a racial-slur-slinging artificial intelligence machine. Observers were shocked to see a project that started as an innocent effort to engage in discussion with users turn into a reflection of the worst of humanity.
Why would a neutral chatbot, designed with the best of intentions, turn into something that reflects the negative side of our human race? How can so many smart people work so hard to build a piece of code, presumably for the good of humankind, and get such unexpected results?
As shocking as “Tay” was, it isn’t the only time AI has gone in unexpected directions. On a growing number of occasions, computer programs designed to be neutral have shown unexpected and negative results when released into the real world (we’ll talk about a few of those cases later). It seems that again and again programmers think they can set the direction of their code and software to control outcomes, only to fail when the programs are deployed.
So can we build a type of artificial intelligence that is superpowerful yet safe for humanity? Will artificial intelligence kill human life as we know it?
Let's jump into the important question of whether artificial intelligence will kill us.
NOTE: For purposes of this article we are considering “artificial intelligence” or AI to be computer programs that can self-learn, self-adjust and self-code based on input received from the outside environment, namely human interaction.
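To make that definition concrete, here is a minimal sketch of the “self-adjusting” loop described in the note (hypothetical Python, invented for illustration, not any particular AI system): a program that changes its own behavior based on feedback from its environment.

```python
# A toy illustration of the self-learning loop in the note above.
# Hypothetical code for illustration only, not any real AI system.
def run_learning_loop(interactions, learning_rate=0.1):
    weight = 0.0  # the program's current, adjustable "belief"
    for user_input, feedback in interactions:
        prediction = weight * user_input               # act on the world
        error = feedback - prediction                  # observe the reaction
        weight += learning_rate * error * user_input   # adjust itself
    return weight

# Each pair is (input, response the environment rewards); the program
# converges toward whatever its users reinforce -- good or bad.
print(run_learning_loop([(1.0, 2.0)] * 50))  # approaches 2.0
```

The point of the sketch is the last comment: the program ends up wherever its inputs push it, which is exactly how “Tay” drifted once hostile users supplied the feedback.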
1. Some Experts Think So
A growing number of experts have expressed public concern about the release of artificial intelligence and what it will do to humanity. US defense expert Jay Tuck’s TEDx talk indicates that he believes artificial intelligence could possibly kill humanity. He specifically cites a number of instances where technology has potentially turned against humans, even in early experiments.
Elon Musk has also been loudly talking about the dangers of AI. Musk’s concerns are grounded in the fact that AI may be so much smarter than any single human, and we just don’t know what a computer will do with that aggregated data and processing power. In fact Musk, who is usually anti-regulation, has stated that oversight of artificial intelligence is necessary and that artificial intelligence is more dangerous to humankind than nuclear bombs.
Famous physicist Stephen Hawking had similar concerns about AI. He was on record stating that AI could be the best thing for humanity or the worst. Hawking’s concern seemed to be that we just don’t know what the computers would do or how they would be implemented.
Yuval Noah Harari, author of the famed book Sapiens and other titles about the evolution of humanity, expresses much concern about the role of technology in the future of humanity throughout his work. He’s gone so far as to say that the increasing use of technology will destroy the human species as we know it. Harari is concerned about an AI arms race and competition over who gets AI’s benefits first and how they’ll be used. He’s also concerned about the societal impact of the use of AI and how it may destabilize our global environment.
While not all experts are scared of AI, many are showing concern.
2. Computers Mess Up - A Lot
The truth is that as we move closer and closer to a world of artificial intelligence, computers still mess up a lot. Even in this modern age we still have to occasionally reset our laptops or cell phones because coding errors cause them to lock up or malfunction. And how many of us have phoned a credit card company or other automated system and faced frustration because its voice recognition software doesn’t really work?
Computer programs are written by humans and often require a great deal of debugging. As much effort as is put into making computer programs go live with minimal issues, they still often have to be fixed after they’re released.
We can remember “Tay,” the racist AI chatbot, as just one example of how computers mess up.
Amazon attempted to use AI to screen job candidates and found that, because of historical biases in its data, the AI carried those biases forward into new hiring decisions.
And Google created a word scoring system that would score the positive or negative nature of certain phrases. Over time it was found that certain phrases which have one connotation in society were scored quite differently by the computer. This made it clear that, at least as of 2017, computers could not understand all the nuances of language and society. The sketch below shows how this kind of skew can arise.
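To illustrate the mechanism behind both of these failures, here is a minimal sketch (hypothetical Python using scikit-learn, with an invented toy dataset; it is not Amazon’s or Google’s actual system) of a word-scoring model that learns a negative association for a neutral word simply because of how that word appears in its training data:

```python
# A minimal sketch of how bias creeps into a word-scoring model.
# The training sentences are invented for illustration; this is a toy
# bag-of-words sentiment model, not Google's or Amazon's real system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Deliberately skewed corpus: phrases mentioning "robot" happen to
# co-occur only with negative labels, the way certain identity terms
# co-occurred with hostile text in real web data.
texts = [
    "what a wonderful day", "great service and friendly staff",
    "I love this product", "the team was very helpful",
    "the robot broke again", "angry at the robot support line",
    "terrible robot answers", "the robot made everything worse",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# A perfectly neutral sentence is scored negative purely because it
# contains the word "robot".
neutral = ["the robot arrived on time"]
prob = model.predict_proba(vectorizer.transform(neutral))[0][1]
print(f"P(positive) for a neutral sentence: {prob:.2f}")  # well below 0.5
```

Swap “robot” for a demographic term or a phrase from historical hiring records and you have the Amazon and Google failures in miniature: the model never “decided” to be biased, it simply reproduced the correlations it was fed.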
As we move to an AI world, our computers and systems still contain a lot of errors that need human intervention.
3. Humans Mess Up Too
Artificial intelligence, in whatever form it takes, will be started by humans. We will write the initial code that starts the system writing its own code and finding its own direction. And the truth is humans make a lot of errors, particularly in new and innovative areas.
In the startup and investment world you often hear people saying to “move fast and break things.” It’s common for humans in new areas to fail, and this is to be encouraged where it leads to innovation. Edison, like the Wright brothers, was celebrated for pushing through failure and finding models that worked to improve our technology.
It seems that we have this human tradition of constant failure and improvement. And it works very well. You'll find lots of self-help articles encouraging people to test things, look at the results, and improve.
But when we’re dealing with AI and computers that could shape the future of society and potentially make life-and-death decisions for humans, do we really want to mess up? Can we afford trial and error? We can’t hit the reset button. Can we trust our fellow humans to get something right, to set the seeds for the future, when we may not be able to control the outcome or change what the AI will do once it’s released?
4. It’s Hard to Control What You Don’t Understand
One of the biggest issues with AI is that we cannot control what we don’t understand. Quite simply, AI moves very fast, with high processing power, and runs code that humans cannot follow.
The complexity of our society has grown to the point that there are often transactions and events that no one human understands. Several people quoted during the financial crisis of 2008 stated that there were financial transactions going on, particularly in the area of securitization, that no one person could understand.
Similar things are happening in medicine and other areas of human endeavor. Quite simply, with our areas of specialization, no one person understands every piece of everything. There’s probably no one person who understands every functioning part of a modern car and could recreate it from scratch. Similarly, no one doctor could diagnose every disease.
With inventions like IBM Watson and other artificial-intelligence-based systems, we have created computer systems that no one person, or even group of people, can fully understand or control.
A real question arises: can we control what artificial intelligence will do, particularly once the code starts updating and revising itself, when no human can process at that speed or understand what the artificial intelligence is doing?
And as artificial intelligence evolves, the gap between human understanding and the computers’ understanding is expected to grow. Just as we don’t know how a child will evolve and what they’ll do in life as they age and learn, we don’t know how a computer will evolve. We simply can’t predict what an AI will do as it encounters millions, tens of millions, or perhaps billions of data points that may change its understanding of and direction in the world.
5. Who Polices the Police?
It’s famously been asked in civil rights cases: who polices the police? Who oversees those who oversee us? It’s a fundamental question for any society, and it has deeply democratic roots in fairness.
It reflects that our political systems are built on checks and balances. In theory, no one person should gain so much power that they could wield it to kill or harm others. Our systems of oversight and accountability deter or punish bad behavior and abuses of power.
But even the proponents of AI acknowledge that AI will become very powerful and nearly all-knowing. They also acknowledge that humans won’t understand everything the AI is doing. So who could put checks and balances on the decisions of an AI? Who makes sure the AI is doing the right things?
6. Values: Who Sets Them?
From an early age humans are taught values and character by other humans. The idea is that we should understand how we fit together as part of society. Even children should understand what is truly important to our human existence.
As we move to an AI world, a world where computers will program themselves, it seems we need to take the time to set values. Computers need to be told what is important. The European Union is already making efforts in this direction with certain regulations.
But who really sets the values of AI systems? Even if we humans set regulations for computers through government bodies, will the AI systems fully understand those value systems? How can we make sure they get it right?
The bigger question becomes what happens once an AI system has been up and running for some time. What happens if the AI starts to question the values it was initially programmed with in light of new learning and understanding? Who determines what is most important to an AI?
An age-old theoretical question is what happens if AI becomes very knowledgeable and sees a problem with human society. What happens if an AI gains a complete understanding of humanity and our resources and decides that a reduction in population is the correct path forward?
Certainly the AI would have the tools and power to carry this out by using drones or other automated military equipment. But who lives and who dies? And how? Who decides that?
Setting and monitoring the moral values of AI is a real concern.
7. We Aren’t Having the Hard Conversations
Speaking of values, what’s important, and what the goals of human society are, it is important to note that we’re not having the truly hard conversations. Much of our political system and society today has evolved to allow us to avoid hard conversations with one another.
Much of our political gridlock and ineffectiveness is based on creating stalemates that allow our society to keep moving forward. This is simply the best system we’ve come up with to date. If we can’t reach an answer or get to some middle ground, we just stop dealing with the issue through gridlock and disagreement. As simplistic as it sounds, that’s how we’ve evolved as a society.
However, this won’t work in an AI world. AI is more uniform and more powerful than anything we’ve ever had in society before, and holding inconsistent positions simultaneously is at fundamental odds with how machines and computers work. Yet we, as humans, are not having conversations about what AI should do, what it shouldn’t do, and how it should handle tough situations. We’re also not talking about the oversight of AI.
Meanwhile, while we avoid these hard conversations, AI continues to move forward. More and more experiments are launched, and computer processing and programming become more and more powerful. We move closer and closer to a viable AI.
There’s a real risk that at some point a programmer is going to launch an AI system that could potentially overtake the world. This is the exponential learning, growth, and control type of AI that Elon Musk seems to fear. We don’t know when or where that will start, but a truly efficient AI system could take over financial markets and our money and thereby gain enormous power over our world.
If we don't have the hard conversations about what the values of the AI should be and what and how AI should be regulated we simply don't know what it will do.
Conclusion
The question posed for this article was “will artificial intelligence kill us?” We’ve examined a lot of the pieces pushing on the AI puzzle today, and there seems to be one common theme: we don’t have much control over artificial intelligence. History shows it can spiral out of control.
Up until now we’ve often allowed new technologies to come into our society and dealt with the consequences later by updating laws and regulations. This has worked fairly well in that any injury or danger from new products or services has usually been localized to a small sector of our total population.
AI promises a new era: an artificial intelligence world where computers can do much more than they’ve ever done before. The impact of computers will be bigger, and so will any potential harm from AI systems.
It's my opinion that we simply don't know what AI will do or where it will go. This is scary. We don't know how AI will push on our human systems. A fully deployed artificial intelligence world may lead to computers telling humans how to live their lives based on insights that computers have that we don't have. Additionally, those computers may have to make fundamental decisions about human society such as sustainable population and access to resources like water, land and healthcare.
The truth is we just don’t know what AI is going to do. We know it’s powerful. We know it has the potential to do massive things. But like everything, it needs to be bounded, and we humans are not doing the work right now to set the boundaries of the potential future artificial intelligence world.
Will it kill us? Probably, in one way or another.
By: The Our Shawn McBride, who is constantly studying the Future of Business as the host of The Future Done Right(TM) Show. If you want regular content on the future of business, subscribe to get new blog posts from us here.
Do you really want to make plans that work?
If you really want to get deep into making great business plans, make sure you get my FREE guide “Planning In Light of A Changing Future” by clicking here.
Resources:
Future Done Right(TM) YouTube Channel - Check out this YouTube Channel for interviews and discussions about the future of business.
FREE preview copy of Business Blunders! - This is my first book and it looks at common business mistakes that I’ve seen in my years of working with business owners. This will allow you to avoid those issues in your business!
And a little Thank You?
If you like this article, can you leave a clap for it on Medium.com?
This was originally posted at the PlanningDoneRight.com blog. You can subscribe to get new blog posts from us here.
NOTE: This article may have affiliate links where we get a small commission if you purchase an item mentioned.