Three Ways Artificial Intelligence Could Be the End of Us — And What Will Be Required For This Not to Be Our Fate
[This article is one of a series of extended pieces from the Institute for Creative Development that apply Creative Systems Theory and the concept of Cultural Maturity to critical issues. It and related pieces can be found in podcast form at www.lookingtothefuture.net and on the Institute for Creative Development blog at www.culturalmaturityblog.net.]
Machine learning should provide great benefits in times to come. But a growing number of prominent figures—of particular note Stephen Hawking and Elon Musk—have warned of potentially dire consequences. I think the risks are very real. Unless we can learn to think about the potential dangers in more sophisticated ways, some of the most intriguing innovations of our time may very well prove to be our undoing.
This brief article addresses two topics that are key if we are to avoid this result. It looks first at multiple mechanisms through which such harm could occur. (We can only address what we can recognize.) It then turns to what will be necessary if major harm is to be avoided. I will argue that the needed antidote to cataclysm lies in more deeply understanding the fundamental differences between human intelligence and what today we are calling “artificial intelligence.”
Cataclysmic Scenarios
The power of machine learning—for both good and ill—comes from the immense quantities of data modern computers can process combined with the ability of AI algorithms to learn as they go. A couple of further characteristics directly contribute to the dangers. Often human operators don't have access to a machine learning system's underlying learning processes (and commonly could not understand them if they did; they are simply too multifaceted). And once a machine learning process is started up, it will pursue the goal assigned to it unswervingly, whatever the consequences, unless specific safeguards are built into the instructions.
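To make that last point concrete, here is a deliberately simplified sketch, not drawn from any real system and with all names and numbers invented for illustration, of how an optimization loop simply follows whatever objective it is handed, showing restraint only if a safeguard is explicitly written into it:

```python
# A toy illustration, not real AI code: a greedy optimizer that follows its
# assigned objective and rules nothing out unless a safeguard is supplied.

def optimize(objective, candidates, safeguard=None):
    """Return the candidate that scores highest on `objective`.

    The loop has no sense of whether the objective is wise; any restraint
    has to arrive through the optional `safeguard` check.
    """
    best, best_score = None, float("-inf")
    for candidate in candidates:
        if safeguard is not None and not safeguard(candidate):
            continue  # an explicitly coded guard is the only thing that says "no"
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Hypothetical example: maximize engagement, with and without a harm limit.
actions = [
    {"name": "gentle nudge", "engagement": 5, "harm": 0},
    {"name": "addictive loop", "engagement": 9, "harm": 7},
]
print(optimize(lambda a: a["engagement"], actions)["name"])
# -> "addictive loop" (the assigned objective is all that matters)
print(optimize(lambda a: a["engagement"], actions,
               safeguard=lambda a: a["harm"] < 3)["name"])
# -> "gentle nudge" (only because a safeguard was built in)
```

The sketch is trivial, but the asymmetry it illustrates is the important thing: pursuing the goal is automatic, while restraint exists only to the degree someone thought to code it in.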
I think of three mechanisms through which machine learning could have truly cataclysmic consequences. The first two we have already encountered, but at a limited scale. The last, the one that would be most difficult to combat, we have not yet faced.
In the first scenario, some kind of bad actor on the world stage wages an AI-based attack on a perceived enemy. The goal could be the destruction of physical infrastructure such as electrical grids and water supplies, disruption of communications networks, or, as we have seen attempted in very rudimentary form with Russian interference in elections, a fundamental undermining of social and governmental structures. This kind of attack would not require that the perpetrator be a developed nation—only technical abilities that should become increasingly available would be needed. It could easily be mobilized by a rogue state such as North Korea or even by a terrorist group. Once initiated, it could quickly spin out of control. We legitimately include this kind of application when we think of “weapons of mass destruction.” In time, it may prove the most problematic example of such weaponry.
The second scenario is less obvious in its destructiveness, but it is where we currently see the greatest harm. Increasingly today, our electronic devices are designed to capture our attention, doing pretty much whatever it takes to do so. And AI plays an increasing role in how they accomplish this. As a psychiatrist, I consider device addiction one of today’s most pressing concerns. In previous articles, I’ve described how the mechanisms of device addiction are essentially the same as those that produce the attraction of addicting drugs. Our devices create artificial stimulation that substitutes for the body feedback that would normally tell us that something matters. (See “Techno-Brilliance, Techno-Stupidity, and the Dangers of Techno-Utopian Craziness—Needed Perspective If Our Future Is To Be Anything We Can Celebrate.”) Machine learning has the potential to compound those mechanisms many times over, supporting the creation of ever more powerful digital designer drugs with increasingly destructive results. AI’s role in producing a world in which distraction and addiction more and more replace meaningful human activity could very well be the way it ultimately contributes most directly to our undoing.
The third kind of scenario is what people in the tech world most often point toward when they warn that AI could be the end of us. Systems applying machine learning could very well come to outcompete us. It is easy to make the goal of a machine learning algorithm simply to have the mechanism propagate itself. Such algorithms can be single-minded in their competitiveness in ways that we humans will never be, and would never want to be. (In spite of often thinking of ourselves as competitive in a simplistic Darwinian fight-for-survival sense, we—thankfully—are more complex than just this.) People often express fear that AI will eventually be more intelligent than we are. As I will clarify shortly, this is not the real concern. In limited ways machine learning systems are already more “intelligent” than human systems and will only get more so (ultimately a good thing if they are to serve us). The danger lies with the possibility of an algorithm having the single-minded goal of self-propagation (a goal that could be programmed in or be a secondary product of a kind of digital “natural selection”). The fact that learning with such systems can take place autonomously and is often beyond our ability to decipher, much less control, means that we face the real risk of runaway mechanisms in which the destruction of humanity, if not an outright intent, becomes an unintended consequence.
The Antidote
These are scary possibilities. And as I have suggested, they are not just possibilities. In their beginning manifestations, they are realities we already live with. It could easily seem that there is nothing we can do. If the machines want to take over, eventually they will. This is the conclusion that many people in the tech world who are aware enough to be concerned seem to be reaching.
This very well may be our fate, but I don’t think it needs to be. The missing piece is the simple recognition that machine learning and human intelligence have very different mechanisms. In fact, they aren’t that related at all. As a psychiatrist—someone who works every day with intelligence’s multilayered complexities—I find this conclusion so obvious it hardly needs stating. You will note that I do my best to avoid the term “intelligence” when talking about machine learning. I do this because it is so important that we not lose sight of this distinction. The key to AI not being the end of us lies in appreciating the fundamental ways machine learning and human intelligence are different.
In a moment I will get to why this kind of distinction is so commonly missed—particularly amongst those who most need to solidly grasp it. But we can start with a particularly provocative observation that follows from those differences and is critical to how human intelligence can serve as a buffer against potential dangers. I’ve described how machine learning is single-minded in pursuing its goal. There is an important sense in which human intelligence is not just more complex in all it considers; it is by its nature moral.
That may seem like a radical claim given how frequently we are not at all moral in our everyday dealings. And given how often history reveals acts of which we should not at all be proud, it might seem preposterous. But most often in our daily lives we act with basic kindness. And when we look at history’s big picture, we find humanity bringing ever greater complexity to its moral discernments.
This last observation has pertinence for these reflections both because it highlights the inherently moral nature of human intelligence and because it invites the intriguing question of whether new moral capacities might become possible in the future. In early tribal societies, much that we would today consider totally unacceptable was common—for example, slavery, human sacrifice, and even cannibalism. The early rise of civilizations brought more formal philosophical ponderings, but as we saw with the early Greeks and Egyptians, at least slavery continued as a common practice. The appearance of monotheism brought more overt attention to moral concerns, but the resulting moral absolutism often left us still far from what we would find acceptable in our time. While the Middle Ages gave us the Magna Carta, it also gave us the barbarism of the Crusades and the Spanish Inquisition, with thousands of people executed simply for their beliefs. In our Modern Age we’ve witnessed important further steps, with, for example, the Bill of Rights and its stated freedoms not just for religion but for speech more generally. And with the last century, we’ve seen essential further advances—for example, with the civil rights and women’s movements, and more recently with advocacy for the rights of people with differing sexual orientations.
None of this is to suggest that we are always moral in our actions. Commonly we are far from it. My point is simply that there is clearly something in what it means to be human that is allied not just with advantage, but with a larger good. And it is embedded deeply enough that we can think of the human narrative as a whole as a story of evolving moral capacity. Human intelligence by its nature engages us in questions of value and purpose. We are imperfect beings. But we are also in the end moral beings.
In contrast, as I’ve described, machine learning follows the goals it is programmed for. When it comes to addressing the dangers that potentially accompany machine learning, this distinction is critical. Machine learning is a tool, and one with great potential for good. But in contrast with human intelligence, there is nothing in it that makes it inherently good. Put bluntly, AI is not really intelligence, or to the degree it is intelligent, it mimics only one very narrow and limited kind of intelligence (and as I will get to shortly, even that imperfectly).
In a previous article, I pointed out that the Turing test—the current gold standard for determining AI’s success at replicating human intelligence—is insufficient, and ultimately bad science. (The test proposes that if a computer responds to your questions and you can’t tell it is a computer, then artificial intelligence has been achieved.) Imagine that a person constructs a bright red toy sports car made out of candy, perhaps using 3D printing to capture every detail. From a distance, you can’t tell it from a real car. Now such a toy might be fun and useful for many things, even amazing things. But the fact that you mistake it for a real car does not make it one. If we apply such misguided thinking to AI, we inevitably reach unsound, and potentially highly dangerous, conclusions.
In our time, we easily miss the critical difference between machine learning and human intelligence. Indeed, because we so readily idealize the technological (in effect, make it our god), we can get things turned around completely. Caught in techno-utopian bliss, we can make machine learning what we celebrate. We do so at our peril. Our well-being as a species depends on remembering that our tools are just that, tools. Our ultimate task as toolmakers is to be sure that we use our ever more amazing tools not just intelligently, but wisely. That starts with being able to clearly distinguish between ourselves and our tools. Machine learning will provide a particularly defining test of this essential ability, one on which our very survival may depend.
We are left with two more specific questions that need answers if my claim that the nature of human intelligence provides an antidote to potential dangers is to be more than just wishful thinking. First, we need to better understand just what it is that makes human intelligence moral, just how its mechanisms differ from what we see with machine learning. Second, we need to make sense of how we might gain the perspective necessary to get beyond prevailing techno-utopian assumptions and think with the needed greater maturity and complexity. To answer these questions, we need the more detailed formulations of Creative Systems Theory.
What makes human intelligence different—and in that difference moral? CST proposes that the key lies in the fact that human intelligence is multiple. There is the kind of rational processing done by our prefrontal cortex, in which we, particularly in our time, take appropriate pride. But there is also the intelligence of our emotions, which organizes much in human relationship and human motivation. In addition, there is the intelligence of imagination that informs the artistic and our dreams, of both the waking and sleeping sort. And at an even more basic level, we have the intelligence of our bodies, manifesting not just in our erotic impulses but in the workings of our immune systems and even in interactions with the bacteria in our guts. (See Multiple Intelligences for a more detailed look.)
A simple way to think about how machine learning and human intelligence differ starts with the recognition that the mechanisms of AI parallel those of just one aspect of intelligence—rational processing. In fact, AI, even at its best, mimics rational processing only imprecisely. If we look closely at rationality, we see that it functions in ways that are more nuanced and layered than we tend to assume. But the analogy works fine as a point of departure for grasping the basic strengths and limitations of AI.
The idea that we might have machines that can carry out many of the more rational/mechanistic tasks of cognition—and more rapidly than we can—is a great thing. But it is important to appreciate that while the rational parts of intelligence serve us powerfully, very few real-world questions can be effectively answered with rationality alone. Certain purely technical concerns can be addressed in this way, but not concerns that in any way include values or human relationships—as most any that really matter to us eventually do.
Human intelligence, with its multiple aspects, is simply more complex than can be modeled with a computer. Here I mean complex in a deeper sense than just more complicated. If I were referring only to technical complexity, it would be right to claim that with time we should be able to get all the complexities nailed down. The reality is that human intelligence reflects a wholly different kind of complexity. This kind of complexity not only inherently supports life-affirming decisions, it is precisely what has made human life possible. If we miss this fundamental difference, we will end up making naively misguided, perhaps ultimately suicidal, decisions.
CST addresses what more is involved by answering a question that becomes obviously important once we recognize that intelligence has multiple aspects: just why is human intelligence multifaceted? And more, why is it multifaceted in the specific ways that it is? CST describes how our multiple intelligences work together to drive and support our toolmaking, meaning-making—we could say simply creative—natures. By virtue of our multiple intelligences, we are not just inventive, but also inherently purposeful in our actions. (In an earlier post, The Key to Artificial Intelligence Not Being the End of Us, I examine this sense in which human intelligence is inherently purposeful—and thus inherently moral—from multiple angles.)
To answer the second more specific question—how we are to gain the perspective needed to get beyond techno-utopian assumptions—we first need to address why this kind of confusion arises at all. Thinking of ourselves as nothing more than complex machines began centuries back with the clockworks assumptions of the Age of Reason. Modern Age thought made rationality—the kind of thought that computers model—what it is all about. More recently, this kind of thinking has reached ever greater extremes. Today we find people seriously claiming that all we need do is download our neural contents—like a computer file—and we will have eternal life. When I use the phrase “techno-utopian,” I’m referring to this kind of ultimate extension of the Age of Reason story.
CST documents how such thinking, like the beliefs of the Middle Ages or the more tempered mechanistic assumptions of the Age of Reason, represents not some end point, but just one limited chapter in the evolution of human belief. Within CST, the concept of Cultural Maturity describes a further, more integrative stage made possible by specific cognitive changes. With the “Integrative Meta-perspective” that results, we are able to more fully stand back from all of intelligence’s aspects (including the rational) and also to more deeply engage and draw on the whole of intelligence’s complexity.
When it comes to the possibility of managing AI wisely, this picture includes both good news and bad news. On the good news side, it means that the more complex and sophisticated kind of thinking needed to make the required discernments is not only possible; as potential, it is built into who we are. On the bad news side, if the concept of Cultural Maturity is not basically correct, there is little justification for hope. And we are only taking our first baby steps into this new, more mature territory of experience.
Effectively managing machine learning and its effects will depend not just on drawing on the moral nature of human intelligence, but doing so with a new kind of sophistication. We have to ask every step of the way whether a particular application ultimately benefits us, in the end makes us “more.” We also have to be exquisitely sensitive to possible unintended consequences. (It is unintended consequences, rather than malevolence, that are most likely to be our undoing.) Moving forward effectively will require not just applying a moral lens every step of the way, but bringing to bear a maturity—indeed ultimately a wisdom—in making our moral discernments that we are only now becoming capable of.
All of this brings us back to the essential recognition that our tools, however amazing they may be, are only tools. It is a recognition that becomes obvious—common sense—with Cultural Maturity’s changes. With culturally mature perspective, we no longer confuse ourselves with our tools. And certainly we stop treating our tools as gods. This last recognition becomes particularly critical with AI.
If you came upon a person who, being particularly fond of his hammer, put it on an altar and burned incense in its honor, you might find him weird but let it pass. If, however, the hammer the person worshipped was capable of rising up on its own, hitting the person on the head and killing him—and perhaps killing everyone else in the process—then you would appropriately consider the person dangerous and insane. That is the reality that we face today with AI. Machine learning can serve us richly as we go forward. But it can do so only if we are clear in our understanding of what it is and what it is not.
J. Michael -- And is your assumption that AI will eventually be able to do anything human intelligence can not more a "faith claim" than anything based on evidence? And yes, all the best in your future inquiries and contributions. cj
J. Mitchell -- I suspect "survival of the fittest" in the sense of individuals competing to exist is simplistically conceived—more a reflection of Modern Age thought than of how things actually work. Better that, as social, meaning-making beings, we think in terms of survival of the most creative, or survival of those able to make the most life-affirming/life-supportive choices. From that interpretation, what I suggest holds. (I'll take my chances as far as being hunted down by future AI.)
J. Mitchell -- I think I made quite clear that I think we as a species are far from enlightened morally. Indeed we are, and always have been, frequently absurd and dangerous in our actions. My point is simply that if one looks at the big picture, one sees a basic feedback that directs us toward actions that are supportive of human purpose and well-being. I see this in a more immediate way every day in my work as a psychiatrist. I don't need to direct a person toward meaningful action. I only need to help them get more deeply in touch with what they care about, and meaningful action -- and a more meaningful life -- is the result. Are we up to the level of moral maturity needed to manage emerging technologies such as AI? I think there is a good chance we are not. But I do know that any hope of doing so successfully must start with the recognition that there is in fact a moral challenge to take on.
Our main evolutionary advantage over other species is our ability to share information with each other and consequently work in teams on a level unprecedented by other forms of life on Earth. Humans being forced toward a higher collective moral standard in interacting with each other, as opposed to morality being inherent in our intelligence, could just as easily be seen as a logical function necessary to work more effectively as a group, reinforcing our evolutionary advantage over other species (and over other teams of humans). Seen from the perspective of the estimated 150-200 forms of life driven to extinction every day (primarily by human activity), humans look like the amoral AI overlords that we worry about encountering one day. I think your argument about the morality of human intelligence is anthropocentric and consequently subjective. Most of your points are about the complexity of "moral" human intelligence being beyond what AI can replicate, so you're really arguing that general AI will never exist, even though it is the goal of many AI research groups (https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/in-the-general-ai-challenge-teams-compete-for-5-million) who know more about AI than we do, and is arguably the end goal of AI development. All of the things you attribute to uniquely human intelligence will be accessible to a robot with general AI if it ever comes to fruition. I worry that indefinitely “othering” machines into a subservient caste based on ambiguous ethical standards and a misunderstanding of what AI is capable of is another danger facing future humanity. While AI infrastructure attacks and weapons systems are definitely more immediate concerns, a future AI slave revolt is also something worth keeping in mind.