"Shall We Play a Game?"
J. Oliver Glasgow
Founding Partner, Global Information Rights Executive, 2006 Time Magazine Person of the Year, TEDx Speaker
"Shall we play a game?"
That was Joshua, the fictional A.I. from the 1983 movie War Games. Joshua’s alter ego, his day job, as it were, was WOPR, a mainframe strategic nuclear control system used to conduct global thermonuclear war.
The difference between Joshua and WOPR was the implementation of policy.
Before policy was applied, Joshua was just the dream-child of Dr. Stephen Falken, nothing more than a tool for understanding various game strategies. It was the rules of U.S. nuclear deterrence policy that shaped Joshua into WOPR.
That’s the way it usually works—the advent of tools followed by the subsequent development of their rules. First we had nukes, and then we made policy.
Since then, there really hasn’t been a new policy. That’s because there really hasn’t been a new tool—except, that is, for the one we pretty much missed in 1983.
Ironically, while the movie War Games was produced to make transparent the policy realities of nukes, it actually introduced us to a totally new, potentially more lethal tool: cyber-attacks. And while the movie spent much of its arc peeling back the morality of the former, it pretty much ignored the latter, the sins of its fictional hacker, David Lightman.
Which is exactly what U.S. policy has done for the tool of hacking since that time…ignored it. While various U.S. departments have developed some hacking approaches, there’s really no overall policy to speak of that sets forth rules.
In a word, none.
To be clear, the act of hacking is, in itself, amoral. Think of Victor Frankenstein vs. J. Craig Venter (the scientist who led the mapping of the human genome): they were both hackers. Each studied an encapsulated black box of knowledge, functionally tore down what they found, and applied what they learned to some other manifestation. One was clearly bad, but not because he was a hacker.
Cyber-attacking is the Frankenstein’s monster of digital hacking. It not only tears down a system but also harms people, and we have no centralized policy addressing it.
Could you imagine if the U.S. had developed the nuclear bomb and then, for the next 35 years while other countries followed suit, developed no strategic nuclear policy? That would be both crazy and scary! Strategic nuclear weapons are powerful. Hacking is arguably more powerful. Imposing sanctions in response to either form of attack would be pretty lame.
“We have this tool, but we have no rules!” It would just be stupid.
Yet that’s precisely what we’ve done with hacking. In the decades since the tool’s emergence, while other countries (and, worse, any cause that wanted to) have certainly followed suit, we have developed no strategic hacking policy to speak of.
To begin to address this ridiculous oversight, a serious policy needs to be developed, including:
· a general description of deterrence theory
· a historical review of hacking evolution by eras
· a discussion of the future of deterrence
· historical examples of deterrence successes and failures
· a review of significant contributors to the study of cyber-attack policy
A white paper needs to be crafted from the West Wing to enunciate the “what and why” of these points, and it should be answered by a green paper, most likely from DHS, addressing the “how.” Until that happens, we don’t even know what deterrence might begin to look like.
You can’t develop a thing unless you can at least describe the thing.
Strategies and forces providing the capacity for a credible response in the event of direct military assault must be developed. This doesn’t just happen by itself. There must be a planned response that is both credible and effective, period. Without it, deterrence would be a scam. That’s likely why we’ve already been hacked at least 15 notable times in the past nine years.
War-hacking capability is the essential element in cyber-attack deterrence. There must be a credible option for a military response. In this way, hacking policy would lead to a peace-maintaining capability. Warfighting capability is deterrence. But you won’t have strategic capability without the policy to develop it and the rules to control it.
In the world of software development, yesterday’s flat text files are today’s Data Lakes. Old things become new (or is that “re-use?”). Any serious policy must capture and consider the eras of the tool’s evolution.
Likewise, serious policy must look boldly into the future. There’s not just one map of threats (technical and non-technical); there’s a freakin’ atlas with scores of maps! Policy would bring the steely necessity to gaze out past the ends of the flat world and be willing to peer into the briny mist to spy the sea monsters that lie beyond. Cyber threats always lie in the future, and they are not easily deterred without proof of counter-capability.
Policy can also drive the review of significant contributors. Will our policy reach out to captains of chaos to harness their harassing spirits, or will it recruit and develop new talent? We’ll never know if we don’t have a policy!
Let’s go ahead and admit we have a problem. That’s the first step, after all. We have developed tools potentially more lethal than anything else prior, we are improving those tools every day, and we pretty much don’t have any rules in place at all.
Spending more money to update our defenses isn’t the same thing as developing a policy.
By the way, one of the first things we may want to add to our shiny new cyber-attack policy, once we get one, is to get better at cyber-attacking. We need to be developing an entire range of weaponized hacks. Think how diverse our nuclear strategy has been: MX, Polaris, hardened silos, and Star Wars. We need to be just as adept with our ability to cyber-strike, and we will likely need to convincingly demonstrate that power like we did in the early days of the nuclear race.
To do that…we need policy.