Moving to a Trustworthy Cyber Foundation
The Role of Strategy, Security Engineering, and Triage
Recent failures to detect nation-state level cyber-attacks on systems and organizations in the United States have once again prompted calls for better threat detection, more effective threat information sharing, more sophisticated threat hunters, faster patching, increased automation, more capable vulnerability scanning tools, and the use of artificial intelligence. These are all very good things to do, but they will not solve the larger, more pervasive problem—an enormously complex set of interconnected systems spread across the country, including the sixteen sectors of the critical infrastructure, and organizations that are totally dependent on that technology for their mission and business success.
Threat detection, vulnerability scanning, and threat information sharing (at any speed) are necessary components of a well-designed cyber protection strategy, but they are certainly not sufficient. Why? Because the complexity of systems continues to grow exponentially—from the trillions of lines of code in software applications, middleware, and operating systems to the billions of devices and system components that make up those connected systems. That, my friend, is what we call the attack surface—the inviting, rich target presented to adversaries.
So what’s the problem? The problem is that in a full-fledged “cyber war” that pits threat and vulnerability hunters against determined adversaries and an ever-expanding attack surface of complex systems—complexity always wins—not just some of the time, but all of the time. The fundamental principles of computer science and complexity theory dictate that no matter how much threat information is gathered and shared, how effective the scanning tools are, how skilled the human defenders are, and how intelligent the AI programs are, the number of unknown vulnerabilities (i.e., zero-day vulnerabilities) in the system will continue to increase and dominate [3]. We know it. The adversaries know it. That’s why they’re winning. Not every time, but often enough that we are having a national conversation (again).
Changing Course, Getting Back on Track
In several recent articles, I provided a perspective on where we currently stand in the nation-wide effort to protect organizational systems and assets [4][5]. I also offered a few ideas on potential new approaches for defending cyberspace in the future [6]. The issues are not trivial, the tasks before us are significant, and as current events show, time is not on our side. That said, there is reason to be optimistic. Why? Because we are innovators, we are problem solvers, and we are committed to protecting our country from cyber adversaries. That’s a pretty good foundation from which to launch this grand experiment. So let’s dive in…
Successfully defending the United States against hostile cyber-attacks requires systems that are as trustworthy as necessary based on how much we depend on those systems and how they are being used—that is, sufficiently trustworthy for the types of missions and business functions the systems are supporting. In short, we need dependable systems that are worthy of our trust—where what we expect matches what the systems can deliver. This strategy can be expressed in four key, mutually reinforcing elements:
- Adopting a proactive cyber strategy on how to defend critical systems and assets with buy-in from all stakeholders;
- Prioritizing current systems and assets based on criticality and impact analyses;
- Reengineering selected systems based on the established prioritization, security requirements, and defined levels of assurance [7][8];
- Considering “wise use” of technologies based on mission risk.
Before discussing each of these elements in detail, there are several points worth noting. First, there are a very large number of legacy systems across the country (a.k.a., the installed base of systems). Second, the systems, components, and services produced by industry for consumers provide a range of assurance levels or trustworthiness—from little to no assurance to highly trusted and assured. Third, the adoption of systems security engineering practices by systems and software developers has been sporadic [9]. Finally, consumers do not have the information they need to judge the trustworthiness of the myriad component products they are buying and incorporating into their systems and infrastructures.
Adopting a Proactive Cyber Protection Strategy
Current cyber protection strategies are largely reactive in nature—employing penetration resistance and boundary protection as the principal components of a one-dimensional view of how systems are defended. Behind these initial defenses lies a uniform, brittle, and predictable attack surface on the inside of the systems. After a successful breach of the system perimeter, adversaries have, in many situations, unfettered access to continue the attack and do further damage. I discussed this one-dimensional strategy in a recent article, “Winning the Cyber War or Continued Cyber Insanity?” I proposed an alternative strategy that assumes the adversaries, at some point, breach the organization’s perimeter defenses and are operating within the system—in some cases as trusted insiders with extensive and elevated privileges, undiscovered, sometimes for lengthy periods of time.
How can we use systems security engineering [10][11] concepts and security architectures to effectively disrupt the attack surface and detect anomalous behaviors in the system with the ultimate objective of limiting the damage that adversaries can do after the initial breach? This can be accomplished with current technologies by detecting and impeding the adversary’s lateral movement and reducing the adversary’s time on target. Concepts such as zero trust, segmentation, micro-segmentation, deception, and strong identity, credential, and access management can increase the adversary’s work factor and reduce the adversary’s freedom of movement within a system. Think of a house with locks on all the doors and windows backed up by vaults and safes in every room providing additional “security domains” that can be designed with varying degrees of assurance. Virtualization and micro-virtualization technologies and techniques can also be employed, effectively “churning” the system components faster than the adversaries can carry out the exploits. Think of rapid component refreshes to known secure states after detecting malicious code, adversary activity, or unexpected or anomalous system behaviors through improved and evolving indicators of compromise. Agile development processes can facilitate rapid and pre-planned component refreshes.
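The zero trust and micro-segmentation concepts described above reduce to a simple idea: deny by default, verify every request, and explicitly allow only the segment-to-segment flows a mission requires. The sketch below illustrates that idea; the segment names and policy table are hypothetical examples, not any particular product’s API.

```python
# Minimal sketch of deny-by-default micro-segmentation policy enforcement.
# Segment names and the policy table below are hypothetical examples.

# Each rule explicitly allows traffic from one segment to another for one service.
ALLOWED_FLOWS = {
    ("web", "app", "https"),
    ("app", "db", "postgres"),
}

def check_access(src_segment: str, dst_segment: str, service: str,
                 identity_verified: bool) -> bool:
    """Zero trust: a request is denied unless the caller's identity is
    verified AND an explicit rule allows this segment-to-segment flow."""
    if not identity_verified:
        return False  # never trust by network location alone
    return (src_segment, dst_segment, service) in ALLOWED_FLOWS

# A compromised web server cannot reach the database directly...
assert check_access("web", "db", "postgres", identity_verified=True) is False
# ...but an authenticated app-tier request to the database is allowed.
assert check_access("app", "db", "postgres", identity_verified=True) is True
```

Even in this toy form, the effect on the adversary’s work factor is visible: a foothold in one segment no longer grants lateral movement, and each additional hop requires defeating another explicit policy decision.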
These concepts provide a solid defensive foundation and a “second dimension of protection” on which to layer the extensive hunting, detection, and response capabilities described above. The two dimensions start to look more like the human body—an exterior surface to prevent bacteria and viruses from getting into the body and an immune system that deals with the ones that get through the initial defenses. Ultimately, the first and second dimensions are reinforced with a third dimension based on sound systems security engineering concepts and practices to build systems that are resilient—capable of taking a punch and continuing to support critical missions and business functions, even in a degraded or debilitated state. This requires viewing security as a “foundational property” of the system (similar to reliability, fault tolerance, and safety) and achieving that property through disciplined and structured engineering processes as part of the system development life cycle.
Changing to an “assumption of breach” philosophy should not be interpreted as pulling back from the perimeter, moving away from the cyber fundamentals of “blocking and tackling” or ceding digital territory to the adversaries. Rather, it is controlling the environment of the cyber conflict and the operational tempo during the engagement. The measures of success in this approach are not based solely on whether the defenders keep the adversaries out of the system. They are also based on how well the defenders limit the damage to the system after the initial breach. Preventing data exfiltration, malicious code installation, loss of system integrity, and compromise of mission capability become the true measures of success. In the end, the objective is to achieve mission assurance.
Taking a Lesson from Battlefield Medicine—The Triage
So with a proactive cyber strategy in mind, how do organizations get started? One approach to solving a complex problem is to divide and conquer. There are seldom enough resources, including time, skilled individuals, or funding, to protect every organizational system to the highest level. Prioritization is essential. The prioritization process begins with conducting a criticality analysis. In a dynamic systems environment, criticality analysis may best be determined as a snapshot in time. The first NIST standard related to criticality analysis was Federal Information Processing Standard (FIPS) 199, developed in support of the 2002 FISMA legislation. The security standard required federal agencies to categorize each of their systems as low-impact, moderate-impact, or high-impact—where “impact” was based on the negative consequences (including loss of assets) to organizational missions or business operations if the systems were breached or encountered a failure mode. The adverse impacts ranged from limited to serious to severe or catastrophic. It’s all about risk.
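The FIPS 199 categorization follows a “high-water mark” rule: a system’s overall impact level is the highest impact level assessed for confidentiality, integrity, or availability. A minimal sketch of that rule, with illustrative example values rather than data from any real system inventory:

```python
# Sketch of the FIPS 199 "high-water mark" categorization: a system's overall
# impact level is the highest of the impact levels assessed for
# confidentiality, integrity, and availability. Example values are
# illustrative, not drawn from a real inventory.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def security_category(confidentiality: str, integrity: str,
                      availability: str) -> str:
    """Return the high-water mark of the three FIPS 199 impact levels."""
    return max((confidentiality, integrity, availability),
               key=LEVELS.__getitem__)

# A system with low confidentiality impact but high availability impact
# (e.g., a power-grid control system) is a high-impact system overall.
assert security_category("low", "moderate", "high") == "high"
assert security_category("low", "low", "low") == "low"
```

The high-water mark is deliberately conservative: one severe consequence in any of the three security objectives is enough to make the whole system a priority for protection and, in the triage approach described here, a candidate for reengineering.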
After the 2015 Office of Personnel Management breach, the Department of Homeland Security initiated the High Value Assets Program. This was an attempt to identify those federal systems and assets that required the greatest protection because the impact of loss would have serious or catastrophic effects on mission-essential capabilities, programs, or activities. FIPS 199 impact analyses were used to guide and inform the prioritization of legacy systems deployed across federal agencies. This approach may provide an excellent model to start a concerted effort to reengineer systems that are mission critical—systems that, if breached or failed, would have a severe or catastrophic impact on organizational operations and assets, individuals, other organizations, or the Nation. Systems that are part of the United States critical infrastructure could also potentially be considered as high value assets and prime candidates for reengineering from the ground up. It’s certainly something that every component of the critical infrastructure should be looking at based on current challenges.
Taking this approach recognizes the severity of the problem and provides a practical way forward to drive change in those areas that are most important to the economic and national security interests of the Nation. It also provides an opportunity to experiment with systems, create high assurance prototypes in different organizations and in different sectors of the critical infrastructure, and build systems that are, in fact, secure-by-design—using the systems security engineering concepts and practices developed over the past four decades. Moreover, it is possible to embrace the concepts of security, assurance, and engineering in the world of agile development—taking advantage of innovation and working at the speed of industry to support customer needs. Speed to market should not always be the primary driving factor for new software. With the right emphasis on validated security as part of the engineering process before products come to market, a new focus for acquisitions could come to light.
Wise Use of Technologies
Complex systems always carry some degree of risk due to the uncertainty inherent in those systems. Technologies that are employed in complex systems have different track records—the newer ones likely have little to none. The unknown risks for the latter should be considered. New technologies can also increase the attack surface of the system, for example by requiring connections to other systems for various purposes. Supply chain provenance information about the origin, production, modifications, and custody of specific products that employ new technologies can be useful in assessing the risk of using such technologies. In the end, organizations should consider whether the technology provides advantages that are worth the risk.
Organizations may determine, based on their criticality/impact analyses and risk assessments, that certain technologies and system components exceed their tolerance for risk and may place critical or essential missions in jeopardy. Therefore, consideration for the “wise use” of technologies in systems may be an important factor in the decision to find alternative means or methods to accomplish those missions. While the latest technologies may provide the speed, efficiencies, and effectiveness desired by organizations, if there is a high likelihood of a successful attack on a particular technology by an adversary without any known or obtainable mitigations, such susceptibility may motivate the selection of an alternative approach to deliver the needed capability. Even with the best engineering practices, the technology may ultimately be too risky. Wise use may be the only mitigation.
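The wise-use decision above can be thought of as comparing assessed mission risk against the organization’s risk tolerance. The following sketch makes that comparison explicit; the 0-to-1 scoring scale, the multiplicative risk model, and the threshold are hypothetical simplifications for illustration, not values from NIST guidance.

```python
# Illustrative sketch of a "wise use" decision: compare a technology's assessed
# mission risk against the organization's risk tolerance. The 0..1 scale, the
# likelihood-times-impact risk model, and the threshold are hypothetical.

def wise_use_decision(likelihood: float, impact: float, tolerance: float,
                      mitigations_available: bool) -> str:
    """likelihood and impact are scored on a 0..1 scale; risk = likelihood * impact."""
    risk = likelihood * impact
    if risk <= tolerance:
        return "adopt"
    if mitigations_available:
        return "adopt with mitigations"
    # High likelihood of a successful attack with no obtainable mitigations:
    # wise use means finding an alternative way to deliver the capability.
    return "seek alternative"

# A technology with near-certain, high-impact compromise and no mitigations
# is rejected in favor of an alternative approach.
assert wise_use_decision(0.9, 0.9, 0.2, mitigations_available=False) == "seek alternative"
# A low-risk technology within tolerance is adopted.
assert wise_use_decision(0.1, 0.5, 0.2, mitigations_available=False) == "adopt"
```

The point of the sketch is the third branch: when no mitigation can bring residual risk within tolerance, the decision is not “accept the risk anyway” but “deliver the mission capability another way.”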
Conclusion
Moving from a reactionary mindset in a cyber protection strategy where the primary focus is prevention, detection, and response to a proactive mindset where engineering fundamentals, system resilience, and wise-use considerations are paramount will take strong leadership from the top. It’s forward-thinking, bold, and carries potential risk. But it is a risk worth taking.
[1] R. Ross, "The Adversaries Live in the Cracks"
[2] R. Ross, "How Computer Security Defenses Can Fail"
[3] Defense Science Board, "Resilient Military Systems and the Advanced Cyber Threat"
[4] R. Ross, "Right Strategy, Wrong Century"
[5] R. Ross, "Fighting the Last War"
[6] R. Ross, "Protecting the Nation’s Critical Assets"
[7] R. Ross, "Time to Get Serious about Assurance"
[8] R. Ross, "System Assurance: A Missing Component to Military Readiness?"
[9] R. Ross, "The Mysterious Disappearance of Systems Security Engineering"
[10] R. Ross, "Rethinking Our View of System Security"
[11] R. Ross, J. Oren, M. McEvilley, NIST SP 800-160, Volume 1, "Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems"
A special note of thanks to Greg Touhill, Tony Cole, Mark Winstead, Keyaan Williams, and Gary Stoneburner, long-time cybersecurity and SSE colleagues, who graciously reviewed and provided sage advice for this article.