Cybersecurity Will Improve with Predictive Models
Matthew Rosenquist
CISO at Mercury Risk. Formerly Intel Corp. Cybersecurity Strategist, Board Advisor, Keynote Speaker
Prediction capabilities can have tremendous value in the world of security. They allow for better allocation of resources: instead of trying to defend everything from all types of attacks, they enable smarter positioning of preventative, detective, and responsive investments to intersect where attacks are most likely to occur.
There is a natural progression in security maturity. First, organizations invest in preventative measures to stop the impacts of attacks. They quickly realize that not all attacks are being stopped, so they invest in detective mechanisms to identify when an attack successfully bypasses the preventative controls. Armed with alerts of incursions, they must then establish response capabilities to interdict quickly, minimize losses, and guide the environment back to a normal state of operation. All of these resources are important, but they must potentially cover a vast electronic and human ecosystem. That ecosystem simply becomes too large to demand that every square inch be equally protected, updated, monitored, and made recoverable. The amount of resources required would be untenable. The epic failure of the Maginot Line is a great historical example of ineffective overspending.
Prioritization is what is needed to align security resources where they are most advantageous. Part of the process is understanding not only which assets are valuable, but also which are being targeted. As it turns out, the best strategy is not to protect everything from every possible attack. Rather, it is to focus on protecting those important resources which are most likely to be attacked. This is where predictive modeling comes into play. It is all part of a strategic cybersecurity capability.
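As a rough illustration of that prioritization logic, here is a minimal sketch that ranks assets by combining business value with an estimated likelihood of attack. The asset names, values, and likelihoods are hypothetical placeholders, not drawn from any particular tool; in practice they would come from asset inventories and threat intelligence.

```python
# Minimal sketch: rank assets by (business value x estimated attack likelihood).
# All values below are hypothetical placeholders for illustration only.

assets = {
    "customer_database": {"value": 9, "attack_likelihood": 0.8},
    "public_website":    {"value": 5, "attack_likelihood": 0.6},
    "build_server":      {"value": 7, "attack_likelihood": 0.3},
    "test_lab_vm":       {"value": 2, "attack_likelihood": 0.4},
}

def priority(asset: dict) -> float:
    """Simple risk score: how valuable the asset is times how likely it is to be attacked."""
    return asset["value"] * asset["attack_likelihood"]

# Highest-priority assets first: these receive preventative, detective,
# and responsive investments before anything else.
for name, info in sorted(assets.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name:18s} priority={priority(info):.1f}")
```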
“He who defends everything, defends nothing” - Frederick the Great
In short, being able to predict where the most likely attacks will occur provides an advantage in allocating security resources for maximum effect. The right predictive model can be a force-multiplier in adversarial confrontations. Many organizations are designed around the venerable Prevent/Detect/Recover model (or something similar). The descriptions change a bit over the years, but the premise remains the same: a three-part, introspective defensive structure. However, the very best organizations apply analytics and intelligence, including specific aspects of attackers’ methods and objectives, to build Predictive capabilities. This completes the circle with a continuous feedback loop that helps optimize all the other areas. Without it, Prevention attempts to block all possible attacks, and Detection and Response struggle to do the same for the entirety of their domains. That is simply not efficient, and therefore not sustainable over time. With good Predictive capabilities, Prevention can focus on the most likely or riskiest attacks, and the same holds for Detection and Response. Overall, it aligns the security posture to best resist the threats it faces.
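To make that feedback loop a bit more concrete, below is a hedged sketch of one way a Predictive output could steer the other functions: splitting a fixed defensive budget across attack categories in proportion to their predicted likelihood rather than spreading it evenly. The categories and probabilities are illustrative assumptions, not real threat data.

```python
# Hypothetical sketch: weight the split of a fixed security budget by
# predicted attack likelihood instead of spreading it evenly.
# Categories and probabilities are illustrative placeholders only.

predicted_likelihood = {
    "phishing":        0.45,
    "ransomware":      0.30,
    "web_app_exploit": 0.15,
    "insider_misuse":  0.10,
}

def allocate(budget: float, likelihoods: dict) -> dict:
    """Distribute the budget proportionally to predicted likelihood."""
    total = sum(likelihoods.values())
    return {threat: budget * p / total for threat, p in likelihoods.items()}

for threat, amount in allocate(1_000_000, predicted_likelihood).items():
    print(f"{threat:16s} ${amount:,.0f}")
```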
There are many different types of predictive models. Some examples are actuarial learning models, baseline-anomaly analysis, and my favorite, threat intelligence. No one of them is uniformly better than the others; each has strengths and weaknesses. The real world has thousands of years of experience with such models. The practice has been applied to warfare, politics, insurance, and a multitude of other areas. Strategists have great use for such capabilities in understanding the best path forward in a shifting environment.
Actuarial learning models are heavily used in the insurance industry, with predictions based upon historical averages of events. Baseline-anomaly analysis is leveraged in the technology, research, and finance fields to identify outliers in expected performance and time-to-failure. Threat agent intelligence, knowing your adversary, is strongly applied in warfare and other adversarial situations where an intelligent attacker exists. The digital security industry is just coming to see the potential and value. Historically, such models suffered from a lack of data quantity and timeliness. The digital world has both in abundance; so much, in fact, that the quantity itself is a problem to manage. But computer security has a different challenge: the rapid advance of technology leads to a staggering diversity in the avenues attackers can exploit. Environmental stability is a key success criterion for the accuracy of all such models, and it becomes very difficult to maintain a comprehensive analysis in a chaotic environment where very little remains consistent. This is where the power of computing can help offset the complications and apply these concepts to the benefit of cybersecurity.
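As a toy illustration of the baseline-anomaly approach mentioned above, the sketch below builds a simple baseline from historical daily failed-login counts and flags days that deviate sharply from it. The numbers are made up, and a real deployment would use far richer features and more robust statistics.

```python
# Toy sketch of baseline-anomaly analysis: learn a baseline from historical
# daily failed-login counts, then flag new observations that deviate sharply.
# The counts are made up; real systems use richer features and robust statistics.
from statistics import mean, stdev

history = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]  # "normal" days

baseline_mean = mean(history)
baseline_std = stdev(history)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag the observation if it sits more than `threshold` standard deviations from the baseline."""
    z_score = (observation - baseline_mean) / baseline_std
    return abs(z_score) > threshold

for today in (14, 55):
    print(today, "anomalous" if is_anomalous(today) else "within baseline")
```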
There is a reality which must first be addressed. Predictive systems are best suited for environments which have already established a solid infrastructure and baseline capabilities. The maturity of most organizations has not yet evolved to a condition where an investment in predictive analytics is right for them. You can’t run before you walk. Many companies are still struggling with the basics of security and good hygiene (understanding their environment, closing the big attack vectors and vulnerabilities, effective training, regulatory compliance, data management, metrics, etc.). For them, it is better to establish the basics before venturing into enhancement techniques. But for those who are more advanced, capable, and stable, the next logical step may be to optimize the use of their security resources with predictive insights. Although a small number of companies are ready and some are already traveling down this path, I think that over time Managed Security Service Providers (MSSPs) will lead the broader charge toward widespread, cross-vertical market adoption. MSSPs are in a great position to both establish the basics and implement predictive models across the breadth of their clients.
When it comes to building and configuring predictive threat tools, which tap into vast amounts of data, many hold to the belief that data scientists should lead the programs to understand and locate obscure but relevant indicators of threats. I disagree. Data scientists are important for manipulating data and programming the design of search parameters, but they are not experts in understanding what is meaningful and what the systems should be looking for. As such, they tend to get mired in circular correlation-causation assumptions. What can emerge are trends which are statistically interesting yet have no actual relevance, or which are in some cases misleading. As an example, most law enforcement agencies do NOT use pure data-correlation methods for crime prediction, as they can lead to ‘profiling’ and then self-fulfilling prophecies. The models they use are carefully defined by crime experts, not by the data scientists. Non-experts simply lack the knowledge of what to look for and why it might be important. It is really the experienced security or law-enforcement professional who knows what to consider and who should therefore lead the configuration aspects of the design. With the security expert’s insights and the data scientist’s ability to manipulate data, the right analytical search structures can be established. So it must be a partnership between those who know what to look for (the expert) and those who can manipulate the tools to find it (the data scientist).
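As one hedged example of that partnership: the expert names the pattern worth hunting for, say a burst of failed logins followed shortly by a privilege escalation on the same host, and the data scientist encodes it as a search over event data. The event fields, thresholds, and sample records below are hypothetical and not taken from any specific SIEM or product.

```python
# Hypothetical sketch of an expert-defined hunt encoded as a search:
# "several failed logins followed within 10 minutes by a privilege
# escalation on the same host." Fields, thresholds, and events are illustrative.
from datetime import datetime, timedelta

events = [
    {"host": "srv-01", "type": "failed_login",    "time": datetime(2024, 1, 5, 9, 0)},
    {"host": "srv-01", "type": "failed_login",    "time": datetime(2024, 1, 5, 9, 1)},
    {"host": "srv-01", "type": "failed_login",    "time": datetime(2024, 1, 5, 9, 2)},
    {"host": "srv-01", "type": "priv_escalation", "time": datetime(2024, 1, 5, 9, 6)},
    {"host": "srv-02", "type": "priv_escalation", "time": datetime(2024, 1, 5, 9, 7)},
]

def suspicious_hosts(events, min_failures=3, window=timedelta(minutes=10)):
    """Return hosts where at least min_failures failed logins precede a privilege escalation within the window."""
    hits = set()
    for esc in (e for e in events if e["type"] == "priv_escalation"):
        failures = [e for e in events
                    if e["host"] == esc["host"]
                    and e["type"] == "failed_login"
                    and esc["time"] - window <= e["time"] <= esc["time"]]
        if len(failures) >= min_failures:
            hits.add(esc["host"])
    return hits

print(suspicious_hosts(events))  # expected: {'srv-01'}
```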
Expert systems can be tremendously valuable, but they can also be a huge sink of time and resources. Most successful models do best when analyzing simple environments with a reasonable number of factors and a high degree of overall stability. The models for international politics, asymmetric warfare attacks, serial-killer profiling, and the like are far from precise. But being able to predict computer security issues is incredibly valuable and appears attainable. Although much work and learning have yet to be accomplished, the data and processing power are there to support the exercise. I think the cybersecurity domain might be a very good environment for such systems to eventually thrive, delivering better risk management at scale, at lower cost, and with a better overall experience for their beneficiaries.
Twitter: @Matt_Rosenquist
Reader comment (CEO and Founder at Cloud Raxak Inc.): When humans in the security chain do it, we call it (racial) profiling -- and we have seen the limits of that. We need to be careful to avoid those issues in our automated systems.