A Quantified Approach to Cybersecurity Risk Management
Steve King, CISM, CISSP
Cybersecurity Marketing and Education Leader | CISM, Direct-to-Human Marketing, CyberTheory
Cybersecurity Risk Assessment should be a hot topic these days. How else can you not only convince your board and management team that you need to do something to protect against cyber-attacks, but also, for once, communicate in a language they understand?
What if Equifax had known that their risk was quantified at more than $1.4B (as in billion), along with a high probability of an ugly event actually happening? Do you think they might have installed that patch? I think they probably would have, because as things stand right now (just over two years later), they might not be in business by the third anniversary.
Cybersecurity risk assessment is used to answer three questions:
1. What can go wrong?
2. What is the probability?
3. How much money is at risk?
There are lots of risk frameworks around that can help answer the first two questions, but there are none that can answer the third.
According to ISO27005, information security risk assessment (ISRA) is “the overall process of risk identification, risk analysis and risk evaluation”. ISRA provides a complete framework for assessing the risk levels of information security assets and is widely used by risk advisors to implement security controls following information security standards and regulations.
The ISRA risk analysis component is divided into three categories: quantitative, qualitative and synthetic.
The quantitative approach constructs complicated mathematical models to try to create metered results, but it depends on difficult-to-collect historical data to support those models, and since the risk landscape now changes daily, historical data is not particularly useful in determining risk.
It has no way to reflect the actual threat activity operating in your environment five minutes ago, a view that might have been useful to Equifax.
The qualitative method collects data through experts' opinions or questionnaires, which is easy to gather but entirely subjective. Measuring the Equifax risk in this manner might not have resulted in even a “high,” let alone “critical,” degree of risk, which may in fact be exactly what happened there.
Synthetic risk analysis methods can arguably overcome some of the limitations of traditional quantitative and qualitative approaches by applying fuzzy logic and Analytic Hierarchy Process (AHP) theory, which at least provides a decision-making model. Unfortunately, synthetic risk models can only use attributes of general information security risk and cannot process specific threats like cyber-attacks. Moreover, the risk scores rendered by the model lack any association with dollar value and are usually presented as an asset risk level of 1 to 5, with an overall aggregated risk score of 1 to 100, all of which is meaningless to a Board used to making risk decisions based on dollars and cents.
Additionally, these subjective synthetic scores are useless for cross-company or cross-industry comparisons.
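To make the mechanics concrete, here is a minimal sketch of how an AHP-style synthetic score is typically produced. The pairwise judgments, the 1-to-5 asset ratings and the row-geometric-mean shortcut for the eigenvector are all illustrative assumptions, not any specific published model:

```python
# A hedged sketch of an AHP-style synthetic risk score. Pairwise judgments
# and ratings below are hypothetical; weights are approximated by normalized
# row geometric means, a common shortcut for the principal eigenvector.
import math

criteria = ["confidentiality", "integrity", "availability"]
# Pairwise comparison matrix: how much more important row is vs. column
# (expert judgment on Saaty's 1-9 scale; values here are assumptions).
pairwise = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

# Geometric mean of each row, then normalize to get criterion weights.
geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
weights = [g / sum(geo_means) for g in geo_means]

# Expert ratings of one asset per criterion on a 1-5 scale (assumption).
ratings = {"confidentiality": 5, "integrity": 4, "availability": 2}

score_1_to_5 = sum(w * ratings[c] for w, c in zip(weights, criteria))
score_1_to_100 = score_1_to_5 * 20  # scale to the 1-100 aggregate style
print(f"weights={[round(w, 2) for w in weights]}, score={score_1_to_100:.0f}")
# Note: nothing in this score maps to dollars, which is the core limitation.
```

The output is a defensible-looking number, but one with no currency attached, which is exactly the problem described above.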
A much better approach would be to use Value-at-risk (VaR) as a foundation. Classical financial risk models like VaR seek a worst-case loss over a specific time horizon. VaR considers the actual dollar values of the assets at risk and, when factored by active threat data, can present a measurable impact of Cybersecurity risk at the very moment of calculation.
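To illustrate the idea, here is a minimal Monte Carlo sketch of VaR over a portfolio of information assets. The asset values, breach probabilities and loss-severity model are hypothetical placeholders, not calibrated figures:

```python
# A minimal sketch of value-at-risk (VaR) applied to cyber loss exposure.
# All figures and distribution parameters are hypothetical placeholders.
import random

def simulate_annual_loss(assets, trials=100_000, seed=42):
    """Monte Carlo simulation of one year of breach losses.

    assets: list of (asset_value_usd, annual_breach_probability) tuples.
    Returns a sorted list of simulated total annual losses.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        total = 0.0
        for value, prob in assets:
            if rng.random() < prob:  # did this asset suffer a breach this year?
                # Loss severity: uniform fraction of asset value (toy model).
                total += value * rng.uniform(0.1, 1.0)
        losses.append(total)
    losses.sort()
    return losses

def value_at_risk(losses, confidence=0.95):
    """Worst-case loss not exceeded with the given confidence."""
    index = int(confidence * len(losses)) - 1
    return losses[index]

# Hypothetical portfolio: a $20M credit card database and a $0 clerical server.
portfolio = [(20_000_000, 0.05), (0, 0.40)]
losses = simulate_annual_loss(portfolio)
print(f"95% one-year VaR: ${value_at_risk(losses):,.0f}")
```

The point is not the toy distribution but the output: a dollar figure tied to a confidence level and a time horizon, which a board can actually act on.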
The actual dollar value of an information asset is easily determined, though it will in part be derived through subjective analysis. For example, the customer PII held by Equifax has a dollar value determined by the cost of replacing the lost data as well as the churn, which is the number of customers lost due to the breach. Ponemon studies show that companies with data breaches involving fewer than 10,000 records spent an average of $4.5 million to resolve the breach, while companies with a loss or theft of more than 50,000 records spent an average of $10.3 million. These values can be usefully applied.
These values also have to be factored with forensic and investigative activities; assessment and audit services; crisis team management; and the post-breach costs, which include the cost to notify victims of the breach, help desk activities, inbound communications, special investigative activities, remediation, legal expenses, product discounts, identity protection services, regulatory interventions, compliance failures, the cost of Cybersecurity consultants and the cost of resolving lawsuits. This last category, in the case of Equifax, may be the heaviest straw of all, as we now have over 400 individual class action suits filed and pending.
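As a rough sketch of how the tiered averages cited above and these line items could be combined into a single dollar figure (the mid-tier interpolation and every line item below are my own illustrative assumptions, not figures from the studies):

```python
# A hedged sketch of breach-cost estimation using the tiered averages cited
# above (Ponemon: ~$4.5M under 10,000 records, ~$10.3M over 50,000 records).
# The interpolation for mid-sized breaches and all line items are assumptions.

def base_breach_cost(records_exposed: int) -> float:
    """Tiered average resolution cost keyed to breach size."""
    if records_exposed < 10_000:
        return 4_500_000.0
    if records_exposed > 50_000:
        return 10_300_000.0
    # Linear interpolation between the two published tiers (assumption).
    fraction = (records_exposed - 10_000) / 40_000
    return 4_500_000.0 + fraction * (10_300_000.0 - 4_500_000.0)

def total_asset_risk_value(records_exposed: int, post_breach_costs: dict) -> float:
    """Base cost plus itemized post-breach costs (forensics, notification,
    legal, identity protection, regulatory penalties, ...)."""
    return base_breach_cost(records_exposed) + sum(post_breach_costs.values())

# Hypothetical line items for illustration only.
line_items = {
    "forensics_and_audit": 750_000,
    "victim_notification": 1_200_000,
    "legal_and_settlements": 5_000_000,
    "identity_protection": 2_000_000,
}
print(f"Risk value: ${total_asset_risk_value(60_000, line_items):,.0f}")
```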
Factoring in the threat activity may increase or decrease the risk value.
As an example of a very real risk scenario, a well-secured credit card database server reveals low vulnerability under examination by network monitoring systems, while a minimally secured clerical support server registers a high level of vulnerability probes. Conventional SIEM platforms that use rules-based engines to evaluate syslog data would alert on the vulnerability of the exposed clerical server. The information assets processed through the credit card server are risk-valued at $20 million (the costs as defined above), while the information assets processed through the clerical server are valued at zero (as they are largely Word documents and spreadsheets).
It is obvious to the SIEM that the clerical server is at risk, but because the SIEM makes no contextual correlation with the value of the assets processed or residing on each server, it will ignore the credit card server, treating it as simply another network asset of equal value.
Additionally, the SIEM will fail to recognize that the clerical server provides a path to the credit card server and thus creates substantially increased risk for the high value server even though that device is not registering attack-related activity.
A SIEM alert here will not address the actual threat to the asset at risk, and consequently the management of the company and those directly responsible for the assets will remain unaware that their overall Cyber-risk has increased dramatically.
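A sketch of the contextual correlation the SIEM is missing might look like the following; the topology, probe counts and the 0.5 path-transfer factor are all hypothetical stand-ins for the scenario just described:

```python
# A sketch of the contextual correlation a rules-only SIEM misses: weight
# each alert by the dollar value of the assets on the node, and propagate
# risk along network paths. All values and counts are hypothetical.

asset_value = {"cc_db_server": 20_000_000, "clerical_server": 0}
# Observed probe/vulnerability activity per node (toy counts).
threat_activity = {"cc_db_server": 1, "clerical_server": 40}
# Directed edges: a compromised source can reach the destination.
network_paths = [("clerical_server", "cc_db_server")]

def contextual_risk(node: str) -> float:
    """Dollar-weighted risk: direct activity on the node plus discounted
    activity on upstream nodes that have a path to it."""
    risk = threat_activity[node] * asset_value[node]
    for src, dst in network_paths:
        if dst == node:
            # Upstream threat partially transfers (0.5 is an assumed factor).
            risk += 0.5 * threat_activity[src] * asset_value[node]
    return risk

for node in asset_value:
    print(node, f"${contextual_risk(node):,.0f}")
# The clerical server scores $0 despite 40 probes; the credit card server
# scores high because the clerical server provides a path to it.
```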
The risk-engine that I suggest as an alternative could be constructed from a combination of the VaR of each and every information asset (the portfolio), factored by the aggregated and correlated threat data that is active in the IT environment at every moment throughout the day. All of that data exists today in most enterprise environments, and it can be collected and processed in real-time, so the VaR can be continually updated to reflect actual conditions on the ground and the risk-engine can automatically assess the worst-case loss of the portfolio due to a breach.
Resulting alerts could be sent to the CFO in quantitative terms, identifying the specific assets at risk, the specific computing infrastructure upon which those assets reside, the exact nature of the threat and the precise dollar value of the assets at risk, while the same information could be sent to the security and recovery teams in real-time, translated into terms that are useful for their function: database, system, threat, network node and server names and identities.
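A minimal sketch of that dual-audience alert, with hypothetical field names and event data:

```python
# A minimal sketch of the dual-audience alert described above: the same
# risk event rendered in dollars for the CFO and in operational detail for
# the security team. All field names and the event itself are hypothetical.

from dataclasses import dataclass

@dataclass
class RiskEvent:
    asset_name: str           # e.g., "customer PII database"
    host: str                 # infrastructure the asset resides on
    threat: str               # nature of the active threat
    value_at_risk_usd: float  # dollar value currently exposed

def cfo_alert(event: RiskEvent) -> str:
    return (f"ALERT: ${event.value_at_risk_usd:,.0f} at risk. "
            f"Asset '{event.asset_name}' on {event.host} is under "
            f"active threat: {event.threat}.")

def soc_alert(event: RiskEvent) -> str:
    return (f"host={event.host} asset={event.asset_name} "
            f"threat={event.threat} var_usd={event.value_at_risk_usd:.0f}")

event = RiskEvent("customer PII database", "db-prod-07",
                  "unpatched web framework under active exploitation",
                  20_000_000)
print(cfo_alert(event))
print(soc_alert(event))
```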
By assessing risk in actual dollar value combined with real threat data, the responsible custodians of an organization's risk would have a no-nonsense basis for making decisions about their Cybersecurity investments and improving their defense systems, while transferring appropriate portions of that risk through increased Cyber-insurance.
Either way, the IT executive who has been used to asking for $1 million to reduce her risk from “high” to “medium” could now substitute real money for those risk-level differentials and instead ask for $1 million to reduce risk “by $10 million”, and she would also be able to actually prove it.
Accomplished Cybersecurity GRC Leader, Industry Expert & Advisor (Banking & Finance), Cybersecurity Researcher & Mentor (4 years ago):
Good article. Assigning a dollar value (VaR) to information at the record level is always a challenge. Perhaps leaning on an information security classification of data elements for Confidentiality, Integrity and Availability (CIA) could be a good starting point: the higher the score among these three information security objectives, the higher the assumed VaR values could be. The trick lies in the model / algorithm used to translate qualitative cyber risk values into financial quantitative values. For example, what CIA score should be tagged to a customer credit information record contained in a document repository, as against the CIA score tagged to an employee payroll record contained in a payroll application? Thus, information categorization is expected to play an important role in arriving at the information's VaR.
vCISO (4 years ago):
How do you assign VaR to an asset in a containerized, infrastructure-as-code environment? Many data assets are "hidden" or wrapped within applications and databases themselves: even monitoring access (watching so-called audit/access logs) is a challenge on its own. Your article is thought-provoking indeed!
vCISO (4 years ago):
VaR is a very nice idea; however, "All of that data exists today in most enterprise environments" is a wild assumption: I have not seen a CMDB that is THAT detailed and up-to-date. Not to criticize, but to point out yet again that security people can come up with brilliant ideas all they want, but data custodians and IT always have the last word, unless the IT platform itself has cyber risk controls embedded in its core and an interface sophisticated enough to encourage users to provide correct value data, which itself has to be calculated. You can at least try to tag assets by criticality, but a quantitative value would have to include infrastructure and data-flow dependencies; infrastructure components themselves might have significant value of their own due to the number of valuable assets that depend on them. Another missing link is the integration between SIEM and CMDB: threat intel is being fed into modern SIEMs to help recognize attacks, but not for risk calculation. To summarize, your article helps identify crucial gaps in technology and approaches; it is just overly optimistic about addressing them.
Automate Risk & Compliance | let's talk (4 years ago):
The challenge we see, though, is to collect all the information assets and classify them according to their importance, and, importantly, to do this at scale!
Foreseeing the unhackable future / architect operational zero trust 2012 / risk management / proactive / Supply Chain Protection (NIS2, DORA, CRA) | BI architect | Security Innovator (4 years ago):
I like the approach but want to add an important factor. Even once the risk is quantified, an attacker first needs a breach and then a C&C server; what if your design has been hardened? Also, do you host the data on-prem, or, like Equifax, on a third-party server in that wonderful clown (uh, cloud) environment with little to no protection, as you wrote? Cut the snake off at the head, not the tail, and that boils down to good aim and to connectivity and interface hardening. Do you think a customer gets a say in how this is done on hosted servers? No, so that data is and will be at risk, as almost all Americans experienced when their PII was exposed. For Europe it's not too late #time4achange, though Facebook just threw another nice data-leakage punch at its subscribers: 133 million records on U.S.-based Facebook users, 18 million records of users in the U.K., and more than 50 million records on users in Vietnam. And just now, again: "Surprise: Facebook just leaked data from more than 267 million US users" - https://bgr.com/2019/12/20/facebook-leak-267-million-users-data-exposed-researcher/