A Common but Failing Security Framework

In How to Measure Anything in Cybersecurity Risk, authors Douglas W. Hubbard and Richard Seiersen take issue with the heat map, or risk matrix, shown in the graphic above, even though, as they point out, the framework is supported in one form or another by NIST, ISO, MITRE.org, and OWASP. At its best it gives a rough way of dividing risks into three groups.

Part of the critique is grounded in the limitations of the four basic measurement types: Nominal, Ordinal, Interval, and Ratio scales. Nominal scales offer a means of making comparisons but do not support analytics beyond weak observations of correlation: this group tends to like XYZ more than that group, without resolving whether liking XYZ tends to cause membership in the first group or whether membership tends to cause the preference.

Notice that the map includes an Ordinal ranking of the descriptors. The labels “1”, “2”, … “5” would be better represented as “1st”, “2nd”, … “5th” to avoid the implied confusion that “4-Critical” is perhaps twice as impactful as “2-Minor”, or that “4-Likely” is twice as likely as “2-Seldom”. In truth the descriptors are as subjective as the smiley-to-crying faces on a hospital pain scale.

But perhaps risk evaluators are encouraged to think of the 1-to-5 ordinals as percentage-band intervals, with 1 equivalent to “0% up to 20%”, 2 equivalent to “20% up to 40%”, and so on.
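
To make that interval reading concrete, here is a minimal sketch. The 20%-wide bands are an assumption for illustration, not part of any published standard, and only “Seldom” and “Likely” are descriptors taken from the matrix discussed above; the other labels are placeholders.

```python
# Hypothetical reading of the 1-5 Likelihood ordinals as percentage bands.
# The 20%-wide bands are an assumption; only "Seldom" and "Likely" come from
# the matrix discussed above, the other labels are placeholders.
LIKELIHOOD_BANDS = {
    1: (0.00, 0.20),  # e.g. "Rare"
    2: (0.20, 0.40),  # "Seldom"
    3: (0.40, 0.60),  # e.g. "Occasional"
    4: (0.60, 0.80),  # "Likely"
    5: (0.80, 1.00),  # e.g. "Almost certain"
}

def band_for(score: int) -> str:
    low, high = LIKELIHOOD_BANDS[score]
    return f"{low:.0%} up to {high:.0%}"

print(band_for(2))  # -> 20% up to 40%
```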

I characterize this confusion as a “Ratio Fallacy”. One is tempted to ascribe more granularity to the chart by multiplying the Impact score by the Likelihood score, producing risk scores ranging from 1 to 25 instead of simply “Low”, “Medium”, and “High”. One could imagine a CISO using these numbers to allocate a security budget according to the rankings.
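
As a sketch of that temptation, assume a few invented risks scored on the two 1-to-5 scales and a hypothetical budget; the arithmetic below is easy to do, which is exactly what makes the fallacy attractive:

```python
# Illustrating the "Ratio Fallacy": treating ordinal scores as if they were
# ratio measurements and multiplying them into a 1-25 "risk score".
# The risks, scores, and budget below are invented purely for illustration.
risks = {
    "phishing":       {"impact": 2, "likelihood": 4},
    "ransomware":     {"impact": 5, "likelihood": 3},
    "insider misuse": {"impact": 4, "likelihood": 2},
}

budget = 100_000  # hypothetical security budget in dollars

scores = {name: r["impact"] * r["likelihood"] for name, r in risks.items()}
total = sum(scores.values())

# Allocating the budget proportionally to these products looks rigorous but
# inherits all the subjectivity of the underlying ordinal labels.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    share = budget * score / total
    print(f"{name:15s} score={score:2d}  allocation=${share:,.0f}")
```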

Unfortunately, the scoring of “Low”, “Medium”, and “High” is still a Nominal measurement. “Low” is a group of six paired Ordinal ranks; “High” is a group of similar size; the rest are “Medium”.
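
The cut-offs vary from one published matrix to another; the sum-based rule below is an assumption chosen so that the “Low” and “High” corners each contain six of the twenty-five cells, matching the description above:

```python
from collections import Counter

# One assumed way of carving the 5x5 grid into the three nominal buckets;
# real matrices differ, but the corner groups typically hold about six cells each.
def bucket(impact: int, likelihood: int) -> str:
    if impact + likelihood <= 4:
        return "Low"
    if impact + likelihood >= 8:
        return "High"
    return "Medium"

counts = Counter(bucket(i, l) for i in range(1, 6) for l in range(1, 6))
print(counts)  # e.g. Counter({'Medium': 13, 'Low': 6, 'High': 6})
```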

This is the crux of the criticism of the model. If evaluators were instead asked to rate Likelihood on a scale where 0% means “it will not happen” and 100% means “it will certainly happen”, and to rate Impact as a dollar cost ranging from $0 up to the total value of the company, then statistical analytics could be applied to the risk assessments. Furthermore, if the evaluators also provided their confidence level in each Likelihood/Impact estimate, the model would provide a means of rating the reliability of the evaluators over time. Thankfully, Hubbard and Seiersen offer Excel-based tools for just such modelling.
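
Their Excel tools are not reproduced here, but a minimal Monte Carlo sketch in Python, with every probability and dollar figure invented for illustration, shows the kind of model this substitution enables:

```python
import math
import random

# A minimal Monte Carlo sketch (Python, not the authors' Excel tools) of the
# quantitative substitution described above: Likelihood as an annual probability
# of the event, Impact as a 90% confidence interval in dollars.
# Every number below is an invented placeholder.
risks = [
    # (name, annual probability, 90% CI lower bound $, 90% CI upper bound $)
    ("phishing",       0.40,  10_000,   150_000),
    ("ransomware",     0.10,  50_000, 2_000_000),
    ("insider misuse", 0.05, 100_000, 5_000_000),
]

def sample_loss(p: float, lo: float, hi: float) -> float:
    """Draw one year's loss: the event occurs with probability p; if it does,
    the cost is drawn from a lognormal fitted to the 90% CI, a common choice
    for loss magnitudes."""
    if random.random() >= p:
        return 0.0
    mu = (math.log(lo) + math.log(hi)) / 2        # midpoint in log space
    sigma = (math.log(hi) - math.log(lo)) / 3.29  # 3.29 sigmas span a 90% CI
    return random.lognormvariate(mu, sigma)

trials = 10_000
losses = sorted(sum(sample_loss(p, lo, hi) for _, p, lo, hi in risks)
                for _ in range(trials))
print(f"mean annual loss:  ${sum(losses) / trials:,.0f}")
print(f"95th percentile:   ${losses[int(0.95 * trials)]:,.0f}")
```

Because the inputs are probabilities and dollar amounts rather than ordinal labels, the outputs can be compared against what actually happens, which is what makes it possible to track evaluator reliability over time.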

Bill Holmberg

One machine can do the work of fifty ordinary people. No machine can do the work of one extraordinary person.

5y

I'd like a link to those tools :)
