Is Cyber Risk Quantification (CRQ) Baloney?
Cybersecurity practitioners and risk managers are increasingly adopting Cyber Risk Quantification (CRQ) to assign financial values to cyber risks and support decision-making. However, as reliance on complex formulas, assumptions, and tooling grows, CRQ is not without controversy.
Applying Carl Sagan’s “Baloney Detection Kit,” a set of cognitive tools for distinguishing sound reasoning from fallacy, we assess whether CRQ delivers genuine value or risks becoming another example of pseudo-precision in cybersecurity.
Let’s dive into Sagan’s framework and apply his principles step-by-step to CRQ.
Carl Sagan’s Baloney Detection Kit: A Brief Overview
Sagan’s Baloney Detection Kit consists of principles for evaluating arguments or claims objectively. The tools most relevant to CRQ include:
- Seek independent confirmation of the “facts.”
- Spin more than one hypothesis and weigh the alternatives.
- Don’t lean on arguments from authority alone.
- Test assumptions rigorously and quantify where you can.
- Ask whether the claim is falsifiable, at least in principle.
- Apply Occam’s Razor: prefer the simpler of two explanations that fit equally well.
- Check that results can be reproduced.
- Stay alert to common logical fallacies.
With these tools in hand, let’s analyze CRQ.
Applying Sagan’s Baloney Detection Principles to CRQ
1. Independent Confirmation of Facts
CRQ frameworks like FAIR (Factor Analysis of Information Risk) depend on internal and external data to quantify risk. However:
- Breach and loss data are sparse, unevenly reported, and often drawn from proprietary vendor datasets that cannot be independently verified.
- Internal incident histories are usually too short to validate frequency estimates for rare, high-impact events.
- Many inputs ultimately rest on expert judgment rather than independently confirmed measurement.
Conclusion: CRQ often lacks independently confirmed data, which may compromise its reliability.
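To make the data dependence concrete, here is a minimal sketch of the kind of calculation a FAIR-style CRQ tool performs: a Monte Carlo simulation that combines an assumed loss event frequency with an assumed loss magnitude distribution to produce an annualized loss estimate. Every parameter below is a hypothetical placeholder, not a calibrated figure.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so this illustrative run repeats

# Hypothetical, uncalibrated inputs -- in practice these come from sparse
# breach data, vendor datasets, or expert judgment (the weak link above).
EVENTS_PER_YEAR = 0.8      # assumed loss event frequency
LOSS_MEDIAN = 250_000      # assumed median loss per event, in USD
LOSS_SIGMA = 1.0           # assumed spread of a lognormal loss-magnitude model
YEARS = 100_000            # number of simulated years

def simulate_annual_losses(years: int) -> np.ndarray:
    """Draw an event count for each simulated year, then a loss per event."""
    event_counts = rng.poisson(EVENTS_PER_YEAR, size=years)
    losses = np.zeros(years)
    for i, n in enumerate(event_counts):
        if n:
            losses[i] = rng.lognormal(np.log(LOSS_MEDIAN), LOSS_SIGMA, n).sum()
    return losses

annual = simulate_annual_losses(YEARS)
print(f"Mean annualized loss:  ${annual.mean():,.0f}")
print(f"95th percentile loss:  ${np.percentile(annual, 95):,.0f}")
```

Every figure the simulation prints is only as trustworthy as the frequency and magnitude inputs, which is precisely where independent confirmation tends to be thinnest.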
2. Encouraging Multiple Working Hypotheses
Cyber risks are complex, involving many threat vectors, assets, and consequences. CRQ frameworks typically reduce this complexity to a single monetary figure representing expected loss, which invites decision-makers to treat one model of the threat landscape as the answer rather than weighing competing scenarios side by side.
Conclusion: CRQ may suffer from reductionism, crowding out alternative ways of evaluating risk.
3. Avoiding Reliance on Authority Alone
CRQ tools and models are often presented as objective science. However, organizations may adopt them primarily because industry experts or vendors endorse them.
Conclusion: Heavy reliance on authority could reduce the critical evaluation of CRQ’s claims.
4. Testing Assumptions Rigorously
Many CRQ models depend on assumptions like:
- historical loss event frequencies are a good guide to future frequencies;
- loss magnitudes follow a chosen distribution (often lognormal);
- expert-calibrated ranges genuinely capture the underlying uncertainty;
- the threat landscape stays stable over the forecast horizon.
Conclusion: CRQ rests on assumptions that are hard to test, and it often lacks scientific rigor; the sketch below shows how sensitive the output is to just one of them.
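To see why these assumptions matter, this hedged sketch (hypothetical values throughout) reuses the Monte Carlo idea from earlier and sweeps only the assumed event frequency.

```python
import numpy as np

rng = np.random.default_rng(7)

LOSS_MEDIAN = 250_000   # assumed median loss per event (hypothetical)
LOSS_SIGMA = 1.0        # assumed lognormal spread (hypothetical)
YEARS = 50_000          # simulated years per frequency assumption

def mean_annual_loss(events_per_year: float) -> float:
    """Mean simulated annual loss under one assumed event frequency."""
    counts = rng.poisson(events_per_year, size=YEARS)
    total = sum(rng.lognormal(np.log(LOSS_MEDIAN), LOSS_SIGMA, n).sum()
                for n in counts if n)
    return total / YEARS

# Sweep the frequency assumption and watch the "answer" move with it.
for freq in (0.2, 0.5, 1.0, 2.0):
    print(f"Assumed {freq:.1f} events/year -> mean annual loss ${mean_annual_loss(freq):,.0f}")
```

When one untestable input can swing the estimate by an order of magnitude, the assumption deserves far more scrutiny than the output’s apparent precision suggests.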
5. Falsifiability
Falsifiability means that a claim must be testable and capable of being disproved. Can CRQ models be proven wrong? Because their outputs are probabilistic forecasts of rare events over long horizons, a quiet year neither confirms nor refutes the model, and a bad year can always be attributed to bad luck rather than a bad model.
Conclusion: CRQ faces limited falsifiability, making it hard to assess its true predictive power.
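One partial remedy is backtesting: record the model’s predicted probability of exceeding a loss threshold each year, then compare the expected and observed exceedance counts. The sketch below uses entirely hypothetical predictions and outcomes.

```python
# Hypothetical backtest: each entry is (predicted probability that annual
# losses exceed $1M, whether losses actually exceeded $1M that year).
history = [
    (0.10, False),
    (0.15, False),
    (0.12, True),
    (0.08, False),
    (0.20, False),
]

expected = sum(p for p, _ in history)            # expected exceedance years
observed = sum(1 for _, hit in history if hit)   # actual exceedance years

print(f"Model expected ~{expected:.1f} exceedance years; observed {observed}.")
# With so few observations of rare events, even a large gap between these two
# numbers is weak evidence either way -- which is exactly the falsifiability problem.
```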
6. Occam’s Razor: Are CRQ Models Overly Complex?
CRQ frameworks often layer Monte Carlo simulations and calibrated probability distributions on top of inputs that are largely judgment-based, making the models opaque to many decision-makers. If a simpler qualitative assessment would lead to the same decisions, the added machinery buys little.
Conclusion: CRQ may violate Occam’s Razor by introducing complexity that does not improve decision-making.
7. Testing with Reproducible Results
For a model to be scientific, results should be reproducible under similar conditions. However:
- Two analysts modeling the same scenario with the same framework frequently arrive at very different figures, because the inputs are subjective estimates (see the sketch below).
- Results can shift materially when ranges are tweaked slightly or a different vendor tool is used.
- There is rarely an agreed benchmark against which competing estimates can be checked.
Conclusion: CRQ struggles to achieve reproducibility, raising doubts about its scientific reliability.
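The reproducibility problem usually lives in the human inputs rather than the arithmetic. This hedged sketch runs the same model twice with two analysts’ hypothetical calibrated estimates for the same scenario.

```python
import numpy as np

def mean_annual_loss(freq: float, loss_median: float, loss_sigma: float,
                     years: int = 50_000, seed: int = 1) -> float:
    """Same model, same code path -- only the analyst-supplied inputs differ."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(freq, size=years)
    total = sum(rng.lognormal(np.log(loss_median), loss_sigma, n).sum()
                for n in counts if n)
    return total / years

# Two analysts quantify the *same* ransomware scenario (hypothetical inputs).
analyst_a = mean_annual_loss(freq=0.5, loss_median=200_000, loss_sigma=0.8)
analyst_b = mean_annual_loss(freq=1.2, loss_median=600_000, loss_sigma=1.2)

print(f"Analyst A: ${analyst_a:,.0f}   Analyst B: ${analyst_b:,.0f}")
print(f"Same scenario, roughly {analyst_b / analyst_a:.0f}x apart")
```

Fixing the random seed makes a single run repeatable, but it does nothing to reconcile the subjective ranges that drive the gap.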
8. Cautiousness About Logical Fallacies
Several logical fallacies can creep into CRQ:
- False precision: a single dollar figure (or a narrow range) implies more certainty than the underlying estimates support.
- Appeal to authority: “the model says so” can end debates that deserve scrutiny.
- Post hoc reasoning: the absence of a major incident is taken as proof that the quantified risk was right.
- Anchoring: early numbers, however rough, shape every later discussion.
Conclusion: CRQ is prone to logical fallacies that may distort the real picture of cyber risk.
So, Is Cyber Risk Quantification Baloney?
Based on the application of Carl Sagan’s Baloney Detection Kit, CRQ exhibits some signs of pseudo-precision and over-reliance on assumptions. However, it is not entirely “baloney.” Here’s the balanced takeaway:
- What CRQ does well: it forces assumptions into the open, gives security and business leaders a shared financial language, and supports cost-benefit comparisons between controls.
- Where it falls short: sparse data, subjective inputs, limited falsifiability, and the temptation to mistake precise-looking numbers for precise knowledge.
- How to use it: as one input among several, with assumptions documented, stress-tested, and revisited as new data arrives.
Final Verdict
Cyber Risk Quantification is not pure baloney, but it should not be taken as gospel either. Judged against Sagan’s principles, CRQ is useful as one tool among many, provided decision-makers stay vigilant about its assumptions, limitations, and possible fallacies, and keep a critical mindset to avoid being misled by the illusion of precision.
CRQ works best when balanced with other frameworks and continuously tested and validated. In short, CRQ is only as good as the critical thinking we apply when using it.