First Principles
Aristotle wrote of first principles (translated from Greek, of course): "In every systematic inquiry (methodos) where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these; for we think we know something just in case we acquire knowledge of the primary causes, the primary first principles, all the way to the elements." (from Physics)
What are these in systems security engineering? Or, if you prefer (as I usually do), what are these for security in systems engineering?
My "hypotheses"? Keep reading.
Previously on ...
To review, last week's article (First Principle - well, not yet | LinkedIn) looked at the fundamental problem of systems security - or, flipped, its primary desired outcome or result. Bottom line: the expectation that a system maximizes the time it operates without leading to a state in which there is an unacceptable loss of assets (positive statement), or the expectation that a system minimizes states in which there is an unacceptable loss of assets (negative form). Richard Massey commented with a nice alternative: a system efficiently maximizes the time it operates without leading to a state in which there is an unacceptable loss of assets.
I like that addition of "efficiently".
Four First Principles
In unpacking what it takes to achieve this, and using first-principles thinking to push back to a more axiomatic set of statements, my thought is there are four. Arguably there are two, and until recently I would have said two, but I think we have to account for the human/user, and for the fact that system security is always competing for resources with all the other objectives of the system.
First of these - complete access mediation. Ideally, everything that happens, happens for an authorized purpose. Every breach, every system loss involving an adversary, traces to an unauthorized use or access - or to a use or access that was permitted but should not have been.
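As a minimal sketch of the idea (my own illustration, not from any particular standard - all the names and the toy policy are hypothetical), complete access mediation means every request reaches a resource through a single checkpoint, with no path around it:

```python
# Hypothetical sketch: a reference monitor mediates every access.
# The policy, subjects, and resources below are invented for illustration.

AUTHORIZED = {
    ("alice", "read", "report.pdf"),
    ("alice", "write", "report.pdf"),
    ("bob", "read", "report.pdf"),
}

def reference_monitor(subject: str, action: str, resource: str) -> bool:
    """Permit only (subject, action, resource) triples explicitly authorized."""
    return (subject, action, resource) in AUTHORIZED

def access(subject: str, action: str, resource: str) -> str:
    # The ONLY path to a resource. "Complete" mediation means no code path
    # exists that reaches the resource without passing this check.
    if not reference_monitor(subject, action, resource):
        raise PermissionError(f"{subject} may not {action} {resource}")
    return f"{subject} performed {action} on {resource}"
```

The point of the sketch is structural: if even one access path bypasses `access()`, the principle is violated no matter how good the policy is.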
Second - system control. The system performs only intended actions, with intended behaviors and intended outcomes. The system needs to be engineered to its purpose so that it does only what is intended, through the use of passive (preferred) and active features.
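One way to picture the passive/active distinction is a default-deny dispatch table - this is my own hypothetical sketch, with invented action names, not a prescribed design:

```python
# Hypothetical sketch: the system exposes only intended actions.

def shutdown() -> str:
    return "shutting down"

def report_status() -> str:
    return "status: nominal"

# Passive feature: the dispatch table itself is the control. Actions
# absent from it simply cannot be invoked through this interface.
INTENDED_ACTIONS = {
    "shutdown": shutdown,
    "status": report_status,
}

def perform(action: str) -> str:
    try:
        handler = INTENDED_ACTIONS[action]
    except KeyError:
        # Active feature: detect and reject an unintended request.
        raise ValueError(f"unintended action rejected: {action}")
    return handler()
```

The passive feature (the table) makes unintended behavior structurally impossible through this interface; the active feature (the rejection) detects and responds when something tries anyway.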
Third - commensuration. I am still debating what to call this - maybe "efficiency" or "proportionality" or something else. It could be stated as "the right amount of security" or any number of ways. The principle is that the effort and resources spent need to be commensurate with the consequences of loss and its effect. In NIST SP 800-160 Volume 1 Revision 1, Appendix E (the design principles), you see this as a theme behind a number of the principles: commensurate rigor, commensurate protection, commensurate response, commensurate trustworthiness, and so on.
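One simple, hedged way to make "commensurate" concrete is the classic annualized-loss-expectancy comparison: spend on a control only when it averts more loss than it costs. The numbers and function names below are illustrative assumptions, not part of NIST SP 800-160:

```python
# Hypothetical sketch of commensurate security spend, using the common
# annualized loss expectancy (ALE) idea. All figures are made up.

def annualized_loss(rate_per_year: float, impact: float) -> float:
    """Expected annual loss: how often an event occurs times what it costs."""
    return rate_per_year * impact

def is_commensurate(control_cost: float, loss_before: float, loss_after: float) -> bool:
    """A control is commensurate when it averts more loss than it costs."""
    return (loss_before - loss_after) > control_cost

# Example: an event expected twice a year at $50k per occurrence,
# reduced by a control to once every five years.
before = annualized_loss(2.0, 50_000)   # 100000.0
after = annualized_loss(0.2, 50_000)    # 10000.0
print(is_commensurate(30_000, before, after))  # True: averts $90k for $30k
```

This is a deliberately crude model - real consequence analysis covers more than dollar loss - but it captures the principle's direction: the check runs both ways, so it also flags over-spending on low-consequence assets.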
Finally - usability. This has many facets. The security should be non-interfering and non-invasive to the extent practicable (or "commensurate"?). The security approach or model should be intuitive to human users. Moreover, the secure intended use of the system should be the natural one, or feel natural, to the user; for the security-aware user, it should fit their mental model of secure use, complemented by training and documentation that help guide them. It should not have the user looking for workarounds that (unintentionally?) defeat the security so they can improve their efficiency (e.g., it should not prompt them to put their password on a sticky note next to the keyboard).
Closing Thoughts
To keep it short, I kept this to the bottom line. So if there are questions about the why of something, ask away; short answers I'll respond to here, and longer answers may be addressed in future articles - or in my book, which I expect to be out in 2024.
And of course, if you think there is something that doesn't trace to these four - something missing - comment. If you think these four collapse to fewer, comment; this is a case where fewer is better, one ring to rule them all, for the Tolkien fans.
Unless otherwise stated, all views expressed are mine and don’t necessarily reflect those of my employer or MITRE sponsors.
Comments

Value creation = f(People, Training, Process, Tool, Data). I think in systems and future effects, applying competencies @ INCOSE & Cummins Inc.
(1 year ago) Mark W., the left-most options in the chart (about unacceptable behaviors) are not as impactful (to me as an individual) as the loss of assets or unauthorized outcome; however, I appreciate the completeness of showing "stick" along with "carrot" approaches. When looking at modifying/guiding human behavior, what have you seen that works best in getting to the desired outcomes of a safe, secure, etc. system through each stage of the dust-to-dust lifecycle?

Thinking systems, designing systems
(1 year ago) Where's the image taken from?

Defending high value targets against disruptive cyber attacks - SABSA TOGAF CEH GCED GRTP ISO27k ISO22k EnCase CISM CGEIT Lean MoR
(1 year ago) Very nice way of putting it, Mark. It reminds me of some of the concepts of failure mode and fault analysis techniques in Lean Six Sigma (https://asq.org/quality-resources/fmea). It'd be great to have an example with a class of assets (e.g., enterprise directories, network security assets, etc.) and show these conditions in your diagram. Just a thought.

Lead Information Security Systems Engineer / Deputy Product Security CoE Lead: L3Harris Space and Airborne Systems
(1 year ago) Very insightful!