Risk has Three dimensions, not Two. Now what?

My last LinkedIn article promoted the idea of thinking in multiple dimensions, rather than "flat" two-axis graphs, when we try to discuss the most fundamental features of a potentially complex system. My point in that article was that these attributes fit nicely within the idea of a "sample space". If we can make this switch, we quickly set ourselves up to adopt a much richer picture of the way that many different systems actually work, and that leads in a very simple way to a much more thorough understanding of them.


Less than a week later, my LinkedIn colleague Mark F. Witcher gave me a new example of this, one that folds in perfectly with the idea and at the same time taught me something very fundamental about a richer and more useful approach to studying and discussing risk.


For quite some time, we have all been stuck with a pretty simple two-factor model for discussing or assessing the size and significance of any risk. According to this very popular meme, we just need to consider two attributes: the probability of any "bad" event, and its cost if it actually occurs.


Mark has been steadily preaching a slightly more sophisticated approach, one to which I have finally awakened. Let me lift it right from his LinkedIn post:


“As stated in #ICHQ9, most people believe #risk has two attributes – severity and probability of occurrence. #ISO31000 states that risk is the effect of uncertainty on objectives. While both are true, they are overly simplistic to the point of being misleading. Risk really has three attributes that look like two. The three attributes of a risk are severity of impact, probability of the impact occurring (likelihood), and evaluator uncertainty.”


Superb! To my eyes, under the older approach we would have represented these two attributes as two dimensions. Assessing their interaction to produce some manageable risk metric means considering both attributes simultaneously. For example, if some "bad" event has a probability of 1% (1 chance in 100) per year and a cost of 100,000 somethings IF it does occur this year, then we can "amortize" that risk at 1,000 somethings for every year that we do business.
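
A minimal sketch of that amortization, in Python; the function name is mine, and the numbers simply mirror the example above:

```python
# The classic two-attribute "amortized" risk metric: probability times cost.

def expected_annual_loss(probability_per_year: float, cost_if_it_occurs: float) -> float:
    """Amortize a risk as annual probability times cost if it occurs."""
    return probability_per_year * cost_if_it_occurs

# 1% chance per year, 100,000 "somethings" if it happens:
print(expected_annual_loss(0.01, 100_000))  # -> 1000.0 somethings per year
```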


I know that I have probably already managed to offend large swaths of very competent risk professionals by oversimplifying, and for that I am sorry. I can offer only a single excuse: I mention this example because Mark clearly picks out a third, very fundamental attribute: "evaluator uncertainty".


If we have already switched over to a “sample space” framework, considering Mark’s third attribute should cause us no additional headaches whatsoever.
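
Here is one minimal sketch of what that third dimension might look like in practice. Treating evaluator uncertainty as a band around the likelihood estimate is my own illustrative assumption, not a quote from Mark's framework; the class and field names are mine:

```python
# Three attributes instead of two: severity, likelihood, and evaluator
# uncertainty. The uncertainty widens the single amortized number into a range.

from dataclasses import dataclass

@dataclass
class Risk:
    severity: float     # cost if the event occurs
    likelihood: float   # best estimate of the annual probability
    uncertainty: float  # evaluator uncertainty, as +/- on the likelihood

    def amortized_range(self) -> tuple[float, float]:
        """Expected annual loss as an interval instead of a single point."""
        low = max(self.likelihood - self.uncertainty, 0.0) * self.severity
        high = min(self.likelihood + self.uncertainty, 1.0) * self.severity
        return (low, high)

# Same event as above, but the evaluator only trusts the 1% figure to +/- 0.5%:
print(Risk(severity=100_000, likelihood=0.01, uncertainty=0.005).amortized_range())
# -> (500.0, 1500.0): the single 1,000-per-year figure hides a threefold spread
```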


But first, we should ask ourselves whether this additional attribute is really widespread, and maybe even universal. If it isn't, tossing it into the middle of a risk assessment will only complicate and muddy a fairly straightforward risk estimate.


On the contrary, the "evaluator uncertainty" component is so widespread that its very universality causes us to sometimes miss this attribute altogether. I am very indebted to Mark for picturing this factor in exactly this way, because I finally understand what he has been saying for some time now. The problem that Mark (and I, because I certainly agree with him) are wrestling with is that "evaluator uncertainty" is so universal that it inevitably turns up in situations in which, for completely different reasons, we feel free to ignore it.


As I have stated elsewhere many times, this "evaluator uncertainty" is a natural outcome of any and all measurement, and it exists across a continuum as something we can, or sometimes should, ignore. When we make a right-hand turn in our car on a city street, we can never have a perfect idea of how close we are to the first object on the right-hand side of the car. Instead, we have a practical driving approach that almost always allows us to ignore this uncertainty. It is always there, but were we to decide to combat it practically, we would have to stop the car, get out, and make some kind of measurement with a traceable device that we had loaded into the car in anticipation of this turn. These steps would 1) only encapsulate, not eliminate, our "uncertainty", 2) quite possibly cause a "rear-ender", and 3) definitely slow down our voyage.


When can we safely ignore "evaluator uncertainty", and when are we well advised to pay very careful attention to it? Mark cannot possibly answer that question, nor can any other risk professional, nor can the smartest "AI" algorithm. It is a waste of time for any of us humans to try to wiggle out of that responsibility. Even an infinite number of dimensions will not help us dodge this problem; it only offers the flexibility we will need when we try to represent and evaluate the landscape within which we must find and absorb all risks.



Thanks once again to Mark F. Witcher, Ph.D.! Check him out on LinkedIn. And speaking of risks, Microsoft Word(TM) claims that there is no risk (0%) of you reading any passive voice in this text.

Dr Patrick Druggan

Helping others build a better future - faster

3y

Why three dimensions? P and U are additive. They total to 1.

Saket Yeotikar

GxP | Validations | Risk Management | Process Automation | Compliance | 21 CFR Part 11 | 21 CFR Part 820 (QSR) | ISO 13485

3y

Interesting facts.

Stephen Puryear

"Came to Believe"

3y

When we consider the added risk imposed by any and all "observers", may I propose that we divide this risk into two parts? "Bias" is an intrinsic piece that we always see with any human participation. But "observing" also drags in some objective error elements, because measurement assumes the use of a measurement standard, and every use of this or any other standard imports another set of errors. If we have already analyzed the characteristics of the likely errors of these standards, then I propose that we label these as "objective errors" rather than biases. This has been a very eye-opening discussion for me, and I am grateful to the other participants for helping me inch myself forward!

Dave Bonds

Question everything

3y

I, too, was struck by that post, and yours. One common source of error I've encountered is quantization error. Some risk management tools I've seen have a two-dimensional matrix with five degrees each of probability and impact. A move of one click is often enough to change the assessment from red to yellow, or yellow to green, and the result is used to decide what actions are warranted. This gives rise to avoidance bias: when the observer is the action-item owner, there is often pressure to discount probability or impact, since risks deemed high generally require reporting and periodic statusing up the chain.
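
A minimal sketch of this quantization effect, in Python; the scoring rule (probability level times impact level) and the traffic-light thresholds are illustrative assumptions, not taken from any particular tool:

```python
# A toy 5x5 risk matrix: one click on either axis can flip the color.

def rating(probability_level: int, impact_level: int) -> str:
    """Map 1-5 probability and impact levels to a traffic-light rating."""
    score = probability_level * impact_level
    if score >= 15:
        return "red"
    if score >= 6:
        return "yellow"
    return "green"

print(rating(3, 5))  # -> 'red'
print(rating(3, 4))  # -> 'yellow': one click on impact changes the rating
```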

Theo Hafkenscheid

Air quality monitoring/QAQC/Metrology/Humour/Mental health

3y

Does this relate in any way to "observer bias"?
