Welcome Skepticism in an Uncertain World (Part 2)

Foreword

In the article below, you will find my review of the second part of the book “The Failure of Risk Management: Why It’s Broken, and How to Fix It” by Douglas W. Hubbard, 1st edition (2009), John Wiley & Sons.

Should you have missed it, I reviewed the first part here: Welcome Skepticism in an Uncertain World (Part 1). The third and last part is published here: Welcome Skepticism in an Uncertain World (Part 3).

“Part Two – Why It’s Broken”

Summary

In this bulky second part of his book (more than half of it), Douglas W. Hubbard elaborates further on the failure of Risk Management. He addresses five specific challenges that he believes are responsible for its broken state, essentially from a Risk Assessment perspective. Note that he addresses two additional challenges in the third part of the book, where he presents his proposed solution to fix Risk Management (we’ll look at those when reviewing the third part).

The Four Horsemen of Risk Management. As an introduction to a demonstration meticulously backed by scientific research, the author sets the scene with a sharp categorisation and detailed description of what he sees as the four major actors at stake when discussing Risk Management: “(1) the Actuaries, (2) the War Quants, (3) the Economists and (4) the Management consultants”. Anyone who reads the book will probably identify with one or another of these categories, and will have something to learn about issues outside her/his own category.

For the sake of this summary, I will dwell on the “Management consultants” and “War Quants” categories only, which, as we will see later, personify the main points brought forward by Hubbard when discussing qualitative and quantitative risk methods, respectively.

Management consultants. This is how Hubbard introduces this category: “Most managers and their advisors use more intuitive approaches to risk management that rely heavily on individual experience. They have also developed ‘detailed methodologies’ for these softer methods, especially after the rising influence of managers addressing information technology. Users and developers of these methods are often business managers themselves or nontechnical business analysts.”

The cynical description the author then gives of their approach to Risk Management is at once hilarious and thought-provoking (and this is written by me, a former Management consultant). Undoubtedly, the fact that Hubbard started his career at a then “Big 8” firm explains such a clinical analysis, in a section cleverly named “Management Consulting: How a Power Tie and a Good Pitch Changed Risk Management” (remove the tie, and 10 years on, this whole section may not need any updates). In particular, the four pages titled “How to Sell Snake Oil” (pages 71-74) are a must-read for any Management consultant, or for clients/prospects of Management consultants, regardless of their area of expertise (be it Risk Management or another one – his description is deliberately valid for any area).

War Quants. According to Hubbard, it is in this school of thought that lies “the best opportunity for improving risk management”. In this category he places “Engineers and scientists during World War II, [who] used simulations and set up most decisions as a particular type of mathematical game. Today, their descendants are users of ‘probabilistic risk analysis’, ‘decision analysis’, and ‘operations research’”.

The 5 issues identified by the author as the main contributors to the broken state of Risk Management, addressed in dedicated Chapters (5 to 9), are:

“1. Confusion regarding the concept of risk

2. Completely avoidable human errors in subjective judgments of risk

3. Entirely ineffectual but popular subjective scoring methods

4. Misconceptions that block the use of better, existing methods

5. Recurrent errors in even the most sophisticated methods”

In the following sections, headed by the chapter names reproduced from the book, I briefly summarise some of the elements discussed by Hubbard. This is not intended to be exhaustive and therefore does not necessarily reflect all of the arguments Hubbard makes. Should you want to know more, I invite you to refer to the original chapters of the book.

Chapter 5 – An Ivory Tower of Babel: Fixing the Confusion about Risk

Tower of Babel. Hubbard reviews several definitions of risk, clarifies the concept of uncertainty, and presents the definitions he uses himself. As he rightly points out, one may encounter various definitions of risk within a single organisation, which of course does not help when we need to communicate clearly about risk. Getting aligned on the terms is therefore a critical foundation of effective risk management.

“If you wish to converse with me, define your terms.”, Voltaire, p. 79

Ivory Tower. Another important aspect the author highlights is that “Part of the problem with risk management, at least in some organisations, has been its rapid growth – mostly in isolation – from already well-developed quantitative methods such as those found in decision analysis”. Nevertheless:

“Just as risk management must be a subset of management in the organisation, risk analysis must be a subset of decision analysis. Decisions cannot be based entirely on risk analysis alone but require an analysis of the potential benefits if managers decide to accept a risk.”, p. 92

Chapter 6 – The Limits of Expert Knowledge: Why We Don’t Know What We Think We Know about Uncertainty

If you need a risk to be assessed, you will in most cases need to rely on expert judgement. Unfortunately, as Hubbard argues, although most risk assessment methods use subjective inputs provided by human experts in some way, very few take the necessary precautions to avoid the errors these experts make.

“…when it comes to risks, managers and experts will routinely assess one risk as “very high” and another as “very low”, without doing any kind of math […] And without deliberate calculations, most people will commit a variety of errors when assessing risks.”, p. 99

Extensive research has been conducted since the 1970s (by Kahneman – ultimately awarded the Nobel Prize in Economics in 2002 – Tversky and others) on the errors humans make due to heuristics and cognitive biases; failing to use the output of this research in Risk Management will result in errors.

Such errors are caused, among other things, by miscalculations, by humans’ limited ability to recall relevant data when assessing risk and uncertainty, and by overconfidence in predicting results.

“There seems to be no way to conduct legitimate risk management practices without considering the psychology of risk and uncertainty”, p. 115

Drawing on an abundance of concrete and interesting examples of the limits of expert knowledge, Hubbard introduces the concept of calibration, his proposed solution to these issues (further developed in the third part of the book).
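To make the idea of calibration concrete, here is a minimal sketch (my own illustration, not from the book; all intervals and true values below are invented): an expert provides 90% confidence intervals, and we count how often the true value actually falls inside them.

```python
# Hypothetical calibration check: an expert states 90% confidence intervals;
# a well-calibrated expert should capture the true value roughly 90% of the time.
# All figures below are invented for illustration.

estimates = [
    # (lower bound, upper bound, true value)
    (150, 300, 280),
    (10, 50, 75),
    (1000, 5000, 2100),
    (2, 8, 9),
    (40, 90, 60),
]

hits = sum(lo <= true <= hi for lo, hi, true in estimates)
hit_rate = hits / len(estimates)
print(f"hit rate: {hit_rate:.0%}")  # 60%, well below the stated 90%: overconfident
```

Overconfidence of this kind is measurable, and that is Hubbard's starting point for treating it as a trainable skill.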

Chapter 7 – Worse Than Useless: The Most Popular Risk Assessment Method and Why It Doesn’t Work

In this chapter, Hubbard addresses what may be, for their users, the most controversial idea of his book: the “most popular” risk assessment methods (what is known as the risk matrix or heat map, usually presented in the form of a 3x3 or 4x4 matrix, or 5x5 for those presented as “the most sophisticated” [sic]), promoted by management consultants and by organisations developing international standards (he mentions standards such as NIST, CobiT or PMBoK) and used by their clients, are flawed, and may actually do more harm than good.

YES, you read correctly: Hubbard targets here the famous Likelihood x Impact scales (some of you may use Probability x Severity interchangeably) that many of you are using, or being referred to, when it comes to assessing risk scores and prioritising the then “assessed” risks on the basis of these scores.

“Simple scoring methods in no way alleviate the fundamental problem of limited information. But the added ambiguity makes you less aware of it.”, p. 124

Although the book was written in 2009, and despite the emergence of a growing number of rebels in Hubbard’s wake, there is every reason to think that these methods (which he refers to as “risk scoring” methods) still remain the most commonly used today.

The author identifies three main problems with these methods:

1. Absence of consideration for the limits of human judgement (because these methods were developed in isolation from research in this area)

2. The likelihood of risks is assessed differently by different people

3. The use of scales itself adds three types of unintended issues, which he explains in detail (range compression, presumption of regular intervals, presumption of independence)

I’ll summarise the third issue only, as I find the first two more self-explanatory (please refer to the book if not).

The use of scales in scoring methods may result in the following effects:

Range compression. Referring to the NIST scale, he mentions the example that “A risk of 1% chance of losing $100 million would then be given the same ranking as an 18% chance of losing $250 million […] in the NIST framework, both of these – with a low likelihood and high impact – would be considered a ‘medium’ risk.” According to Hubbard, the best description of this effect he has heard is the following:

“Garbage times garbage is garbage squared”, Reed Augliere, IT Security Expert, p. 131
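Hubbard’s example can be reproduced numerically. The sketch below is my own illustration, with hypothetical scale boundaries (not the actual NIST thresholds): two risks whose expected losses differ by a factor of 45 receive the same matrix score.

```python
# Illustration of range compression in a likelihood x impact matrix.
# The scale boundaries below are hypothetical, chosen for illustration only.

def matrix_rating(likelihood, impact):
    """Map probability and dollar loss onto coarse 1-3 ordinal scales, then multiply."""
    lik_score = 1 if likelihood < 0.2 else 2 if likelihood < 0.5 else 3
    imp_score = 1 if impact < 1e6 else 2 if impact < 1e7 else 3
    return lik_score * imp_score

risk_a = (0.01, 100e6)   # 1% chance of losing $100 million
risk_b = (0.18, 250e6)   # 18% chance of losing $250 million

for p, loss in (risk_a, risk_b):
    print(f"matrix score: {matrix_rating(p, loss)}, "
          f"expected loss: ${p * loss:,.0f}")
# Both risks land in the same "low likelihood / high impact" cell,
# yet their expected losses differ by a factor of 45.
```

The matrix cell is identical; the information that distinguishes a $1 million expected loss from a $45 million one has been compressed away.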

Presumption of regular intervals. Scales using values such as 0, 1, 2, 3 and so on rest on an assumption (which he proves wrong by way of an example using the scoring of impacts): that the numbers at least roughly approximate the relative magnitudes of the items scored. In reality, 2 is not twice as important as 1, and 3 is not three times as important as 1. By oversimplifying reality, the use of such scales might lead to risks not being assessed as they should be, and not being given the appropriate priority (and therefore, response).
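A quick numerical illustration of this point (the dollar figures are invented for the purpose of the example):

```python
# Hypothetical mapping between ordinal impact scores and real-world losses,
# showing why equal steps on a scale rarely mean equal steps in magnitude.
impact_dollars = {1: 50_000, 2: 400_000, 3: 30_000_000}  # invented figures

for score in (2, 3):
    ratio = impact_dollars[score] / impact_dollars[1]
    print(f"score {score} is {score}x score 1 on the scale, "
          f"but {ratio:.0f}x larger in dollar terms")
```

Any arithmetic performed on the scores (such as multiplying likelihood by impact) inherits this distortion.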

Presumption of independence. Risk scoring methods ignore correlations among different risks or factors, overlooking the fact that risks are actually interdependent. For example, two risks, if they happen together, might need to be assessed as a much higher risk than if they were assessed in isolation.
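The effect of ignored correlation can be sketched with a small Monte Carlo experiment (my own illustration, not from the book; the probabilities and the common-shock coupling are invented): two risks with identical 10% marginal probabilities occur together far more often when they share a common driver.

```python
import random

random.seed(42)

N = 100_000   # simulation trials
p = 0.10      # marginal probability of each risk (hypothetical value)

def one_event(common_shock, shared_fraction):
    """One risk event: driven by the common shock with probability shared_fraction,
    otherwise by an independent draw. The marginal probability stays equal to p."""
    if random.random() < shared_fraction:
        return common_shock
    return random.random() < p

def prob_both(shared_fraction):
    """Estimate P(both risks occur) by Monte Carlo simulation."""
    hits = 0
    for _ in range(N):
        common = random.random() < p
        if one_event(common, shared_fraction) and one_event(common, shared_fraction):
            hits += 1
    return hits / N

print("P(both), independent risks:", prob_both(0.0))  # close to p*p = 0.01
print("P(both), correlated risks :", prob_both(0.8))  # several times higher
```

A scoring method that rates each risk in isolation simply has no cell in which to record this joint behaviour.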

Chapter 8 – Black Swans, Red Herrings, and Invisible Dragons: Overcoming Conceptual Obstacles to Improved Risk Management

Although his solution will be detailed in the third and last part of the book, Hubbard unveils further what he thinks is the approach to solving the issues he has raised so far, claiming that “…the ideal approach is a version of quantitative modelling of risks.”

This does not come as a surprise at this point of the book, but he thinks (and we can only agree with him) that it is important to also face and address the arguments made by opponents and skeptics of quantitative risk analysis.

I’ll deliberately skip here a section called “A Note about Black Swans”, in which he specifically addresses the criticism made by Nassim Nicholas Taleb of the use of risk analysis (pages 151-158 if you can’t wait). Since I’ll review Taleb’s books later this year, I’ll have the opportunity to come back to Taleb’s views in more detail. I’ll also skip the distinction made between frequentists and subjectivists (pages 158-161).

Quantitative risk analysis is impossible! To people claiming there is simply no way to measure risks (generally people who make their living as “risk experts”, as he ironically notes), Hubbard deconstructs the lines of reasoning, using examples of arguments he has heard and identifying the fallacies in them. In a nutshell, his main point is that one should apply the same standards to all types of methods, and therefore apply the same reasoning to qualitative methods as to quantitative ones.

Taking the examples of past disasters claimed as evidence that quantitative risk methods don’t work, Hubbard answers:

“Unlike the quantitative methods that came later, subjective, intuitive methods always have been available and, by this standard, these disasters are as much evidence against qualitative methods as against quantitative methods.”, p. 150

Quantitative risk analysis is not for us. Hubbard also answers the “we’re special” argument, often used by people claiming their environment is so special, so unique or so complex that quantitative methods cannot be applied. The points he elaborates on are reiterated from his first book “How to Measure Anything”, extended with the last one:

- “Whatever your measurement problem is, it’s been done before

- You have more data than you think

- You need less data than you think

- Getting more data (by direct observation) is more economical than you think

- You probably need completely different data than you think.”

“When people say they don’t have enough data, have they actually determined how much they need and what can be inferred from the data they have (no matter how little it is)? Did they actually determine – or even consider – the cost of getting new data based on new observations?”, p. 162

Chapter 9 – When Even the Quants Go Wrong: Common and Fundamental Errors in Quantitative Models

The demonstration would have been too easy, or biased, without looking at some errors that are also made in quantitative risk assessment methods. Quantitative models might well be the ideal approach for Hubbard, but not when applied indiscriminately, without precautions. As the scientist he is, Hubbard writes that the same rigour should be applied when using these methods. His point, of course, is that a model is not necessarily good simply because it is built on quantitative risk methods.

He introduces quantitative Monte Carlo-based simulations, which enable the assessment of risk in probabilistic terms on the basis of various scenarios. Hubbard then highlights that, at the time of the book’s publication, “broad scholarly work in investigating how Monte Carlo simulations are used” was “virtually non-existent”.

“There is more research in how people make impulse purchases at a checkout counter than in how users of Monte Carlo models assess critical risks of major organisations.”, p. 174

These simulations, however, present a number of issues, created among other things by their availability to a wide range of users. By way of concrete examples, Hubbard details each of them and proposes ways of taking them into consideration. Issues when using quantitative risk analysis include, among others: a lack of calibration; a lack of back-testing and of testing the quality of models; a lack of empiricism; the use of irrelevant distributions; a tendency to use sophisticated methods for low-level risks and vice versa; ignoring correlations; and, paradoxically, excluding the biggest risks from the risk analysis.
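For readers who have never seen one, a Monte Carlo loss model of the general kind Hubbard discusses can be sketched in a few lines. This is a deliberately minimal illustration with invented frequency and severity parameters, not a model from the book:

```python
import random
import statistics

random.seed(1)

def annual_loss():
    """One simulated year: a random number of loss events, each with a random severity."""
    # Approximate a Poisson(2) annual event count with 12 monthly Bernoulli draws.
    events = sum(random.random() < 2 / 12 for _ in range(12))
    # Lognormal severities (median ~ $60k per event); parameters are invented.
    return sum(random.lognormvariate(11, 1.2) for _ in range(events))

losses = sorted(annual_loss() for _ in range(10_000))
print(f"median annual loss: ${statistics.median(losses):,.0f}")
print(f"95th percentile   : ${losses[int(0.95 * len(losses))]:,.0f}")
```

The output is a distribution of outcomes rather than a single score, which is what allows probabilistic statements (“a 5% chance of losing more than $X”) that a heat map cannot express. Hubbard’s criticisms in this chapter target how such models are parameterised and tested, not the technique itself.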

My Highlights

If I have not lost you yet at this stage of the article (there are so many interesting aspects to stress in Hubbard’s book that I failed to make the above summary shorter), let me try to keep you until the end. In the following section, I’ll provide a few quick reflections. These would undoubtedly deserve further development of their own, but I will keep them short for now and invite you to comment, should you want a particular point to be detailed further.

Risk Management or Decision Improvement?

What if the main reason why Risk Management has failed were simply the failure of the business to accept having its decisions challenged? As rightly pointed out by Hubbard, effective risk management finds its foundations in clearly defining, understanding and communicating about risks. In some organisations, the disconnection between risks and decisions is so big that people in risk management departments do not actually know, or are deliberately not involved in, the organisation’s strategy and associated decisions, and hence have little added value to offer to the improvement of decisions. If this is the case, how can they be taken seriously by the business? On the other hand, if an organisation’s strategy is not known by the people in the organisation (including those in risk management), isn’t it too simplistic to declare that the issue is located solely in the risk management department? How come the whole organisation is not familiar with its strategy? Collaboration is key, and since terminology is important, one may wonder whether the Risk Management discipline wouldn’t gain from renaming itself (provided that it then acts in this capacity) “Decision Improvement”. It all finally comes down to the fact that “as risk management must be a subset of management in the organisation, risk analysis must be a subset of decision analysis”.

Lazy Expertise?

Risk Managers, as “experts of the uncertain”, should be skeptical about the methods they apply and the inputs they take for granted. At clients I have worked for, when I challenged people seen as risk experts within the organisation (across the three lines) about the relevance of particular methods or decisions, or about what I assessed as an enormous amount of time spent on low added-value activities, I heard answers such as “we have always done it this way”, “we have no capacity to do better” or “sometimes one needs to make compromises”. On the other hand, the Senior Management they support should, as I already stressed in my first article, play an active role in challenging the effectiveness of its risk management community.

Positioning Monte Carlo on the (Heat) Map

If you ask a Risk Manager who has only ever used a risk matrix/heat map in her/his professional life what Monte Carlo is, my bet is that some of them will think of the casino. If they don’t, they may think of the rally. But then it would probably not be in what is, in the current discussion, its most useful definition (the motor racing definition put aside): “Rally – a demonstration; an event where people gather together to protest for or against a given cause”. As Hubbard writes from the very first pages of his book, skepticism is healthy. The fact that, in some organisations, so few people have taken the time to challenge the validity of the methods they have been using blindly for so long, and to search for an alternative, is worrying. I plead guilty on the second part (searching for an alternative), since, as I mentioned in my first article, I only discovered the existence of this 10-year-old book two years ago, even though I have never really felt at ease with the heat map and the way it is used within many organisations. But here we are. And I’m looking forward to discovering more of the method proposed by Hubbard. I will share it shortly in the next article.

Comments?

The above are my own summary and the three highlights I wanted to make on the second part of the book. If you have read the book and want to share your view, or to correct/complete the above with important aspects I may have overlooked, please do so in the Comments.

Note: my review of the third part of the book will be published in an upcoming article, Welcome Skepticism in an Uncertain World (Part 3).

Disclaimer

The views expressed here are my own and do not necessarily reflect the views of past employers, clients or organisations. These reviews (of books I have announced here) include my summary and key elements I decided to highlight. These are necessarily influenced by my experience (working for clients mostly in financial services). I invite you to read the book should you want to form your own opinion.
