The risk(s) of not understanding risk
Image: Cristofer Maximilian via Unsplash

A follow-up piece, in which I reply to some feedback, is available here.


For one reason or another - a more challenging consultancy climate, rising uncertainty, the perpetual chase for the next big word to use, or something else altogether - an increasing number of big-name executives and self-proclaimed thought leaders have recently decided to take it upon themselves to make grandiose statements about risk. "In this world of accelerating change", they begin in a statement that is at best debatable and at worst factually incorrect, "the biggest risk is not taking any risks at all". Readers who have long been told that they need to be brave, bold, and prepared to fail in order to succeed nod along in concurrence. "100%", "agreed", and "what he/she/they said" are quickly typed and added to the comment fields.

"The world is changing", they add. "Why are you still the same?"

It is largely a straw man, of course. But beyond that, few (if any) of them appear to have the first clue what risk actually is, which makes the argument difficult to take seriously for those who do. No matter how headline-grabbing the online conversation may be, it thus inevitably leads to little real-world action, and almost none of merit.


So what is risk, then?

Unlike many other terms carelessly thrown about on social media, risk has as close to a universal definition as one may hope to obtain, available via ISO 31000. In plain writing, it says that risk is the "effect of uncertainty on objectives". This means that there are three constituent parts: 1) an effect, 2) uncertainty, and 3) objectives.

Both effect and objectives are further explained in notes beneath the paragraph; the former is an impact in the form of a deviation from the expected (whether positive, negative, or both); the latter depends on the context (aspect, categories, and levels). Already we find a significant flaw in the claims made by the talking heads: "risk" can have a positive meaning. If doing nothing is risky, but risk can be a positive, logically there must be some merit to inaction.

Once we start breaking down uncertainty, however, things unravel completely.

As I have explained before, uncertainty is, unlike many other aspects of business, pretty much a constant. There may be more or less of it, naturally, but in any situation where an active choice is required - where the answer is non-obvious - uncertainty will play a part. It will typically take one of three different forms: it may be aleatoric, epistemic, or systemic.

Aleatoric uncertainty arises due to the unpredictable nature of, well, nature; a randomness that is inherent in a dataset and therefore cannot be reduced. The term is derived from the Latin alea (dice), referring to a game of chance, which is a good way to think about it. Although there is uncertainty about the outcome, it is fundamentally probabilistic; the variability may be described by the probability of each possible value or, for continuous variables, by the probability density function. That is to say, even though we cannot reduce the uncertainty of throwing a die, we can easily quantify, model, and understand it.
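The die example can be sketched in a few lines (a toy simulation; the throw count and seed are arbitrary choices of mine, not from the article): each individual throw stays irreducibly random, yet the distribution of outcomes is fully knowable.

```python
import random

# Aleatoric uncertainty, illustrated with a fair die: the outcome of any
# single throw cannot be predicted, but the uncertainty itself is easy to
# quantify and model.
random.seed(42)

throws = [random.randint(1, 6) for _ in range(60_000)]
for face in range(1, 7):
    freq = throws.count(face) / len(throws)
    # Each empirical frequency converges towards the known probability
    # of 1/6, even though no single throw is predictable.
    print(face, round(freq, 3))
```

This is what makes aleatoric uncertainty tractable for traditional risk management: the randomness never goes away, but it can be priced.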

Epistemic uncertainty, meanwhile, comes out of incomplete data or knowledge; it is uncertainty in the model itself. In such a case, very small variations in the starting conditions can lead to very large variations in the final result. The chances of the model accurately predicting the future state of the system fall towards zero the higher the uncertainty is.
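The point about small variations in starting conditions can be made concrete (the logistic map below is my illustrative choice, a textbook example of this behavior, not something taken from the article): two runs of the same deterministic model, differing by one part in a million at the start, end up bearing no resemblance to one another.

```python
# Two runs of the same deterministic system whose starting conditions
# differ by one part in a million; the gap between them is amplified at
# every step until the model's predictive power has collapsed.

def logistic_step(x: float, r: float = 4.0) -> float:
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1 - x)

a, b = 0.400000, 0.400001  # nearly identical starting conditions
max_gap = 0.0
for _ in range(30):
    a, b = logistic_step(a), logistic_step(b)
    max_gap = max(max_gap, abs(a - b))

print(f"largest divergence over 30 steps: {max_gap:.3f}")
```

If your knowledge of the starting conditions is incomplete, no amount of computing power on the model alone closes the gap; only better data does.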

The easiest way of differentiating between the two that I have found is to consider how you would describe the parameter or event that you are investigating. If it sometimes has one value and sometimes another - there is randomness - but you know which ones it can be, you are dealing with aleatoric uncertainty. If, on the other hand, the parameter keeps changing and you do not know which value it may hold, it has epistemic uncertainty.

The problem, which I suspect many of you will already have spotted, lies in the implications for risk management and strategic management that follow (and that we see on a daily basis): the prevailing way to understand uncertainty and related concepts such as risk is computational. If we are facing aleatoric uncertainty, we know what can happen, just not what will happen. If we do X, there is a Y probability of Z happening, so we may hedge our bets. If there is epistemic uncertainty, we may perhaps not know what can happen, but we may reduce our ignorance by collecting more data. Either way, with a computer, a few models, and a sufficient data set, we can obtain a pretty clear idea of what the future holds.

But in reality, we have to face a third kind of uncertainty, namely of the aforementioned systemic variety. It arises out of the knowledge that you do not know you lack, and the things that you cannot see - what is sometimes referred to as the unknown unknowns. In everyday corporate life, this kind of uncertainty habitually takes the form of us believing that we are in one kind of system (usually one in which there is order and traditional risk management methods apply as per the above), when in fact we are in another (usually complexity, where they do not). Perhaps you have heard of so-called Black Swan events, the fat-tail occurrences that surprise everyone even though they are statistically a lot more common than most believe? Well, they are examples of systemic uncertainty.

So, when companies are "not taking enough risks", what exactly are we talking about? What effects - positive, negative, or both - are implied? What kind of uncertainty are they facing? What kind of objectives are relevant? And, most crucially, is the information available to outside observers, or are they merely playing Jeopardy by providing the answers before the questions?


What they actually mean - and the risk inherent in the meaning

The reality, of course, is that practically none of the relevant parties has a clue about any of the above, nor would I imagine that they care. Their audience is not made up of serious executives looking to make serious strategic decisions, but of event organizers who can hardly be expected to know more about a topic than the people they bring in to discuss it. Consequently, what they actually mean is not to take risks but to take chances. Or, to put it more accurately, to make bets.

Making the big bet has long been a popular refrain among consultants. "No company makes it big without betting big", they say. "The corporate game favors the bold."

What they neglect to tell you is that while they face an ergodic situation, you (the client) face a non-ergodic one. And the implications are profound.

Without going into detail about ergodicity, the easiest explanation (only possible with a bit of rope, admittedly, but hopefully permissible given the present context) is to think of it as manifested repeatability. In the real world, there often exist points of irreversibility that eliminate repeat efforts; potential failures and negative consequences from which one cannot recover. These fundamentally change the game by removing the player and absorbing future gains. To reuse an example from Nassim Nicholas Taleb's Skin in the Game (2018), a person playing Russian roulette has a five-in-six chance of winning. Does this mean that over 60 games, they would expect to win 50? No, because they would be unable to play any subsequent rounds after the first one that they lost. Global averages do not apply to local failures if those failures can be catastrophic. A hundred people each falling one meter is not the same thing as one person falling a hundred meters; the consequences are entirely different.
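Taleb's example can be sketched as a toy simulation (the player counts and seed are arbitrary choices for illustration): the ensemble average across many players, one round each, looks nothing like the experience of a single player who must survive each round to play the next.

```python
import random

# Russian roulette, two views: the ensemble (many players, one round each)
# versus time (one player attempting many consecutive rounds).
random.seed(7)

def survives_round() -> bool:
    """One pull of the trigger: a five-in-six chance of surviving."""
    return random.randint(1, 6) != 1

# Ensemble view: 6,000 players each play a single round. Roughly five in
# six survive, matching the per-round odds.
survivors = sum(survives_round() for _ in range(6_000))
print("ensemble survival rate:", survivors / 6_000)

# Time view: one player attempts 60 consecutive rounds. Play stops at the
# first loss - the absorbing barrier - so the five-in-six average cannot
# simply be multiplied out to 50 expected wins.
wins = 0
for _ in range(60):
    if not survives_round():
        break  # the player is removed; no further rounds are possible
    wins += 1
print("one player's wins before ruin:", wins)
```

The absorbing barrier is the whole point: once it is hit, every subsequent round's expected gain evaporates, which is why the group's average tells you nothing about your own trajectory.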

An observer betting on a game of Russian roulette (but not actually partaking in it) may continue to bet even if an individual player loses, so long as they hedge their bets - the game is not much different in principle to the throw of a (lethal) die. Similarly, an external advisor may thrive if nine out of ten clients fail, so long as the tenth client succeeds well enough to make a case study that brings in new work, much as a VC may thrive if nine out of ten portfolio startups go bust but the tenth takes off. "Fail fast" is thus often an argument that benefits them rather than you; it speeds up the process.

Following the advice to make big bets, inevitably offered by those without skin in the game from a significantly more resilient position, is thus immensely risky. You face a potentially catastrophic effect on future performance, primarily due to systemic uncertainty. They do not. Always remember that there is an absolutely massive difference between being a gamble and being a gambler.


What to do

So what should you do? Well, the obvious answer is to employ an adaptive strategy. But since that is what I sell, I am partial (even if correct). Instead, I will therefore recommend that you simply ask yourself the following questions:

  1. If you are being told to take risks, what does the person providing the advice actually mean in pragmatic terms? If you do not know, ask them to clarify - including what they mean by risk and why. What theoretical basis do they have for their argument?
  2. If they are using case studies to prove their point, ponder whether you are willing to take on the potential downsides as well as the potential upsides of their advice. Are you aware of the problems encountered along the way, or do you just hope to replicate a result? Can you afford a failure? Can they? What happened to the companies that tried the same and were unsuccessful? If they have none, why? (Samples should be randomized and not cherry picked.)
  3. If you consider going ahead, what needs to be true for their advice to work? Under what conditions might it work? What funds would you have to redistribute? Would there be investment lag effects and/or negative compounding? How would you know if you were successful and how would you scale it? What indicators would you use?

At the end of the proverbial day, and this long-winded post, all companies have to adapt to changing circumstances in order to avoid strategic drift. And yes, movement is inherently risky. But there are many ways in which one may mitigate those risks - not to play it entirely safe (that is impossible), but to limit exposure to potential points of catastrophic failure.

The winner of the championship is never the one who puts up the biggest score during the early rounds. It is always the one who survives to the end.

Brian Loucy

IT Transformation Expert | Process Improvement, Service Resiliency, Cost Optimization

6 months

Refreshingly written, avoiding the often squishiness of "consultant speak." And thanks for defining/describing the deeper concepts, great stuff to consider for both consultants and executives alike.

Karim Jan, PMP

Commercial leader delivering growth in the energy sector

6 months

Thanks, very good article. I think that not taking risks is risky but those risks that are taken need to be manageable. I've worked in businesses that get paid to absorb significant execution risks. The only reason they sell is that they'll absorb risky scope that their clients cannot.

Dave Snowden

The Cynefin co

6 months

I think ISO 31000 is part of the issue, it starts with objectives …

Ian Snape

CEO Frontline Mind

6 months

Enjoyed this piece JP Castlin. I'd like more discussion on fat tails, how to create systems that don't overly reward defensive political decision making (don't we all want that?), and the benefit of coupling risk & opportunity in portfolio design...
