If Ulysses were an information risk professional…

A very important part of risk management involves assessing the performance of your analysis, estimations and decision making against actual outcomes. Unfortunately, this activity is too often overlooked or unenforced, even in the most advanced, well-resourced and capable organisations.

Recording, tracking and reviewing risk estimates is the only way you can determine whether your approach to managing risk is working effectively and delivering value to the organisation. Without checking your performance against outcomes, you might be:

  1. Doing great
  2. Experiencing a placebo effect
  3. Causing more harm than good.

You just don’t know!


Of course, such performance monitoring doesn’t only apply to risk. It also applies to opportunity. Think strategic goals, sales targets and customer satisfaction. Measuring performance of estimates and the underlying process helps us to reduce uncertainty over time and in a meaningful way.

In her book Thinking in Bets (see your 2024 reading list), Annie Duke, a former World Series of Poker champion and now business consultant, examines decisions versus outcomes. The first chapter - Life is Poker, Not Chess - provides great insight into the relationship (or, more correctly, the disconnect) between decision and outcome.

In chess, all the pieces, available moves and strategies are visible to both players. Nothing is concealed. The better player (person or computer) will almost always win. A Grandmaster could look at a board mid-game and tell you the outcome, unless the better player makes an identifiable mistake.

Poker, on the other hand, is a game of “incomplete information” and “decision making under conditions of uncertainty over time”. This sounds a lot like game theory and accurately reflects the circumstances we face when managing information risk.

Whether the field is politics, business, sport or something else, Duke explains that “even the best decision doesn’t yield the best outcome every time. There’s always an element of luck that you can’t control, and there is always information that is hidden from view”.

When managing information risk, uncertainty reigns. A great deal of information is concealed, while chance can tip the balance and affect the outcome of business decisions and activity. This is why Duke encourages Thinking in Bets and learning from the results of our decisions on an ongoing basis.

Have you reviewed your last 10, 50 or 100 decisions?


Annie Duke looks at the business world through the lens of a successful poker player (there are plenty of unsuccessful ones). While most of us crave certainty, successful poker players are comfortable with well-planned decisions that don’t always lead to great outcomes. They also accept that mistakes and poor decisions can still result in positive outcomes. This is where luck, chance and randomness come in.


In his book The Signal and the Noise (on your 2024 reading list), Nate Silver shares insight obtained from Haralabos "Bob" Voulgaris, a professional gambler and prolific basketball (NBA) sports bettor.

Silver’s conclusions from studying Voulgaris as well as other successful gamblers and forecasters align with those of Duke. He writes “they do not think of the future in terms of no-lose bets, unimpeachable theories, and infinitely precise measurements”. Instead, they “think of the future as speckles of probability, flickering upward and downward like a stock market ticker to every new jolt of information”. Poetic and worthy of a James Lee Burke passage!


Have you ever applied rigorous, structured analysis, made an informed decision and then been surprised by a bad result? Maybe it was that product or service you painstakingly researched, reviewed and selected, only to see it crash and burn.

Have you had an unexpected positive outcome when you didn’t apply the best approach? Maybe you made a snap decision, with some trepidation, to hire a new member of the team and they turned out to be brilliant. It happens.


“Luck is a thing that comes in many forms and who can recognize her?” - Ernest Hemingway, The Old Man and the Sea


You could be making poor risk-based decisions and getting lucky, or you could be making great risk-based decisions and being unlucky. How do you know which is which? By tracking both decision and outcome over time.
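
The decoupling Duke describes amounts to judging the decision and the outcome on separate axes. Here is a tiny illustrative sketch in Python (the quadrant labels are mine, not Duke’s exact terminology):

```python
def classify(decision_was_sound: bool, outcome_was_good: bool) -> str:
    """Judge the quality of the decision and of the outcome separately."""
    if decision_was_sound and outcome_was_good:
        return "earned reward: good process, good result"
    if decision_was_sound and not outcome_was_good:
        return "bad luck: good process, poor result - keep the process"
    if not decision_was_sound and outcome_was_good:
        return "dumb luck: poor process, good result - don't repeat it"
    return "just deserts: poor process, poor result"

# The painstakingly researched product that crashed and burned:
print(classify(decision_was_sound=True, outcome_was_good=False))
# The snap hire who turned out to be brilliant:
print(classify(decision_was_sound=False, outcome_was_good=True))
```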


Dan Gardner (author of Risk, and Superforecasting with Philip Tetlock) studies past predictions of experts and concludes “they were wrong”. In his book Future Babble, he writes "History is littered with their failed predictions. Whole books can be filled with them. Many have been". It’s a great read.

Rolf Dobelli, in The Art of Thinking Clearly (the basis of my last post), highlights that “not one of the world’s estimated 100,000 political and security authorities foresaw” the Arab uprisings or Japan’s tsunami and nuclear disaster in 2011.

Nate Silver highlights the “widespread failures of prediction” that accompanied the 2008 global financial crisis. He explains that in November 2007 “most economists still thought a recession of any kind to be unlikely”.

One exception, however, was Jan Hatzius, the Chief Economist at Goldman Sachs, whom Silver interviewed. In late 2007 Hatzius wrote a memo titled Leveraged Losses: Why Mortgage Defaults Matter, in which he warned of “a scenario in which millions of homeowners could default on their mortgages and trigger a domino effect on credit and financial markets, producing trillions of dollars in losses and a potentially very severe recession”.

Ahem, ring any bells?


In Superforecasting, Tetlock draws on his famous 20-year study assessing the accuracy of expert predictions. He concludes that organisations pay for forecasts of no demonstrable quality, produced by forecasters he likens to “dart throwing chimpanzees”.

He highlights an important fact we know but rarely appreciate: experts are forecasting everywhere, but their accuracy is never checked. It appears “their conviction is enough to convince people, even though most forecasts turn out to be just guesses”. Ouch.


Are your information risk estimates a result of guessing? Or are they grounded in proven and meaningful techniques that improve in accuracy over time?


To avoid falling foul of the “dart throwing chimpanzees”, Duke encourages the use of a Ulysses Contract. Also known as a Ulysses Pact or a commitment device, a Ulysses Contract is a decision a person agrees to in the present and to which they remain bound until a future date.

Ulysses Contracts are used to hold individuals to account when they make a claim, forecast, prediction or similar - something you should strive for with your service providers, budget holders and risk practitioners.


Origins of the Ulysses Contract

The Ulysses Contract has its origins in Homer’s epic poem The Odyssey, which tells the story of Ulysses, King of Ithaca, and his gruelling journey home to his wife and son following the war between Greece and Troy.

During the journey, Ulysses faces many dangers, including the Sirens, whose silky voices lure sailors and their ships onto the rocks. Ulysses makes a pact with his crew, ordering them to block their ears with wax and tie him to the mast of the ship while they steer past the Sirens.

His men remain committed to staying on course, despite the Sirens’ overwhelming power to strip sailors of rational thought. They keep their promise, endure the threat and continue their journey.


Organisations need a commitment device for their information risk practitioners (a minimal code sketch follows this list), in order to:

  • develop and maintain meaningful information risk statements (with a high degree of confidence in their accuracy)
  • bind those statements and their corresponding decisions to an agreed future date (typically 12 months)
  • track and refine them on a continuous basis (as new risk-related information becomes available)
  • review their performance (accuracy) against each corresponding risk outcome for the defined period (e.g. 12 months)
  • make changes to improve the accuracy of future information risk statements (for subsequent years).
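
Here is a minimal sketch of what such a register might look like in practice, written in Python. The field names, the 12-month default review window and the use of a Brier score as the accuracy measure are illustrative assumptions, not a mandated standard:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class RiskStatement:
    """One information risk statement, bound to a future review date."""
    description: str
    probability: float                 # practitioner's estimate, 0.0-1.0
    made_on: date
    review_on: Optional[date] = None   # agreed future date (defaults to 12 months out)
    occurred: Optional[bool] = None    # recorded at review time

    def __post_init__(self):
        if self.review_on is None:
            self.review_on = self.made_on + timedelta(days=365)

def brier_score(register):
    """Mean squared error between estimates and outcomes; lower is better."""
    scored = [s for s in register if s.occurred is not None]
    return sum((s.probability - float(s.occurred)) ** 2 for s in scored) / len(scored)

# 1. Record estimates now, each bound to a review date...
register = [
    RiskStatement("Ransomware outage > 24h on a core platform", 0.15, date(2024, 1, 10)),
    RiskStatement("Material data breach via a third-party supplier", 0.30, date(2024, 1, 10)),
]

# 2. ...then, at the agreed review date, record what actually happened
register[0].occurred = False
register[1].occurred = True

# 3. Review accuracy: 0.0 is perfect; always guessing 50/50 scores 0.25
print(f"Brier score: {brier_score(register):.3f}")
```

Recording estimates and outcomes in one place like this makes the review step a calculation rather than a debate.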


As part of a broader, enterprise-wide information risk management capability, this approach will help hold stakeholders accountable and drive improvement. However, improvement and greater accuracy require information risk practitioners to have knowledge, experience and expertise in vital risk analysis techniques (calibration and range estimation are sketched briefly after this list), including:

  • basic statistical methods
  • calibration
  • decomposition
  • range estimation
  • confidence intervals.
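
As a small taste of two of these, calibration and range estimation: a well-calibrated estimator’s 90% ranges should contain the actual value roughly 90% of the time, which is trivially checkable once estimates and outcomes are recorded. A minimal sketch, with all numbers invented purely for illustration:

```python
# Each entry: (low, high) 90% range estimate, followed by the actual value.
estimates = [
    (10, 50, 42),     # e.g. phishing incidents per quarter
    (2, 8, 11),       # e.g. records exposed per incident (thousands)
    (100, 400, 250),  # e.g. incident response cost (thousands)
    (1, 5, 3),        # e.g. lost laptops per month
    (20, 60, 35),     # e.g. days to patch a critical vulnerability
]

hits = sum(1 for low, high, actual in estimates if low <= actual <= high)
print(f"Hit rate: {hits / len(estimates):.0%} (target ~90% for 90% ranges)")
# Persistently below 90% means overconfidence (ranges too narrow), by far
# the most common finding; persistently above 90% means underconfidence.
```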

And that’s a topic for a different post.


Brief takeaways

  1. Think like a (successful) poker player - Decouple decision and outcome.
  2. Implement a Ulysses Contract (commitment device) for greater accuracy of information risk statements.
  3. Take the long view – Accept bad luck and embrace good luck.
  4. Beware of dart throwing chimpanzees and Sirens, no matter how tempting or convincing their advice.
  5. Record and track estimates, rationale and decisions relating to information risk - Review, compare and contrast on a regular basis (e.g. every 12 months).


Related articles by Mark

A gorilla, an elephant and a horse walk into a bar…

Preparing your Risk Management Reading List for 2024


Article image (depicting thousands of crystal balls strewn across the information risk landscape) generated using https://www.img2go.com


#ulyssescontract #commitmentdevice #decisionvsoutcome #informationriskmanagement #cybersecurity

Really good article, thanks Mark

Shreya Tiwari

Program Lead | Cyber Risk Management

1y

Looking forward to being a more informed risk professional after going through those really insightful recommendations, thanks Mark!

I recently watched BBC's The Shuttle That Fell to Earth - a documentary about the final voyage of the shuttle Columbia. NASA is the one organisation that I would have expected to be good at doing thorough risk assessments and testing, and at reviewing decisions and expectations against actual outcomes. I hope and pray they have learned the lessons from this, as they relaunch their space programme. I am thankful and grateful that my failures to review and measure performance against my analyses haven't cost lives. But it is a salutary lesson that reminds me we cannot rest on our laurels and need to up our game all around.

I’m not sure that the failure to measure/assess the performance of our risk analysis, estimations and decision making against outcomes is because doing so is overlooked or unenforced – I think we are so busy making these assessments on a continual basis that we don’t have the time to do it. Generally speaking, I’m not sure that even Risk teams do this (I’ve not come across any). Which decisions to review? Against what criteria? Who should review – us or someone more objective? I agree it would be useful from an empirical perspective, but capturing data on ‘risk events’ – issues! The risk has materialised! – and comparing it to the assessed scale of the outcome/impact according to a firm’s own risk rating process doesn’t seem to be a standard thing, let alone storing data in such a way that it can be analysed to better understand the amount and scale of historical issues and their impacts, and the effect that has on the likelihood of recurrence to inform future risk ratings. I’m probably too negative, and should seek to find ways to incorporate such reviews into the pattern of work, e.g. identifying at the outset that this retrospective is needed. Curious to know if anyone does do this, and if so, how do they find/make the time?
