How likely is likely?

How often have you heard someone forecast the probability of an event using phrases such as ‘likely’ or ‘probably’? 

A group of NATO military officers was asked to quantify the probability represented by common phrases like these when they appeared in a report. The range given for ‘likely’ was 30% to 88%, and for ‘probably’ it was 25% to 90%. Yet such phrases were being used as a basis for decisions.

Imagine how different a decision could be when ‘this will probably happen’ means a 25% probability to one reader and a 90% probability to another. But that’s not the only problem: we’re still not very good at interpreting a percentage either.

As we’ve touched on previously, humans aren’t generally good with uncertainty. If the weather forecast tells us the likelihood of rain tomorrow is 20%, and then it rains, we often think the weather forecast was wrong, as we’d really like a definite yes or no. A ‘maybe’ just isn’t satisfactory.

Of course, if we have a whole month where the forecast every day is a 20% chance of rain, we should expect rain on about six of those days. Short-range weather forecasts are now remarkably accurate, but the weather is a ‘chaotic’ system, where tiny changes in the inputs can result in big variations in outcome. Forecasters therefore run ensembles of simulations from minutely different starting conditions to estimate the probability of rain, which is a bit like running the same day over and over to see in what percentage of the runs it rains.
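
That intuition is easy to check with a small simulation. The sketch below is purely illustrative (the seed and trial count are arbitrary choices, not how an operational ensemble forecast works): it draws a 30-day month where each day has a 20% chance of rain, and repeats the month many times.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

P_RAIN = 0.2     # forecast probability of rain on any given day
DAYS = 30        # roughly one month
TRIALS = 10_000  # number of simulated months

# Count the rainy days in each simulated month.
rainy_counts = [
    sum(1 for _ in range(DAYS) if random.random() < P_RAIN)
    for _ in range(TRIALS)
]

print(f"Average rainy days per month: {sum(rainy_counts) / TRIALS:.1f}")  # ~6.0
```

The average comes out at about six rainy days, which is just 30 × 0.2. The forecast describes a frequency across many similar days; it is not a yes-or-no verdict on any single one.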

Decision-makers need to get much more comfortable with thinking in probabilities. And one particular group of people has demonstrated that it can be done well – the ‘superforecasters’.

The first group of superforecasters was identified a little over a decade ago, when IARPA, a US intelligence research agency, ran a contest to find out who could make the most accurate forecasts of future events affecting US national security. There were five teams, mainly composed of professional analysts and scientists. One team, however, was made up of amateurs: the Good Judgment Project (GJP), organised by Philip Tetlock, co-author of the book ‘Superforecasting’. The GJP team went on to win the contest.

There were some key features of their approach that made a difference, including:

  1. Taking the outside view – for each question, they started from a ‘background incidence’ (the base rate): the probability of this sort of event occurring, based on similar historical events (see the sketch after this list)
  2. Seeking out more information – this helped them avoid the temptation to rely only on what they found early on
  3. Weighing the specifics of the case – the factors that would justify moving away from their starting point of the background incidence
  4. Critically challenging their thinking – they deliberately sought out contradictory evidence and used competing hypotheses to overcome confirmation bias
  5. Using the ‘wisdom of crowds’ to refine their forecasts – note that this only works if the members of the crowd make independent forecasts before averaging them, and it may require a crowd of ‘experts’ for questions outside ‘common knowledge’.
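
Steps 1, 3 and 5 can be expressed as a simple calculation. Here is a minimal sketch, with every number hypothetical: start from the base rate, adjust it for case-specific evidence using a Bayesian update in odds form, then average with other forecasts made independently.

```python
def update_with_evidence(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: multiply the prior odds by the
    likelihood ratio of the case-specific evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 1. Outside view: historical base rate for this sort of event (hypothetical).
base_rate = 0.15

# 3. Specifics of the case: evidence judged twice as likely to be seen
#    if the event will occur than if it won't (hypothetical ratio).
my_forecast = update_with_evidence(base_rate, likelihood_ratio=2.0)

# 5. Wisdom of crowds: average with forecasts made independently by
#    other forecasters (hypothetical values).
independent_forecasts = [my_forecast, 0.22, 0.31, 0.18]
crowd_forecast = sum(independent_forecasts) / len(independent_forecasts)

print(f"After evidence: {my_forecast:.2f}; crowd average: {crowd_forecast:.2f}")
```

The design point is the order of operations: the base rate anchors the forecast before any case-specific detail is considered, and the averaging only adds value because the other forecasts were formed independently.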

An important feature of all the questions asked was that they were time-bound and unambiguous: on a specific date, a definitive result would be known. This allowed each team’s accuracy to be measured objectively.
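
That measurement used the Brier score, the yardstick Tetlock describes in ‘Superforecasting’: the squared gap between the stated probability and the eventual outcome, averaged over all questions. A minimal sketch with hypothetical forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and binary
    outcomes (1 = the event happened, 0 = it didn't). Lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records on the same four resolved questions.
outcomes = [1, 0, 0, 1]
hedger = [0.5, 0.5, 0.5, 0.5]   # always answers 'maybe'
sharper = [0.8, 0.1, 0.3, 0.7]  # commits to specific probabilities

print(brier_score(hedger, outcomes))   # 0.25
print(brier_score(sharper, outcomes))  # 0.0575 -- better (lower)
```

Always hedging at 50% scores 0.25, while a forecaster who commits to well-calibrated probabilities scores closer to zero, so the metric rewards both decisiveness and accuracy.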

Learning from the superforecasters’ approach can go a long way towards improving our own forecasts. You can read more in the book ‘Superforecasting’ by Philip Tetlock and Dan Gardner.

You can receive our next and final article in our decision-making series by subscribing here.


