How likely is likely?
Stephen Aldridge
Helping firms manage uncertainty and make better decisions using financial modelling. Consultant | Accountant | NED
How often have you heard someone forecast the probability of an event using phrases such as ‘likely’ or ‘probably’?
A group of NATO military officers was asked to quantify the probabilities they thought common phrases like these represented when used in a report. The range for ‘likely’ was 30% to 88%, and for ‘probably’ it was 25% to 90%. Yet phrases like these were being used as the basis for decisions.
Imagine how different the decisions would be if ‘will probably happen’ can mean anything from a 25% probability to a 90% probability. But that’s not the only problem: we’re still not very good at interpreting a percentage either.
As we’ve touched on previously, humans aren’t generally good with uncertainty. If the weather forecast tells us the likelihood of rain tomorrow is 20%, and then it rains, we often think the weather forecast was wrong, as we’d really like a definite yes or no. A ‘maybe’ just isn’t satisfactory.
Of course, if we have a whole month where the forecast every day is a 20% chance of rain, it will rain on about six of those days. Short-range weather forecasts are now remarkably accurate, but the weather is a ‘chaotic’ system, where tiny changes in the inputs can result in big variations in the outcome. Forecasters therefore run simulations from many slightly different starting conditions and count how often each outcome occurs – a bit like running the same day over and over to see what percentage of those days have rain.
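To make that arithmetic concrete, here is a minimal Python sketch of the same idea – a toy Monte Carlo simulation for illustration only, not how real ensemble weather models work. Each simulated day has a 20% chance of rain, and repeating the month many times shows the long-run average of roughly six rainy days.

```python
import random

# Toy version of 'running the same day over and over':
# simulate a 30-day month where each day has a 20% chance of rain,
# then repeat the whole month many times to see the long-run average.
TRIALS = 10_000
DAYS = 30
P_RAIN = 0.20

rainy_day_counts = []
for _ in range(TRIALS):
    rainy_days = sum(1 for _ in range(DAYS) if random.random() < P_RAIN)
    rainy_day_counts.append(rainy_days)

average = sum(rainy_day_counts) / TRIALS
print(f"Average rainy days per month: {average:.1f}")  # ~6.0
```

On any single month the count will wander around that average – which is exactly why a 20% forecast followed by rain isn’t ‘wrong’.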
Decision-makers need to get much more comfortable with thinking in probabilities. And one particular group of people has demonstrated that they are good at forecasting likelihoods – the ‘superforecasters’.
The first group of superforecasters was identified a little over a decade ago, when IARPA, a US intelligence research agency, ran a contest to find out who could make the most accurate forecasts of future events affecting US national security. There were five teams, mostly made up of professional analysts and scientists. One team, however, was made up of amateurs: the Good Judgment Project (organised by Philip Tetlock, author of the book ‘Superforecasting’). The GJP team went on to win the contest.
There were some key features in their approach which made a difference, including:
- Taking the outside view – for each question, they would first look for a ‘background incidence’, i.e. the probability of this sort of event occurring, based on similar historical events
- Seeking out more information – this helped them avoid the temptation to rely only on what they found early on
- Weighing the specifics of the case – the factors that would justify moving away from their starting point of the background incidence
- Critically challenging their thinking – they deliberately sought out contradictory evidence and used competing hypotheses to overcome confirmation bias
- Using the ‘wisdom of crowds’ – this refined their forecasts (note that the wisdom of crowds only works if the crowd make independent forecasts before averaging them, and may require a crowd of ‘experts’ for questions outside ‘common knowledge’). A simple sketch of how these steps can fit together follows this list.
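Here is a hypothetical Python sketch of that workflow – the numbers and adjustments are invented for illustration, and this is not the Good Judgment Project’s actual method. Each forecaster starts from an assumed background incidence, adjusts independently for the specifics of the case, and the final forecast is the average of those independent estimates.

```python
# Hypothetical illustration: base rate + independent adjustments + averaging.
base_rate = 0.15  # assumed 'background incidence' of this type of event

# Each forecaster's independent adjustment after weighing case-specific
# evidence (positive = more likely than the base rate, negative = less).
independent_adjustments = [0.10, -0.02, 0.05, 0.12, 0.00]

individual_forecasts = [
    min(max(base_rate + adj, 0.0), 1.0)  # keep each forecast within [0, 1]
    for adj in independent_adjustments
]

# 'Wisdom of crowds': average the independent forecasts.
crowd_forecast = sum(individual_forecasts) / len(individual_forecasts)
print(f"Crowd forecast: {crowd_forecast:.0%}")  # 20% with these example numbers
```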
An important factor in all the questions asked was that they were time-bound and unambiguous. On a specific date, a definitive result would be known. This allowed the teams to be objectively measured for accuracy.
Learning from the superforecasters’ approach can go a long way to improving our own forecasts. You can read more about this in the book ‘Superforecasting’ by Philip Tetlock and Dan Gardner.
You can receive our next and final article in our decision-making series by subscribing here.