Lessons from Super-forecasters

I recently found myself caught up in a podcast about predictions and how to become better at making them. Part of my fascination was simply that I enjoyed the content of the show. Part of it was that, to my surprise, I heard some great tips and techniques that spoke directly to my work in the world of human performance technology (HPT).

The show was "How to Be Less Terrible at Predicting the Future," an episode of the Freakonomics Radio podcast hosted by Stephen Dubner. Check it out; I highly recommend it.

If you work as a training consultant, an instructional designer, an evaluation or measurement expert, or any other supporter of organizational development, I think the probability is greater than two-in-three (ahem) that you'll find value in the podcast.

Here are my top-ten takeaways from the show:

1 – We should hold experts and pundits more accountable for the accuracy of their predictions.

  • "[Pundits] are notoriously bad at forecasting, in part because they aren’t punished for bad predictions. Also, they tend to be deeply unscientific." (Dubner)
  • "I think in your guys’ profession [sports reporting, punditry], you can easily take back what you say... there’s no danger when somebody says it. Y'know, if there was a pay cut or if there was an incentive, if picking teams each and every week, you may get a raise, I guarantee people would be watching what they say then." —Cam Newton (football player) on the lack of accountability for sports reporters on their predictions
  • "When you don’t have skin in the game, and you aren’t held accountable for your predictions, you can say pretty much whatever you want." (Dubner)
"When you don’t have skin in the game, and you aren’t held accountable for your predictions, you can say pretty much whatever you want." –Stephen Dubner

2 – For far too long, very smart people have been content to have little accountability for accuracy in forecasting.

  • "Experts think they know more than they do; they're systematically overconfident." –Philip Tetlock (political science writer, professor at Univ. of Pennsylvania, author of Superforecasting: The Art and Science of Prediction)
  • "A lot of the experts that we encounter, in the media and elsewhere, aren’t very good at making forecasts. Not much better, in fact, than a monkey with a dart board." (Dubner)

3 – One of the distinguishing characteristics of bad, overconfident forecasters is dogmatism.

  • A bad forecaster tends to be unwilling to change his or her mind in a reasonably timely way in response to new evidence. "They have a tendency, when asked to explain their predictions, to generate only reasons that favor their preferred prediction and not to generate reasons opposed to it." (Tetlock)

4 – Forecasting is everywhere. We do it, and rely on it, far more than we realize. And yet we rarely measure the accuracy of our forecasts.

  • "People often don’t recognize how pervasive forecasting is in their lives — that they’re doing forecasting every time they make a decision about whether to take a job or whom to marry or whether to take a mortgage or move to another city. We make those decisions based on implicit or explicit expectations about how the future will unfold. We spend a lot of money on these forecasts. We base important decisions on these forecasts. And we very rarely think about measuring the accuracy of the forecasts." (Tetlock)
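
One standard tool for doing exactly that, and the one used in Tetlock's forecasting tournaments, is the Brier score: the mean squared difference between the probabilities you assigned and what actually happened. The minimal Python sketch below shows the idea; both track records are invented purely for illustration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilistic forecasts and binary outcomes.

    0.0 is a perfect record, 0.25 is what always saying "50/50" earns,
    and 1.0 is confidently wrong every time.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented track records for illustration: the predicted probability that each
# event would happen, followed by what actually happened (1 = it happened).
pundit_forecasts  = [0.90, 0.80, 0.95, 0.70]   # confident and frequently wrong
pundit_outcomes   = [0,    1,    0,    1]

careful_forecasts = [0.60, 0.65, 0.40, 0.70]   # hedged and better calibrated
careful_outcomes  = [0,    1,    0,    1]

print(f"Pundit Brier score:  {brier_score(pundit_forecasts, pundit_outcomes):.3f}")   # ~0.461
print(f"Careful Brier score: {brier_score(careful_forecasts, careful_outcomes):.3f}")  # ~0.183
```

The lower score wins: the careful forecaster's hedged probabilities beat the pundit's confident ones once the outcomes are known, which is precisely the kind of accountability the podcast argues we rarely impose.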

5 – One of the great historical examples of bad forecasting with dire consequences was the Bay of Pigs Invasion (1961).

  • "The Kennedy administration asked the Joint Chiefs of Staff to do an independent review of the plan and offer an assessment of how likely this plan was to succeed. And I believe the vague-verbiage phrase that the Joint Chiefs analysts used was they thought there was a 'fair chance of success.' It was later discovered that by 'fair chance of success' they meant about one-in-three. But the Kennedy administration did not interpret 'fair chance' as being one-in-three. They thought it was considerably higher. So, it’s an interesting question of whether they would have been willing to support that invasion if they thought the probability were as low as one-in-three." (Tetlock)
  • "We are predisposed toward interpreting data in a way that confirms our bias or our priors or the decision we want to make. So, if I am inclined toward action and I see the words 'fair chance of success,' even if attached to that is the probability of 33 percent, I might still interpret it as a move to go forward." (Dubner)
Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person. –Philip Tetlock, Superforecasting: The Art and Science of Prediction

6 – Vague-verbiage forecasts can be mischievous.

  • "In a vague-verbiage forecast ('fair chance of success'), it is very easy to hear what we want to hear. There’s less room for distortion if you say 'one-in-three' or 'two-in-three' chance. There's a big difference between a one-in-three chance of success and a two-in-three chance of success." (Tetlock)

7 – Super-forecasters tend to have the following characteristics:

  • Do not believe in fate, but do believe in chance
  • Humble about their judgements
  • Actively open-minded
  • Good with numbers, but don't necessarily have deep mathematical training
  • Take the outside view rather than the inside view (a sketch of what that means follows this list)
  • Curious
  • Strong work ethic
  • Above-average intelligence
  • Understand probability
Super-forecasters tend to be open-minded, curious, and humble about their judgements. They also understand probability and tend to believe in chance but not fate.
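
On the outside view mentioned above: it is usually described as anchoring on a base rate drawn from a reference class of similar cases, and only then adjusting for the specifics of the case in front of you. A toy sketch, with made-up numbers, of how that anchoring might look:

```python
# Outside view: in a reference class of comparable projects, what fraction
# shipped on time? (assumed base rate, purely for illustration)
base_rate = 0.30

# Inside view: case-specific adjustments, kept small so they nudge the anchor
# rather than replace it.
adjustments = {
    "experienced team":     +0.10,
    "scope still changing": -0.05,
}

estimate = base_rate + sum(adjustments.values())
estimate = min(max(estimate, 0.0), 1.0)   # keep it a valid probability

print(f"Outside-view anchor:             {base_rate:.0%}")
print(f"After case-specific adjustments: {estimate:.0%}")
```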

8 – Practical Recommendations for Aspiring Super-forecasters:

  • Focus on questions where your hard work is likely to pay off.
  • Break seemingly intractable problems into tractable sub-problems.
  • Strike a balance between under- and over-reacting to the evidence (a sketch of one way to do this follows the list).
  • Look for the errors behind your mistakes, but beware of rearview-mirror hindsight biases.
  • Bring out the best in others and let others bring out the best in you.
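
On striking that balance: the problem is really how much to move a belief when new evidence arrives. One conventional way to make that concrete is Bayes' rule in odds form, where a likelihood ratio near 1 barely moves the forecast and an extreme one moves it a lot. The events and numbers in this sketch are invented for illustration.

```python
def update(prior, likelihood_ratio):
    """Revise a probability in light of new evidence, using Bayes' rule in odds form.

    likelihood_ratio = P(seeing this evidence | event will happen)
                     / P(seeing this evidence | event will not happen).
    Ratios near 1.0 barely move the forecast; extreme ratios move it a lot.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.30  # starting forecast (assumed)
evidence = [
    ("supplier confirms the delivery date", 2.0),
    ("key engineer resigns",                0.5),
    ("pilot test passes",                   3.0),
]
for description, lr in evidence:
    p = update(p, lr)
    print(f"{description:<38} -> forecast is now {p:.0%}")
```

In these terms, under-reacting is treating every likelihood ratio as if it were close to 1, and over-reacting is letting a single piece of evidence swamp the prior.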

9 – Super-forecasting is a set of skills that can be acquired and improved upon with practice.

  • "Just as you can’t learn to ride a bicycle by reading a physics textbook, you can’t become a super-forecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding or failing.” (Dubner)
  • "Forecasters believe that probability estimation of messy real-world events is a skill that can be cultivated and is worth cultivating. And hence they dedicate real effort to it. But if you shrug your shoulders and say, 'Look, there’s no way we can make predictions about unique historical events,' you’re never going to try."
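
The feedback loop can be as low-tech as a forecast journal: write the probability down before the event, then score the entry once the question resolves, so there is no ambiguity about how you did. A minimal sketch; the questions and numbers are placeholders.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Forecast:
    question: str
    probability: float              # recorded before the outcome is known
    outcome: Optional[bool] = None  # filled in once the question resolves

journal = [
    Forecast("Project X ships by the end of Q3", 0.70),
    Forecast("Competitor launches first",        0.40),
]

# Later, when reality arrives, resolve each entry and score it.
journal[0].outcome = False
journal[1].outcome = True

for f in journal:
    if f.outcome is None:
        continue  # still unresolved; no feedback yet
    score = (f.probability - f.outcome) ** 2   # per-question Brier-style penalty
    print(f"{f.question:<34} forecast {f.probability:.0%}, "
          f"happened: {f.outcome}, score {score:.2f}")
```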

10 – If, as a culture, we placed greater value on the accuracy of our predictions, we would improve the quality of public debate.

  • "If partisans in debates felt that they were participating in [events] in which their accuracy could be compared against that of their competitors, we would quite quickly observe the depolarization of many polarized political debates. People would become more circumspect, more thoughtful and I think that would on balance be a better thing for our society and for the world." (Tetlock)

Here's another link to the original podcast. Enjoy.

If you're interested in improving the accuracy of your forecasts related to your organizational learning initiatives, contact Peregrine Performance Group today.

Follow Russ on Twitter, and Peregrine Performance Group on Twitter, LinkedIn, and Facebook.


Alan Woontner

Learning and Performance Consultant

8 years ago

A lot to take in with this article. Forecasting obviously requires many of the different types of reflection noted in the article, some of which I am good at, others less so. So, it strikes me that this is really a discipline that needs to be both studied from multiple angles and applied over time. Reflection on our past experience is certainly a component of how to forecast future events. So often, we don't get the opportunity to do post-mortems and lessons learned, and when we do, it's not evidence-based; it's just a gripe session with some insights/opinions about what went wrong and speculation as to why it went wrong. I always joined in when I was in an analyst role, as that gave me the opportunity to look into the different factors impacting programs and provide suggestions based on what I learned.
