“Superforecasting”: lessons for project estimators?
A “Superforecaster” is a person whose predictions in a particular field, when aggregated and scored, prove to be more accurate than those of the general public and of acknowledged authorities. Research has shown that such people follow a repeatable methodology summarised by the acronym CHAMP (Comparisons, Historical trends, Average opinions, Mathematical models, and Predictable biases). My intention, in this post, is to explore in outline how ‘Superforecasting’ techniques can be applied by teams estimating task duration and delivery capacity.
The concept of a Superforecaster was popularised in the book “Superforecasting” by Tetlock and Gardner (Tetlock & Gardner, 2015), in which the authors report some of the findings of the Good Judgement Project (Good Judgement Open, 2020), started in 2011 by staff at the University of Pennsylvania as a competitor in the Aggregative Contingent Estimation programme of IARPA (Office of the Director of National Intelligence, 2020). In an appendix to the book, the authors list “Ten Commandments for Aspiring Superforecasters”, which distil the lessons in the book.
1. Triage: “Focus on those questions where your hard work is likely to pay off”; there is little value in estimating a task that is too large or one that is too small: the forecast for the large task will be subject to too much variation to be useful, and a highly accurate forecast for a small task will have little impact.
2. Break seemingly intractable problems into tractable sub-problems: “Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions”; estimation is a process of analysis, both of the nature of the task and of the team’s understanding of it and skills in its execution (see the first sketch after this list).
3. Strike the right balance between inside and outside views: “Superforecasters know that there is nothing new under the sun…conduct creative searches for comparison classes even for seemingly unique events”; the ‘inside’ view of a task focuses on its uniqueness and how it could unfold, while the ‘outside’ view focuses on how it resembles other, historical tasks and how they unfolded (see the second sketch after this list).
4. Strike the right balance between under- and overreacting to evidence: “…Superforecasters…move their probability estimates fast in response to diagnostic signals”; an estimate should not be ‘set in stone’: identify when the circumstances surrounding the original estimate change, and update it accordingly (see the third sketch after this list).
5. Look for clashing causal forces at work in each problem: “For every good policy argument, there is typically a counterargument that is at least worth acknowledging”; debate and constructive disagreement are a vital part of any estimating process.
6. Strive to distinguish as many degrees of doubt as the problem permits but no more: “Few things are certain or impossible. And ‘maybe’ isn’t all that informative. So your uncertainty dial needs more than three settings”; every estimate needs a confidence assessment attached to it: is the estimate made with 90% confidence, or with 50%? The difference is important.
7. Strike the right balance between under- and overconfidence, between prudence and decisiveness: “Superforecasters understand the risks both of rushing to judgement and of dawdling too long near ‘maybe’”; the earlier an estimate is made, the more useful it is in the planning process, but the more likely it is to be wrong; the only certain value comes after the task is completed, at which point it is useless.
8. Look for the errors behind your mistakes but beware of rear-view-mirror hindsight biases: “Conduct unflinching post-mortems…(and) don’t forget to do post-mortems on your successes too”; a team’s estimating performance should be a central focus of retrospectives and continuous-improvement programmes.
9. Bring out the best in others and let others bring out the best in you: “Master the fine art of team management, especially perspective taking…, precision questioning…, and constructive confrontation”; estimation should always be a group effort.
10. Master the error-balancing bicycle: “Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding – ‘I’m rolling along smoothly’ – or whether you are failing – ‘crash!’”; the team must measure its success in estimating and respond to the lessons that the metrics provide (see the final sketch after this list).
11. Do not treat commandments as commandments: “Guidelines are the best we can do in a world where nothing is certain or exactly repeatable”; there is no systematic and repeatable approach to estimating, despite the many published methodologies and marketed tools.
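To make commandment 2 concrete: one common way to combine decomposed estimates is the three-point (PERT) technique, where each sub-task gets optimistic, likely and pessimistic figures. The sketch below, in Python, shows how the sub-estimates roll up; the sub-task names and numbers are purely illustrative.

```python
import math

# Hypothetical sub-tasks with (optimistic, likely, pessimistic) estimates in
# days. The PERT formulas are standard; the figures are invented.
subtasks = {
    "design": (1, 2, 4),
    "build":  (3, 5, 10),
    "test":   (1, 3, 8),
}

total_mean = 0.0
total_var = 0.0
for name, (opt, likely, pess) in subtasks.items():
    mean = (opt + 4 * likely + pess) / 6   # PERT expected duration
    var = ((pess - opt) / 6) ** 2          # PERT variance
    total_mean += mean
    total_var += var
    print(f"{name}: expected {mean:.1f} days")

# Variances add for independent sub-tasks; standard deviations do not.
print(f"total: {total_mean:.1f} ± {math.sqrt(total_var):.1f} days")
```

The value of the decomposition is less the arithmetic than the conversation it forces about which parts of the task are knowable and which are not.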
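For commandment 3, the outside view can be as simple as a percentile summary of a comparison class of historical tasks, anchored on and then adjusted towards the inside view. In the following sketch the reference-class durations, the inside-view estimate and the blending weight are all assumptions, not prescriptions.

```python
import statistics

# Hypothetical durations, in days, of past tasks judged similar to the one
# being estimated -- the 'comparison class'. All numbers are invented.
reference_class = [4, 6, 7, 7, 9, 12, 15]

outside_view = statistics.median(reference_class)
deciles = statistics.quantiles(reference_class, n=10)
low, high = deciles[0], deciles[-1]   # a rough 80% interval

inside_view = 5.0   # the team's task-specific estimate (illustrative)
weight = 0.5        # trust placed in the outside view; a judgement call

blended = weight * outside_view + (1 - weight) * inside_view
print(f"outside view: {outside_view} days (80% interval {low:.0f}-{high:.0f})")
print(f"blended estimate: {blended:.1f} days")
```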
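Commandment 4’s “move their probability estimates fast in response to diagnostic signals” is, at heart, Bayesian updating. Here is a minimal sketch, with invented probabilities, of how a slipped dependency might move a team’s belief that a task will overrun.

```python
def bayes_update(prior: float, p_signal_if_true: float, p_signal_if_false: float) -> float:
    """Posterior probability after observing the signal (Bayes' rule)."""
    numerator = p_signal_if_true * prior
    return numerator / (numerator + p_signal_if_false * (1 - prior))

# Illustrative numbers: prior belief that the task will overrun is 30%.
# A key dependency has just slipped -- a signal judged four times more
# likely if the task is heading for an overrun than if it is not.
posterior = bayes_update(prior=0.30, p_signal_if_true=0.80, p_signal_if_false=0.20)
print(f"P(overrun) moves from 30% to {posterior:.0%}")  # -> 63%
```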
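Finally, for commandment 10, the Good Judgement Project’s yardstick was the Brier score: the mean squared difference between forecast probabilities and what actually happened. A team that records probabilistic forecasts (“we are 80% confident this will be done by the end of the sprint”) can score itself the same way. The sketch uses the common single-outcome formulation, which runs from 0 (perfect) to 1; the forecasting history is invented.

```python
def brier_score(forecasts):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    0.0 is perfect; always saying 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical record: (P(task done by end of sprint), actually done?)
history = [(0.9, 1), (0.7, 1), (0.8, 0), (0.6, 1), (0.3, 0)]
print(f"Brier score: {brier_score(history):.3f}")
```

Tracked sprint over sprint, the score gives the unambiguous “rolling along smoothly or crash” feedback the commandment asks for.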
Works Cited
Good Judgement Open. (2020, November 6). Welcome to Good Judgement Open. Retrieved from Good Judgement Open: https://www.gjopen.com/
Office of the Director of National Intelligence. (2020, November 6). Aggregative Contingent Estimation. Retrieved from Intelligence Advanced Research Projects Activity: https://www.iarpa.gov/index.php/research-programs/ace
Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. London: Random House.