Why Predictions Fail: 'The Signal and The Noise' by Nate Silver
Dima Syrotkin
CEO Pandatron: AI coach driving organizational performance | Researcher | ACMP Board Member
This book has some interesting points, but for some reason it didn't captivate me, probably because most of the information wasn't immediately applicable. It also goes into considerable depth, which is great if you really want to understand the topic, but can be tedious otherwise.
My grade of the book: B- (using the American grading system). I recommend checking it out, especially if you are interested in the topic.
Below are the top 5 most interesting quotes and ideas I picked from the book.
Top 5: A vast majority of predictions fail
We need to stop, and admit it: we have a prediction problem. We love to predict things—and we aren’t very good at it.
Top 4: Distinguishing the signal from the noise
Distinguishing the signal from the noise requires both scientific knowledge and self-knowledge: the serenity to accept the things we cannot predict, the courage to predict the things we can, and the wisdom to know the difference.
If we make a prediction and it goes badly, we can never really be certain whether it was our fault or not, whether our model was flawed or we just got unlucky. The closest approximation to a solution is to achieve a state of equanimity with the noise and the signal, recognizing that both are an irreducible part of our universe, and devote ourselves to appreciating each for what it is.
Top 3: Watch out for bad incentives
But forecasters often resist considering these out-of-sample problems. When we expand our sample to include events further apart from us in time and space, it often means that we will encounter cases in which the relationships we are studying did not hold up as well as we are accustomed to. The model will seem to be less powerful. It will look less impressive in a PowerPoint presentation (or a journal article or a blog post). We will be forced to acknowledge that we know less about the world than we thought we did. Our personal and professional incentives almost always discourage us from doing this.
Top 2: A major reason that predictions fail is that predictors often don't take model uncertainty into account.
As an example, say a prediction reads: “Next year, GDP will grow by 2.7 percent.” That single number could have come from a model whose actual output was that there is a 90% chance GDP growth will fall between 1.3% and 4.2%. Reporting only the midpoint of that range is misleading: it hides how uncertain the forecast really is. Silver notes that, in practice, the actual GDP figure has fallen outside forecasters' supposedly 90 percent intervals roughly half the time.
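To make the gap concrete, here is a minimal sketch in Python. The forecast draws are made-up illustrative numbers, not data from the book; it simply shows how a single headline figure can be extracted from, and hide, a much wider predictive distribution.

```python
import statistics

# Hypothetical draws from a model's predictive distribution of next year's
# GDP growth (in percent); illustrative numbers only, not from the book.
draws = [0.8, 1.3, 1.7, 2.1, 2.4, 2.7, 3.0, 3.3, 3.7, 4.2, 4.7]

# The headline number reported to the public: a single point estimate.
point_estimate = statistics.mean(draws)

# What the model actually says: a range of plausible outcomes.
# quantiles(n=20) returns the 5th, 10th, ..., 95th percentiles.
cuts = statistics.quantiles(draws, n=20)
lower_90, upper_90 = cuts[0], cuts[-1]

print(f"Headline forecast : {point_estimate:.1f}% growth")
print(f"90% interval      : {lower_90:.1f}% to {upper_90:.1f}%")
```

The point estimate alone sounds precise; the interval makes clear how much the model itself does not know.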
Top 1: Think probabilistically (a Bayesian approach)
Take an example in which breast cancer occurs in 1.4% of women in their forties. That 1.4% is our long-standing “prior probability”. A woman then gets a mammogram, and the result is positive. Mammograms detect breast cancer only about 75% of the time when it is present, and produce a false positive about 10% of the time when it is not. Plugging these numbers into Bayes' theorem, how likely is it that this woman actually has breast cancer? The surprising answer, confirmed by clinical data, is only about 10%. The positive test tempts us into believing that the new evidence outweighs the prior data, when in fact the low base rate of breast cancer among women in their forties (1.4%) still dominates the result.
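The arithmetic is easy to check. Below is a minimal sketch in Python that plugs the figures quoted above into Bayes' theorem; the variable names are my own.

```python
# Checking the mammogram example with Bayes' theorem.
prior = 0.014          # P(cancer): base rate for women in their forties
sensitivity = 0.75     # P(positive test | cancer)
false_positive = 0.10  # P(positive test | no cancer)

# Total probability of a positive test:
# P(positive) = P(pos | cancer) * P(cancer) + P(pos | no cancer) * P(no cancer)
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' theorem: P(cancer | positive) = P(pos | cancer) * P(cancer) / P(positive)
posterior = sensitivity * prior / p_positive

print(f"P(cancer | positive mammogram) = {posterior:.3f}")
```

Running this prints roughly 0.096, i.e. about 10%, matching the figure above: the prior is so low that even a positive test leaves the overall probability small.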
===
Thanks for sticking till the end! I publish book reviews every Wednesday. Are you curious about what I do in my job as a startup CEO?
Panda Training is a Finnish startup. We are passionate about strategy, learning, and human development. We provide a micro-coaching service (human and/or chatbot) that allows companies to drive strategic initiatives and gather data from “the shop floor”. We work with companies like Universal Pictures, SAP, and Cramo. https://panda-training.com