Early in the first chapter of their 2015 book, Superforecasting: The Art and Science of Prediction, Philip Tetlock and Dan Gardner state: “…forecasting is not a ‘you have it or you don’t’ talent. It is a skill that can be cultivated. This book will show you how.” Based on the body of research on forecasting (with a focus on Philip Tetlock’s long-term studies), they conclude that:
From Chapter 1, “An Optimistic Skeptic”: “Unpredictability and predictability coexist uneasily in the intricately interlocking systems that make up our bodies, our societies, and the cosmos. How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances. …Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person. …superforecasting demands thinking that is open-minded, careful, curious, and—above all—self-critical. It also demands…commitment to self-improvement…”
From Chapter 2, “Illusions of Knowledge”: “The key is doubt. Scientists can feel just as strongly as anyone else that they know The Truth. But they know they must set that feeling aside and replace it with finely measured degrees of doubt—doubt that can be reduced (although never to zero) by better evidence from better studies.”
From Chapter 3, “Keeping Score”: “Forecasts must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts. …The many forecasts required for calibration calculations make it impractical to judge forecasts about rare events…”
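To make the scoring concrete, here is a minimal Python sketch of the two kinds of grading Chapter 3 relies on: the Brier score, which the book uses to grade forecasters, and a calibration check comparing stated probabilities with observed frequencies. The forecasts and outcomes are invented for illustration, not data from the book. The bucketing also shows why “we must have lots of forecasts”: with only a handful of forecasts per bucket, observed frequencies are noise.

```python
from collections import defaultdict

# Invented example data: stated probabilities and what actually happened (1/0).
forecasts = [0.9, 0.7, 0.7, 0.2, 0.6, 0.1, 0.8, 0.3]
outcomes  = [1,   1,   0,   0,   1,   0,   1,   0]

# Brier score: mean squared error between probability and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: within each probability bucket, did events happen about as often
# as stated? Sparse buckets are why many forecasts are required.
buckets = defaultdict(list)
for p, o in zip(forecasts, outcomes):
    buckets[round(p, 1)].append(o)
for p in sorted(buckets):
    hits = buckets[p]
    print(f"said {p:.0%}: happened {sum(hits) / len(hits):.0%} ({len(hits)} forecasts)")
```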
From Chapter 4, “Superforecasters”: “…we should not treat the superstars of any given year as infallible… Luck plays a role and it is only to be expected that the superstars will occasionally have a bad year and produce ordinary results—just as superstar athletes occasionally look less than stellar. But more basically, and more hopefully, we can conclude that the superforecasters were not just lucky. Mostly, their results reflected skill.”
From Chapter 5, “Supersmart?”: “…it seems intelligence and knowledge help but they add little beyond a certain threshold—so superforecasting does not require a Harvard PhD and the ability to speak five languages. …For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.”
From Chapter 6, “Superquants?”: “…a central feature of superforecasters: they have a way with numbers. Most aced a brief test of basic numeracy… While superforecasters do occasionally deploy their own explicit math models, or consult other people’s, that’s rare. The great majority of their forecasts are simply the product of careful thought and nuanced judgment. …Thanks in part to their superior numeracy, superforecasters, like scientists and mathematicians, tend to be probabilistic thinkers. …A probabilistic thinker will be less distracted by ‘why’ questions and focus on ‘how.’”
From Chapter 7, “Supernewsjunkies?”: “Superforecasters update much more frequently, on average, than regular forecasters. …the forecaster who carefully balances old and new captures the value in both—and puts it into her new forecast. The best way to do that is by updating often but bit by bit. …What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.”
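The “bit by bit” updating in the Chapter 7 quote maps naturally onto Bayes’ theorem in odds form, where each piece of evidence multiplies the current odds by a likelihood ratio reflecting its weight. Below is a minimal Python sketch with invented numbers; note the book reports that superforecasters rarely do this math explicitly, so this is an illustration of the core insight, not their practice.

```python
def update(prob, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Invented numbers: a 30% initial estimate, revised "bit by bit" as evidence
# arrives; each likelihood ratio encodes the weight of one piece of evidence
# (LR > 1 favors the event, LR < 1 cuts against it).
p = 0.30
for lr in [1.5, 1.2, 0.8, 2.0]:
    p = update(p, lr)
    print(f"after evidence with likelihood ratio {lr}: {p:.2f}")
```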
From Chapter 8, “Perpetual Beta”: “To be a top-flight forecaster, a growth mindset is essential. …superforecasters…are as keen to know how they can do better as they are to know how they did. …The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.”
From Chapter 9, “Superteams”: “On average, when a forecaster did well enough in year 1 to become a superforecaster, and was put on a superforecaster team in year 2, that person became 50% more accurate. An analysis in year 3 got the same result. Given that these were collections of strangers tenuously connected in cyberspace, we found that result startling. Even more surprising was how well superteams did against prediction markets. …superteams beat prediction markets by 15% to 30%. …How did superteams do so well? By avoiding the extremes of groupthink and Internet flame wars. And by fostering minicultures that encouraged people to challenge each other respectfully, admit ignorance, and request help.”
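The 15% to 30% margin is a comparison of accuracy scores (Brier scores, per Chapter 3) between aggregated team forecasts and prediction-market prices. Here is a toy Python comparison, with invented numbers chosen so the margin lands in that quoted range; the study’s actual aggregation methods were more sophisticated than the simple median used here.

```python
from statistics import median

# Invented numbers, chosen so the margin lands in the quoted 15%-30% range.
team_probs = [                     # one row per question, one column per member
    [0.70, 0.72, 0.76, 0.80],      # question 1
    [0.20, 0.24, 0.28, 0.30],      # question 2
]
market_price = [0.70, 0.30]        # prediction-market probabilities
outcome      = [1,    0]           # what actually happened

def brier(ps, os):
    return sum((p - o) ** 2 for p, o in zip(ps, os)) / len(ps)

team = [median(member_probs) for member_probs in team_probs]  # simple pooling
bt, bm = brier(team, outcome), brier(market_price, outcome)
print(f"team Brier {bt:.3f} vs. market Brier {bm:.3f} "
      f"({(bm - bt) / bm:.0%} better)")
```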
From Chapter 10, “The Leader’s Dilemma”: “The humility required for good judgment is not self-doubt—the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.”
From Chapter 11, “Are They Really So Super?”: “What makes them so good is less what they are than what they do—the hard work of research, the careful thought and self-criticism, the gathering and synthesizing of other perspectives, the granular judgments and relentless updating. …But the continuous self-scrutiny is exhausting, and the feeling of knowing is seductive. Surely even the best of us will inevitably slip back into easier, intuitive modes of thinking. …So how long can superforecasters defy the laws of psychological gravity? The answer to that depends on how heavy their cognitive loads are. …Taleb, Kahneman, and I agree there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious… These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. …in the big scheme of things, human foresight is puny, but it is nothing to sniff at when you live on that puny human scale.”
From Chapter 12, “What’s Next?”: “While we may assume that a superforecaster would also be a superquestioner, and vice versa, we don’t actually know that. Indeed, my best scientific guess is that they often are not. The psychological recipe for the ideal superforecaster may prove to be quite different from that for the ideal superquestioner, as superb question generation often seems to accompany a hedgehog-like incisiveness and confidence that one has a Big Idea grasp of the deep drivers of an event. That’s quite a different mindset from the foxy eclecticism and sensitivity to uncertainty that characterizes superb forecasting.”
In summary, investors may find Superforecasting a useful counterweight to the incessant stream of vague and unaccountable financial market forecasts, and thereby an aid in avoiding overreaction to them.
Cautions regarding conclusions include:
- The forecasting described in the book does not measure whether good forecasting is exploitable in an investment sense (that is, whether forecast accuracy translates into market-beating returns).
- As discussed in the book, good forecasting is hard work apparently requiring a high level of numeracy. Obtaining good forecasts may therefore be costly.
- Also as discussed in the book, the outputs of real-world forecasters are often vague (and therefore difficult to grade) and skewed by motives other than accuracy.
For other perspectives, see “Guru Grades” and browse the results of a search on “forecasting”. For depth on foxes versus hedgehogs, see “Expert Political Judgment: How Good Is It? How Can We Know? (Chapter-by-Chapter Review)”.