Superforecasting is the result of decades of research on “superforecasters”: people who can predict future events with an accuracy better than chance. Superforecasters are intelligent, but more importantly, they’re open-minded, deeply curious, and adept at sidestepping their own cognitive biases. Not everyone is cut out to be a superforecaster, but by studying the way superforecasters make predictions, anyone can improve their ability to predict the future.
Superforecasting has two authors: Dan Gardner, a journalist and author of three books on the science of prediction; and Philip Tetlock, a psychologist and pioneering forecasting researcher. Tetlock is the co-founder of two major, research-focused forecasting tournaments: Expert Political Judgment and the Good Judgment Project.
When the authors use the term “forecasting,” they’re referring to formal predictions expressed with numerical probabilities. To appreciate the value of forecasting, we have to frame it the right way. Tetlock learned this the hard way when his pioneering research found that the majority of expert forecasts were no more accurate than chance (which the popular press misinterpreted to mean “forecasting is pointless”). Additionally, the predictions these experts made within their fields of expertise were less accurate than predictions they made outside their fields. In other words, intelligent analysts who invested time and effort into researching the issues were no more able to predict future events than if they’d guessed randomly.
(Shortform note: Why are experts seemingly so inaccurate, even within their own fields? Economics researcher Bryan Caplan points out one possible explanation of this core finding: Tetlock purposefully asked the experts challenging questions about their fields. Caplan surmises that when faced with these questions, experts become overconfident in their predictions, hence why they’re incorrect more often. He argues that if Tetlock had asked questions to which there are already well-established answers within the experts’ field (which Tetlock deliberately didn’t do), the experts’ prediction accuracy would have been higher. Caplan concludes that while forecasters admittedly need to stop being so overconfident in response to challenging questions, people should equally stop claiming that experts are useless at prediction: They can be accurate in the right circumstances.)
The authors argue that contrary to the media’s representation of Tetlock’s research, these results don’t mean that there is no value in forecasting. What Tetlock’s team discovered was that certain kinds of forecasters could make certain kinds of predictions with an accuracy much higher than chance. These forecasters, whom Tetlock calls “superforecasters,” apply a specific methodology to come up with their predictions, and they only make predictions a year or less into the future. Any further out, and accuracy rates drop dramatically. But when superforecasters apply their skills to short-term questions, they’re remarkably accurate. (Shortform note: We’ll be discussing superforecasters, their methods, and their traits extensively in Part 2.)
Forecasting accurately is incredibly difficult, but determining whether a forecast is accurate in the first place presents difficulties of its own. According to the authors, a forecast judged by different standards than the forecaster intended will be deemed a failure, even if it’s not.
For example, in 2007, Steve Ballmer, then-CEO of Microsoft, claimed that there was “no chance” that Apple’s iPhone would get “any significant market share.” In hindsight, this prediction looks spectacularly wrong, but the wording is too vague to truly judge. What did he mean by “significant”? And was he referring to the US market or the global market?
According to the authors, these questions matter because the answers lead us to very different conclusions. Judged against the US smartphone market (where the iPhone commands 42% of the market share), Ballmer is laughably wrong. But in the global mobile phone market (not just smartphones), that number falls to 6%—far from significant. (Shortform note: In 2009, Ballmer admitted to seriously underestimating the iPhone, in effect contradicting the authors: Even Ballmer thinks that the prediction was bad after all.)
Judging the “Worst Tech Predictions” of All Time
Hero Labs, a technology company, compiled a list of 22 of the “worst tech predictions of all time,” including Ballmer’s infamous quip. However, unlike Ballmer’s forecast, most of the other predictions on the list are specific enough to judge. Here’s why:
They use unequivocal language. For example: In 1946, Darryl Zanuck of 20th Century Fox said, “Television will never hold onto an audience.” His use of the word “never” makes it easy to judge this forecast as completely false—in 2019, the television industry was worth $243 billion (and that’s only traditional network television, not including television streaming services like Netflix or Hulu).
They provide a time frame. For example: In 1998, economist Paul Krugman said, “By 2005, it will be clear that the internet’s impact on the global economy has been no greater than the fax machine.” By the end of 2005, [Amazon alone was already...
Philip Tetlock is a Canadian-American author and researcher focusing on good judgment, political psychology, and behavioral economics. He currently teaches at the University of Pennsylvania in the Wharton School of Business as well as the departments of psychology and political science.
In 2011, Tetlock and his spouse, psychologist Barbara Mellers, co-founded the Good Judgment Project, a research project involving more than twenty thousand amateur forecasters. The Project was designed to discover just how accurate the best human forecasters can be and what makes some people better forecasters than others. It led to Tetlock discovering a group of...
In this chapter, we’ll learn about the importance of measuring forecast accuracy, the philosophy that makes forecasting possible, what it means to be a “superforecaster,” and how superforecasters perform compared to computer algorithms.
Tetlock and Gardner argue that to make better forecasts, we have to be able to measure accuracy. Predictions about everything from global politics to the weather are not hard to come by. You find them on news channels, in bestselling books, and among friends and family. According to the authors, most of these predictions have one thing in common: After the event, no one thinks to formally measure how accurate they were. This lack of measurement means that you have no sense of how accurate any particular source usually is. Without that baseline, how do you know who to listen to the next time you need to make a decision?
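The measurement the authors call for can be made concrete with a scoring rule. A standard choice for probabilistic forecasts is the Brier score (the measure Tetlock’s forecasting research used): the average squared gap between each forecast probability and what actually happened. The specific forecasts below are made up for illustration; a minimal sketch in Python:

```python
def brier_score(forecasts, outcomes):
    """Average squared error between forecast probabilities and outcomes.

    forecasts: probabilities assigned to "the event happens" (0.0 to 1.0)
    outcomes:  1 if the event happened, 0 if it didn't
    0.0 is a perfect score; lower is better.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, confident forecaster beats someone who always hedges at 50%:
calibrated = brier_score([0.9, 0.8, 0.1], [1, 1, 0])  # 0.02
coin_flip = brier_score([0.5, 0.5, 0.5], [1, 1, 0])   # 0.25
```

Scoring every forecast this way gives exactly the baseline the paragraph above describes: a track record you can compare across sources before deciding whom to trust.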
The authors note that given how important accurate predictions are, it’s surprising that we have no standard way of measuring their accuracy. Instead, forecasters in popular media deliver their predictions with so much confidence that we take them at...
Tetlock first discovered that some forecasters are more accurate than others thanks to a decades-long study called “Expert Political Judgment” (EPJ). As the authors describe, the results of the EPJ revealed that overall, the average expert’s predictions were no more accurate than chance. But a closer look at the data revealed two subgroups of forecasters: one that did no better (and sometimes much worse) than chance, and another that did slightly better.
The second group just barely surpassed the rate of chance, but even that slight edge statistically differentiated them from the first group. Tetlock named the first group “Hedgehogs” and the second group “Foxes,” based on Isaiah Berlin’s classic philosophy essay entitled “The Hedgehog and the Fox” (the title comes from a line from an ancient Greek poem: “The fox knows many things but the hedgehog knows one big thing”). (Shortform note: “Knowing many things” is similar to the psychological concept of active open-mindedness. Actively open-minded thinkers are willing to consider other people’s ideas and opinions instead of clinging to their own point of view. As...
By nature, we all lean more toward either fox or hedgehog thinking. This exercise will help you reflect on the way you think and identify biases that might be holding you back.
You may have recognized parts of yourself in the descriptions of hedgehogs and foxes. Overall, would you describe yourself as a hedgehog, a fox, or a hybrid? Why?
The authors concede that it would be reasonable to assume that superforecasters are just a group of geniuses gifted at birth with the power to see the future. Reasonable, but wrong. What makes superforecasters truly “super” is the way they use their intelligence to approach a problem. In the next few chapters, we’ll explore the specific mental tools that make superforecasters so accurate.
According to the authors, the reason superforecasters make such accurate predictions is that they’re adept at avoiding cognitive biases. We’re all prone to certain cognitive biases that stem from unconscious thinking, or what psychologists like Tetlock often describe as “System 1” thinking. These biases skew our judgment, often without us even noticing. Superforecasters constantly monitor and question their System 1 assumptions. (Daniel Kahneman describes the two-system model of thinking in Thinking, Fast and Slow; System 2 governs conscious, deliberate thinking, while System 1 functions automatically and unconsciously.)
(Shortform note: Kahneman argues that [System 1 is also prone to making...
We all use both System 1 and System 2 every day. This exercise highlights the way you draw on each of these in different ways.
See if you can answer the following question: “A bat and ball together cost $1.10. The bat costs a dollar more than the ball. How much does the ball cost?”
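Try it before reading on: the intuitive System 1 answer ($0.10) is wrong, because then the bat would cost $1.10 and the pair $1.20. If you want to check your answer, the deliberate System 2 arithmetic can be written out:

```python
# Let ball = x. Then bat = x + 1.00, and the pair costs 2x + 1.00 = 1.10.
ball = (1.10 - 1.00) / 2  # x = 0.05
bat = ball + 1.00

# Verify both conditions from the question:
assert abs((ball + bat) - 1.10) < 1e-9  # together they cost $1.10
assert abs((bat - ball) - 1.00) < 1e-9  # the bat costs a dollar more

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

The point of the exercise is that the check takes only a moment of deliberate thought, yet System 1 confidently skips it.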
Thinking like a superforecaster requires coming up with new perspectives. Let’s practice creating multiple perspectives on a single issue.
Think of a hot-button issue that you personally don’t have strong feelings about (for example, if you’re not a sports fan, you could think about steroid use in professional athletes; if you don’t follow any particular diet, you could think about the ethics of eating meat). What would you think about this issue if you were passionately opposed to it? What would be your reasoning?
Despite all the emphasis on technique over talent, the fact that superforecasters work with such minute numerical details might lead you to believe that superforecasters are secretly an elite class of mathletes. According to the authors, you wouldn’t be completely wrong—superforecasters are almost universally skilled with numbers. What’s surprising is that forecasting very rarely requires higher-level math skills.
(Shortform note: As an example, one superforecaster argued that he and his teammates never explicitly use Bayes’ rule (a mathematical equation for updating predictions) in their forecasts. However, Tetlock countered that while they may not actually whip out the equation for any particular problem, superforecasters are numerate enough to understand the principles of Bayes’ theorem better than most people. We’ll learn more about superforecasters’ Bayesian thinking in Chapter 7.)
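The note above can be made concrete. Bayes’ rule weighs a prior belief against how likely the new evidence would be under each possibility. The scenario and numbers below are made up for illustration (the book doesn’t use this example); a minimal sketch:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical forecast: "Candidate X wins." Prior belief: 40%.
# New evidence (a favorable poll) is assumed three times as likely
# if the candidate is winning (60%) as if they are not (20%).
posterior = bayes_update(0.40, 0.60, 0.20)
print(round(posterior, 2))  # 0.67
```

Note how the evidence moves the forecast from 40% to about 67% rather than to certainty—the kind of incremental, proportionate updating the authors attribute to superforecasters, even when they never write the equation down.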
So what does superforecasters’ superior numeracy have to do with their success? According to the authors, superforecasters are probabilistic thinkers. This goes beyond just phrasing their...
Most of us don’t naturally think like superforecasters. This exercise is a chance to examine your own thinking style and practice embracing uncertainty.
Life is full of uncertainties. Think of a recent time when you made a decision and were unsure what the outcome would be. What was the decision, and what were the two most likely possible results?
In this chapter, we’ll discuss another technique that the authors argue superforecasters use: the “outside-in” approach to forecasting. We’ll also discuss how superforecasters update their predictions based on new information they encounter after making their initial forecast. In IARPA-style tournaments, forecasting questions typically remain open anywhere from a few weeks to a few months, and forecasters can adjust their forecasts as often as they like during that window. The authors argue that these adjustments are crucial since a forecast that doesn’t take all the available data into account will likely be less accurate than an up-to-date forecast. (Shortform note: In Smarter Faster Better, author Charles Duhigg interviews poker master Annie Duke, who argues that updating her beliefs about her opponents (instead of sticking to her initial assumptions) helps her avoid prejudice, which in turn makes her less likely to underestimate her opponents.)
Tetlock and Gardner argue that when superforecasters first encounter a question, they begin by looking at the wide perspective of that...
By now, we’ve learned that superforecasters are smart, numerate, well-informed, and actively open-minded. All of these traits make them fantastic forecasters, but according to the authors, what sets superforecasters apart from other forecasters more than anything else is their persistent commitment to self-improvement, which we’ll discuss in this chapter.
Forecasting involves quite a bit of failure because forecasters are asked to predict the unpredictable. While no one enjoys being wrong, the authors argue that superforecasters are more likely than regular forecasters to see their failures as an opportunity to learn and improve. Educational psychologists call this a “growth mindset.” People with a growth mindset believe that talent and intelligence can be developed through learning and practice.
The idea behind the growth mindset seems intuitive, but in practice, the authors report that most of us gravitate towards a “fixed mindset” instead. The fixed mindset tells us that talent and intelligence are traits we’re born with, so practice can only strengthen the natural abilities that are already there.
A growth mindset doesn’t always come naturally, but the more you practice, the easier it gets. This exercise will get you started with pushing past frustration.
Think of a project in your work or personal life where you feel “stuck” (if this doesn’t apply to a current project, try to think of a time you felt this way in the past). What about this particular project makes it difficult?
Tetlock and Gardner note that in forecasting tournaments, superforecasters work in teams to create forecasts. The way these teams operate can make or break their predictions. The authors argue that these outcomes are more than the sum of each team member—efficient forecasting teams can reach a level of accuracy that no individual forecaster can reach on their own. (Shortform note: Superforecasters are such natural team players that grouping them into teams reduces the impact of bias on their forecasts even more than specific anti-bias training does.)
In this chapter, we’ll discuss the factors that make a good forecasting team, as well as the attributes that superforecasters have that make them team players.
Tetlock and Gardner argue that one of the biggest challenges for a successful team is to avoid groupthink, or the phenomenon in which well-bonded teams gradually lose sight of critical thinking. When groupthink takes hold, group members no longer challenge each other’s assumptions. Instead, they unconsciously develop a shared worldview, and...
When everyone feels safe enough to speak their mind, the whole group benefits. Learn how you can apply this to your own life.
Think of a group that you work with frequently (this could be a work team, a social group, or even your family). How does this group usually handle disagreements?
Superforecasting is an impressive skill, and certain superforecaster traits can turn good leaders into great ones. But Tetlock and Gardner caution that human brains are not built for objective analysis. As we’ve noted, even the best superforecasters are not immune to cognitive bias. So what is the point of developing superforecasting abilities if we are always one unchecked assumption away from being completely wrong? In this chapter, we’ll explore that question.
According to the authors, the value of any kind of forecasting is predicated on the assumption that it’s possible to predict meaningful future events in the first place. This is not a universally accepted idea.
One strong critic of superforecasting is author and former Wall Street trader Nassim Taleb, who argues that the only truly important events in the course of history have been completely unpredictable. He calls these “black swan events.” (Shortform note: In The Black Swan, Taleb describes another characteristic of black swan events that Tetlock and Gardner...
Let’s test the theory that unpredictable events make a bigger impact than gradual change.
Think of a black swan event in your own life (remember, black swans are events that seem out-of-the-blue and have lasting consequences). This could be an event that strongly impacted your life in positive, negative, or neutral ways. Briefly describe the event and its most important consequences.
It’s clear by now that forecasting is important, even if there is debate about the relative importance of predictable events. The authors believe that forecasting tournaments are particularly important because they provide opportunities for superforecasters to sharpen their skills and for researchers to test theories about what makes some forecasters more accurate than others.
But forecasting is not always about accuracy. In reality, forecasters (especially those in the public eye) may have other goals for their forecasts. If the goal is to provoke a person or group or to draw attention to a cause, being right is an afterthought. (Shortform note: We can see this in the case of doomsday predictions. Many of the people who predicted the end of the world on a particular date were religious leaders who were more concerned with attracting followers to their cause than with the accuracy of their predictions.)
According to the authors, the field of forecasting is facing another challenge in addition to concerns about accuracy: Namely, the idea that the questions people really care...