This article is an excerpt from the Shortform book guide to "Superforecasting" by Philip E. Tetlock. Shortform has the world's best summaries and analyses of books you should be reading.
Like this article? Sign up for a free trial here.
What is probabilistic thinking? How do probabilistic thinkers approach problems differently from an average person?
Probabilistic thinking is an approach to predicting the outcome of a situation or the likelihood of a future event. Most people estimate probabilities in binary terms: "yes" or "no." In contrast, probabilistic thinkers think in percentages.
Keep reading to learn about the psychology of probabilistic thinking.
What Is Probabilistic Thinking?
If you were asked to predict whether a certain event would happen in the future, you’d probably respond with one of three answers: yes, no, or maybe. Most people’s mental “probability dial” has three distinct settings. By contrast, probabilistic thinkers have an unlimited number of settings. They’re more likely to answer questions in terms of percentages rather than “yes” or “no.”
The Two- (or Three-) Setting Dial
There is a good reason that most of us are not natural probabilistic thinkers. For most of human history, the three-setting dial was reduced even further to two settings (“yes” or “no”). For our ancestors, this was an advantage. Early humans lived in a world where predators were a constant threat—but our brains and bodies aren’t designed for perpetual vigilance, and stress wears us down over time. Snap judgments became an evolutionary life hack: While the probabilistic thinkers were fretting over the likelihood that a strange noise came from a predator, the concrete thinkers had already landed on an answer and responded accordingly.
Whether or not a real threat is present, a correct guess has a distinct advantage. If the sound did come from a predator, the concrete thinkers have more time to prepare to fight or flee; if it didn't, they can relax and conserve the cognitive resources their probabilistic peers spend on worrying. Even a false positive (deciding there is a predator when there isn't) is relatively harmless. Only a miss (there is a predator, but you decide there isn't) would put the concrete thinkers in real trouble.
But what about “maybe”? For life-or-death decisions, “maybe” is not particularly helpful. Most of us use “maybe” as a last resort, only when the odds are roughly even and the stakes are low. The uncertainty that comes with a “maybe” answer is intuitively unsettling, possibly because we have evolved to associate uncertainty with danger. We settle for maybe only when we’re forced to—usually for probabilities roughly between 40% and 60%. Anything higher is “yes,” anything lower is “no.”
Does this sound wrong or overly reductive? There’s a reason for that. We do much better when we encounter probabilities as abstractions (random numbers with no context, like the 40% and 60% in the paragraph above). But in real-life situations, most of us revert to that intuitive two-setting dial. For example, when you check the weather forecast and see “80% chance of rain,” do you think about the 20% chance of clear skies? Or do you grab your umbrella and carry on with your day assuming that it will rain?
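To make the dial concrete, here is a minimal sketch in Python (the function is illustrative; the ~40% and ~60% cutoffs are the rough thresholds described above) of how the intuitive dial collapses a continuous probability into three settings:

```python
def three_setting_dial(probability: float) -> str:
    """Collapse a continuous probability onto the intuitive
    three-setting dial: 'no' below ~40%, 'maybe' between
    ~40% and ~60%, and 'yes' above ~60%."""
    if probability > 0.60:
        return "yes"
    if probability < 0.40:
        return "no"
    return "maybe"

# The 80% rain forecast from the example above lands on "yes" --
# which is why most of us just grab the umbrella.
for p in (0.05, 0.45, 0.80):
    print(f"{p:.0%} -> {three_setting_dial(p)}")
# 5% -> no, 45% -> maybe, 80% -> yes
```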
Probabilistic Thinking and Uncertainty
Human minds crave certainty. Uncertainty creates anxiety: somewhere deep in the brain, we still interpret uncertainty as the chance that a lion is right behind us. Researchers have tested this by asking parents how much they would hypothetically pay for a treatment that would reduce their child's risk of contracting a serious illness either from 10% to 5% or from 5% to 0%. Even though both reductions are the same size (five percentage points), parents were willing to pay up to three times more to reduce the risk from 5% to 0%. Certainty is priceless.
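To see why this preference is about certainty rather than size, here is a quick numeric check (a hypothetical sketch; only the 10%/5%/0% risk figures and the "three times more" finding come from the study described above):

```python
# Two hypothetical treatments for the scenario described above.
cases = {
    "10% -> 5%": (0.10, 0.05),
    "5% -> 0%": (0.05, 0.00),
}

for label, (before, after) in cases.items():
    print(f"{label}: absolute risk reduction = {before - after:.0%}")
# Both lines print 5%: the reductions are identical in size,
# yet parents reportedly paid up to three times more for the
# one that ends in certainty.
```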
The problem is that certainty is also very rare. Few things have either a 0% or 100% chance of happening. But claims of certainty are common and powerfully compelling. If someone claims that something will definitely happen or that a treatment will definitely work, we equate that confidence with accuracy (there is a small positive correlation between the two, but we estimate it to be much larger). This means we’re much more likely to trust confident people—but because certainty is so rare, they’re also much more likely to be wrong.
This skewed perspective impacts the way we think about science. Many people think of science as the ultimate pillar of truth and treat scientific facts as facts, full stop. In this worldview, scientists are the ultimate heroes in the fight against uncertainty. We tend to forget that at one point, the scientific consensus held that the sun revolved around a stationary Earth.
(Scientists themselves are partly to blame here, as they often do speak in terms of certainties. They can do this because, among scientists, there is a general understanding that “this is fact” should be taken to mean “this is the conclusion that the evidence supports, but there is still a tiny possibility that it could be wrong.” But among laypeople, the word “fact” means an absolute truth.)
Two- and three-setting mental dials helped our species survive into the modern era, and they are still helpful when snap decisions are necessary. But thankfully, most of the judgments we make on a daily basis are not immediate, life-or-death decisions. For everything else, the most accurate answer is “maybe.” This is where probabilistic thinkers shine—they see “maybe” not as an answer in itself but as an infinite range of possible answers.
The Hunt for bin Laden
History provides a useful example of the tension between probabilistic and concrete thinking. In 2011, the US intelligence community (IC) identified a compound in Pakistan that analysts suspected housed Osama bin Laden, the terrorist responsible for the 9/11 attacks. There was enough evidence to suggest that bin Laden was in the compound, but not enough for analysts to be completely certain. Leaders from several IC agencies gathered to debate the evidence and come up with a forecast that would ultimately be presented to President Barack Obama, who would decide whether to authorize a raid on the compound.
Although different accounts of that historic conversation vary on the details, we know that multiple IC analysts presented their personal confidence levels to the president based on the data available. These confidence levels ranged from 30% to 95%. Averaged together, the group was around 70% certain that the man in that compound was bin Laden. President Obama responded, “This is fifty-fifty. Look guys, this is the flip of the coin.”
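For illustration, here is roughly what that aggregation looks like (the individual estimates below are invented; published accounts report only the 30% to 95% range and the roughly 70% average):

```python
# Hypothetical individual confidence levels spanning the reported
# 30%-95% range; only the range and the ~70% average come from
# published accounts of the meeting.
estimates = [0.30, 0.60, 0.75, 0.80, 0.90, 0.95]

average = sum(estimates) / len(estimates)
print(f"Group average: {average:.0%}")  # -> Group average: 72%
```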
Given the context, it’s unlikely that the president meant there was a literal 50% likelihood of bin Laden being in the compound. More likely, he used “fifty-fifty” as a synonym for “maybe.” This illustrates the fundamental difference between probabilistic and concrete thinking: While the analysts were adjusting percentages, the president focused on accepting the inherent uncertainty of the situation and moving on. His three-setting dial was set to “maybe,” and without enough evidence to move it to “yes” or “no,” it was no longer useful to him.
Another possible explanation is that President Obama mistook the lack of consensus among the analysts for a sign of unreliability. Agreement gives the illusion of certainty: we believe that if everyone in the group comes to the same conclusion, it must be the right conclusion. This is not only untrue but also a dangerous sign of groupthink. Think back to the wisdom of the crowd: Everyone has a different piece of the puzzle. That diversity is to be expected, and it's what makes the collective answer stronger.
Chance vs. Fate
For most of us, the tension between probabilistic thinking and concrete thinking is rarely as dramatic as deciding whether to launch a raid that may or may not end a ten-year hunt for a notorious terrorist. But the number of settings on our mental probability dials still has important consequences, both for our future decisions and for the way we interpret the past.
Probabilistic thinking doesn’t just apply to future events. When we look back at the events in our own lives, our instinctive pattern-recognition skills kick in, connecting the dots between past experiences. Ultimately, what we’re looking for is meaning. This is a deeply human instinct—being able to find a sense of meaning in our lives is an adaptive skill that can even help us survive traumatic events.
Meaning is a fundamental need because it provides a sense of certainty. We loathe uncertainty and struggle to understand randomness—finding an explanation for seemingly random events is soothing. If there was a meaningful pattern all along, then maybe we aren’t just vulnerable little beings in a terrifyingly random universe where anything can happen. Meaning makes us feel safe.
But safety and certainty are the opposite of probabilistic thinking. Truly probabilistic thinking involves being able to look at our own past as a series of random events, any one of which may have drastically changed our lives if it had gone differently. In that paradigm, nothing is “meant to be.” Confronting that idea is scary, and it quickly separates the probabilistic thinkers from the meant-to-be thinkers.
- For example, imagine you asked a crowd of married people to reflect on all the little “what if” scenarios in their lives that, if they had gone differently, would have prevented them from meeting their spouse. This is called counterfactual thinking. Meant-to-be thinkers will add up all the things that could have gone differently and determine that meeting their spouse was “destiny” precisely because it was so unlikely. But probabilistic thinkers will look at the same series of events and simply think, “Wow, lucky me!”
- That conclusion seems flippant but is actually highly logical. If you trace the thinking of the "meant-to-be" group, it creates a paradox: "Given how many things could have gone differently, the probability of us meeting was extremely small, maybe a fraction of a percent. But we did meet. This means our meeting was predestined. And things that are destined to happen have a probability of 100%." The arithmetic behind this paradox is sketched below.
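For the curious, here is the paradox in numbers (a minimal sketch with invented probabilities): chain enough independent "what ifs" together and the joint probability of any particular life story shrinks toward zero, which is exactly the number the meant-to-be thinker reinterprets as 100%.

```python
import math

# Invented probabilities for each "what if" link in the chain:
# chose that college, took that job, went to that party, ...
what_ifs = [0.5, 0.3, 0.2, 0.4, 0.25, 0.1]

# If the links are independent, the joint probability is the product.
joint = math.prod(what_ifs)
print(f"Chance of every link holding: {joint:.4%}")  # -> 0.0300%

# The meant-to-be reading flips this tiny number into certainty
# ("it happened, so it was destined" -- probability 100%); the
# probabilistic reading keeps the number as-is: "Wow, lucky me!"
```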
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Philip E. Tetlock's "Superforecasting" at Shortform.
Here's what you'll find in our full Superforecasting summary :
- How to make predictions with greater accuracy
- The 7 traits of superforecasters
- How Black Swan events can challenge even the best forecasters