This article is an excerpt from the Shortform book guide to "Algorithms to Live By" by Brian Christian and Tom Griffiths.
How do you make a decision when everything is uncertain? Is there a way to predict the most likely outcome of a decision?
One of the biggest obstacles preventing us from making good decisions is our inability to reliably predict the future. For example, it’s easy to decide whether or not to quit your job if you know you’ll get a raise within the next three months. As it is, the uncertain world prevents us from making decisions with confidence.
Here is how to make good decisions in the face of uncertainty.
Making Decisions in the Face of Uncertainty
According to Christian and Griffiths, the authors of Algorithms to Live By, all random events fall into one of three categories, and by first determining which kind of event you’re dealing with, you have a much greater chance of predicting its outcome and making a better decision in the face of uncertainty. A random event can have:
- A normal, or “Gaussian” distribution
- A power-law distribution
- An Erlang distribution
We’ll take a look at each of these in turn, explaining when and how to use them in your decision-making:
Rule #1: If it’s a normal distribution, predict the average, then adjust.
As Christian and Griffiths explain, events that follow a normal distribution create a “bell curve”—the overwhelming majority of outcomes fall within a small range, with rare extremes falling on either side of that range. Shoe size, IQ, and human height all follow a normal distribution.
Normal distributions are the easiest to predict. Christian and Griffiths advise that since most outcomes cluster tightly around the average, you should simply predict the average, and you’ll be right (or very close) most of the time. For example, if you’re trying to guess someone’s IQ, you should just estimate the average score of 100.
If evidence indicates that an outcome might not fit the average, you should adjust your prediction while still keeping it heavily weighted toward the average. To expand on our example, if you discover that someone has a PhD, you might reasonably guess that they have an IQ of around 110.
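To make Rule #1 concrete, here’s a rough Python sketch of “predict the average, then adjust.” The average IQ of 100 comes from the example above; the blending weight and the “a PhD suggests roughly 130” figure are our own illustrative assumptions, not numbers from the book.

```python
# A minimal sketch of Rule #1: predict the average, then adjust toward the
# evidence while staying heavily weighted toward the mean. The blending weight
# and the evidence figure are illustrative assumptions, not values from the book.

POPULATION_MEAN_IQ = 100  # average IQ score

def predict_iq(evidence_estimate=None, weight_on_evidence=1 / 3):
    """With no evidence, predict the population average. With evidence,
    blend it in, but keep the prediction anchored to the mean."""
    if evidence_estimate is None:
        return POPULATION_MEAN_IQ
    return (1 - weight_on_evidence) * POPULATION_MEAN_IQ + weight_on_evidence * evidence_estimate

print(round(predict_iq()))                        # no evidence: guess 100
print(round(predict_iq(evidence_estimate=130)))   # "has a PhD": guess ~110
```

The exact weight doesn’t matter here; the point is that the evidence only nudges the estimate, while the average does most of the work.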
Rule #2: If it’s a power-law distribution, put more weight on existing evidence.
In contrast, Christian and Griffiths explain that events that follow a power-law distribution don’t cluster around an average outcome. Instead, a few outcomes are so extreme that calculating the average of the entire group is unlikely to accurately describe any individual. Website views, record sales, and individual income follow a power-law distribution—there will consistently be a few websites, records, and individuals who vastly outpace the rest of the herd.
Christian and Griffiths assert that the more extreme a sample initially seems, the more extreme you can expect it to be in the future. This is the most important thing to remember when attempting to predict events that follow a power-law distribution.
For example, imagine you were asked to predict the likelihood of a YouTube video hitting a million views. A video with 40,000 views is far more likely to reach a million than one with 400 views. Because outcomes vary so wildly, power-law distributions require you to rely far more heavily on the observable data for your predictions than normal distributions do.
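As a rough illustration of how much more the evidence matters here, the sketch below applies a multiplicative rule: scale whatever you’ve observed so far by a constant factor. The multiplier of 2 is an arbitrary stand-in (the right factor depends on the particular power-law), but the contrast with the normal case is the point: the prediction moves in proportion to the evidence instead of snapping back toward an average.

```python
# Illustrative multiplicative rule for a power-law quantity: predict the final
# value by scaling up what you've already observed. The multiplier of 2 is an
# assumption for illustration; the appropriate factor varies by domain.

GROWTH_MULTIPLIER = 2

def predict_final_views(current_views):
    """The prediction scales directly with the evidence observed so far."""
    return GROWTH_MULTIPLIER * current_views

for views in (400, 40_000):
    print(f"{views:>6} views so far -> predict about {predict_final_views(views):,}")
```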
The Copernican Principle
Typically, it’s difficult to make accurate predictions about power-law distributions without a good amount of background knowledge. For example, to accurately predict how many views a YouTube video will end up with, you’ll at least need to know the rate at which its views typically grow, if not more specific details about the content itself. However, according to Christian and Griffiths, the exception is when you’re predicting how long something will last.
Christian and Griffiths offer up a helpful rule of thumb called the Copernican Principle: When predicting the longevity of something that follows a power-law distribution, multiply its current age by two.
The Copernican Principle states that we’re unlikely to be observing something at a special point in its lifespan—for example, the first or last year of a nation that will last a thousand years. Instead, on average, we arrive about halfway through any phenomenon’s life.
We can use this principle to predict the lifespan of anything that follows a power-law distribution: anything that could last for either an extremely long time or an extremely short one (for instance, companies, technologies, customs, and nations). A start-up that was founded a month ago is likely to last only about a month more. One that was founded four years ago has proven itself to be viable and is likely to last for around four more years. A company founded a hundred years ago has become a mainstay, and it’s likely to be around for a hundred more.
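Here’s a small simulation (our own sketch, with lifespans drawn from a heavy-tailed Pareto distribution purely for illustration) that shows why the doubling rule is sensible: if you observe something at a uniformly random moment in its life, then on average, twice its current age neither overshoots nor undershoots its true lifespan.

```python
import random

# Illustrative check of the Copernican Principle's "multiply current age by two"
# rule. Lifespans are drawn from a heavy-tailed (Pareto) distribution purely for
# illustration; the observer arrives at a uniformly random moment in each life.

random.seed(42)

errors = []
for _ in range(100_000):
    total_lifespan = random.paretovariate(3)          # assumed power-law lifespan
    current_age = random.uniform(0, total_lifespan)   # we show up at a random moment
    prediction = 2 * current_age                      # Copernican estimate
    errors.append(prediction - total_lifespan)

# The average error hovers near zero: doubling the observed age doesn't
# systematically over- or under-estimate the true lifespan.
print(f"average prediction error: {sum(errors) / len(errors):+.3f}")
```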
The Copernican Principle as a Judge of Quality

In Skin in the Game, Nassim Taleb writes extensively on an important aspect of the Copernican Principle that Christian and Griffiths don’t cover. Taleb argues that the Copernican Principle can be used to judge the quality and effectiveness of anything nonperishable—in fact, he is adamant that it is the only way to objectively judge quality. (In his book, Taleb refers to the Copernican Principle as the “Lindy effect,” named after Lindy’s deli in New York, where Broadway actors realized you could accurately predict that the oldest shows would last the longest.)

Taleb argues that in systems where the least effective components are eliminated, such as a competitive market, the fact that something has lasted a long time means that it’s good at accomplishing its purpose. This means that it will continue to last because it’s proven to be well-designed. By this logic, you can use the Copernican Principle in your own life to predict the quality of nearly anything. For example, if a movie is still famous, the older it is, the better it probably is.

Taleb asserts that the Copernican Principle’s test of time is the only trustworthy judge of quality—he’s skeptical of all subjective human judgment. In his eyes, humans rate quality based on superficial, often misleading appearances. Only after time has swept away everything without lasting quality can we objectively know what is effective.
Rule #3: If it’s an Erlang distribution, ignore observable evidence.
Events that follow an Erlang distribution are equally likely to occur at any moment, regardless of what has come before. Christian and Griffiths explain that strings of entirely independent events, such as consecutive spins on a roulette wheel or callers to a customer service line, follow an Erlang distribution.
In contrast to the other probability distributions, observable data in an Erlang distribution should have no impact on your predictions. Christian and Griffiths note that you can calculate the outcome that is most likely to occur—for example, you can predict that a roulette wheel should land on black in the next one or two spins—but this prediction should never change, no matter what the evidence seems to indicate. Even if a roulette wheel hits red ten times in a row, you should still predict that it will hit black in the next one or two spins—the probability of each outcome stays the same, no matter what.
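To see why the evidence shouldn’t move this prediction, here’s a short sketch (our own illustration, assuming American-roulette odds of 18 black pockets out of 38). The estimated chance of hitting black within the next two spins is the same whether or not you just watched ten reds in a row, because every spin is independent of the ones before it.

```python
import random

# Illustration of why past spins shouldn't change the prediction. Assumes
# American-roulette odds (18 black pockets out of 38) purely for the example.

P_BLACK = 18 / 38
random.seed(0)

def black_within_two_spins():
    """Spin up to twice; report whether black came up."""
    return random.random() < P_BLACK or random.random() < P_BLACK

trials = 200_000
hits = sum(black_within_two_spins() for _ in range(trials))

# The wheel's history never enters this calculation: each spin is independent,
# so the estimate is the same after ten reds as after ten blacks.
print(f"estimated P(black within two spins): {hits / trials:.2f}")
print(f"theoretical value:                   {1 - (1 - P_BLACK) ** 2:.2f}")
```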
Christian and Griffiths explain that many people mistakenly apply normal or power-law distribution prediction strategies to games like roulette that follow an Erlang distribution. If they expect that their winnings will equalize toward an average (normal distribution), they’ll continue playing after losing money, convinced that their luck is due to turn around. On the other hand, if they expect that big wins indicate a streak of luck that is likely to continue (power-law distribution), they’ll seek to capitalize on “hot streaks” and continue playing after wins.
(Shortform note: The former irrational strategy is known in psychology as the “gambler’s fallacy,” while the latter is known as the “hot hand fallacy.”)
Rule #4: If you’re unsure, go with your gut.
Christian and Griffiths find that people do a remarkably good job of internalizing the differences between distribution categories and instinctively applying the appropriate prediction strategy. Studies show that people can make estimates that come extremely close to the results of large-scale data analysis.
Christian and Griffiths interpret this to mean that humans are extremely good at accumulating knowledge that will help us make better predictions in the future, and we do so automatically. With this in mind, if you’re unable to identify a distribution pattern, trust your instincts for a fairly accurate prediction.
However, we shouldn’t blindly trust our instincts, as they’re far from infallible. Christian and Griffiths warn that warped beliefs and false perceptions can easily lead us to inaccurate predictions. For example, some argue that social media distorts life to seem far more exciting and carefree than it actually is since people typically only post the highlights of their lives. In theory, this causes users to instinctually predict that their lives will also be continually exciting and carefree (and become bored and dissatisfied when this prediction inevitably doesn’t come true).
How to Make Good Decisions in the Face of Uncertainty: Should You Trust Your Intuition?

Christian and Griffiths conclude that human instinct is surprisingly rational and precise, despite the fact that it operates entirely subconsciously. Other experts agree, advocating for a greater reliance on gut instinct in disciplines that typically eschew it.

For instance, psychologist Gerd Gigerenzer argues that intuition is one of the most practical tools at our disposal to help us navigate the complex, unpredictable world. Intuition is for the most part based on heuristics—extremely simple rules of thumb that we follow either consciously or unconsciously. Even if we don’t understand them, our gut feelings are based on some kind of internal logic, and we can comfortably rely on these heuristics to guide us in times of uncertainty.

Gigerenzer agrees with Christian and Griffiths that intuition isn’t infallible—however, he asserts that in many cases, it’s the best option we have. Often, intuition is far less error-prone than complex data-driven decision-making models. Because these models require strings of exact calculations, they offer many more opportunities for error than a simple heuristic does. For example, German tennis amateurs were able to predict the outcomes of the Wimbledon tournament better than complex algorithms did, simply by guessing that whichever player they recognized would win.