How good are you at predicting the outcomes of your decisions? What’s a simple way to improve this skill?
In Doing Good Better, William MacAskill uses the concept of expected value to help readers make more informed decisions about their altruistic efforts. He explains how this principle, commonly used in betting and poker, can be applied to career choices and charitable giving.
Read on to discover how to calculate expected value to make better decisions.
Calculating Expected Value
The effective altruism principles of maximizing the net benefit and assessing the counterfactual both assume that you can predict the consequences of every course of action. But MacAskill points out that this assumption is often false—if you go into cancer research, for example, it’s impossible to know whether your research will yield a massive breakthrough. For this reason, he recommends that you calculate the expected value of your possible actions to decide which to perform. He uses practical examples to explain how to calculate expected value.
MacAskill explains that the concept of expected value is standard in betting, where it allows bettors to decide which bets to take. For example, imagine that someone offers you a bet on a fair coin flip: if it lands on heads, they'll pay you $200, and if it lands on tails, you pay them $100. To calculate the expected value, you multiply each outcome by the probability that it occurs, then subtract the expected money lost from the expected money won. Since each outcome has a 50% chance, your expected value is ($200 x 0.5) – ($100 x 0.5) = $50. In other words, taking the bet is worth $50 to you on average, so you should take it.
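For readers who like to see the arithmetic spelled out, here's a minimal Python sketch of that same calculation (the 50% probability and the $200/$100 payoffs are the hypothetical figures from the bet above, not real odds):

```python
# Expected value of the coin-flip bet described above:
# a 50% chance to win $200 (heads) and a 50% chance to lose $100 (tails).
p_heads = 0.5
payout_win = 200
payout_loss = 100

expected_value = (p_heads * payout_win) - ((1 - p_heads) * payout_loss)
print(expected_value)  # 50.0 -> a positive expected value, so the bet is worth taking
```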
(Shortform note: In a similar vein, the notion of expected value is central to poker theory. Poker experts note that, to decide whether to bet on a given hand—and if so, how much—it's crucial to calculate the hand's expected value. For example, if you determine that your hand has negative expected value (meaning that you'll lose money on average if you play it), it likely makes sense to fold.)
In the context of effective altruism, calculating expected value can help you decide between risky careers. For example, imagine you’re choosing between becoming a cancer researcher and a climate change researcher. You might think that, if you become a cancer researcher, your chance of a massive breakthrough (say, one that yields one million QALYs) is about 1%, and the other 99% of the time you’ll have a modest career that saves 1,000 QALYs. By contrast, you might think that if you become a climate change researcher, there’s a 2% chance of a massive breakthrough (say, one that saves 600,000 QALYs), but the other 98% of the time you’ll make no progress (0 QALYs).
The expected value of becoming a cancer researcher is (0.01 x 1,000,000 QALYs) + (0.99 x 1,000 QALYs) = 10,990 QALYs. Meanwhile, the expected value of becoming a climate change researcher is (0.02 x 600,000 QALYs) + (0.98 x 0 QALYs) = 12,000 QALYs. So, this (oversimplified) calculation suggests that it's better to become a climate change researcher, since that route has the higher expected value.
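The same arithmetic generalizes to any list of outcomes. Here's a short Python sketch that reuses the hypothetical QALY figures from the career example above (the `expected_value` helper is just an illustration, not anything from the book):

```python
def expected_value(outcomes):
    """Sum each payoff weighted by its probability."""
    return sum(probability * payoff for probability, payoff in outcomes)

# Hypothetical figures from the career example above, measured in QALYs.
cancer_research = expected_value([(0.01, 1_000_000), (0.99, 1_000)])
climate_research = expected_value([(0.02, 600_000), (0.98, 0)])

print(cancer_research)   # 10990.0
print(climate_research)  # 12000.0 -> the higher expected value of the two
```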
(Shortform note: Although expected value can inform your decision of which career to pursue, effective altruists are careful to note that expected value isn’t the only relevant consideration. For example, they argue that you should normally avoid intentionally doing harm in your career, even if a harmful action has positive expected value—you shouldn’t work in an immoral but lucrative field simply to donate your salary to charity, for instance.)