What is effective altruism? How might it help you make the biggest positive impact on the world?
In Doing Good Better, William MacAskill explores the concept of effective altruism. He outlines key principles for making informed decisions about how to use your resources to help others most effectively. The book offers practical guidance for those seeking to make a meaningful difference in the world.
Read on to discover what effective altruism is and how you can apply its principles to your life.
The Principles Behind Effective Altruism
What is effective altruism? In his book, MacAskill lays out its theoretical foundations, identifying three principles that lie at its heart: first, that our actions should maximize the net benefit to humanity; second, that we should reason counterfactually to decide which actions really make a difference; and third, that we should calculate the expected value of different actions when their consequences are unclear.
Principle #1: Maximize the Net Benefit to Humanity
First, MacAskill points out that each of us is faced with decisions about how to spend our time and money—should we splurge on a nice dinner, or eat at home and donate the money that we saved? Should we use our time to volunteer at a soup kitchen or an afterschool program? To answer these questions, he suggests that we consult the first principle of effective altruism: Perform the action that maximizes the net benefit to humanity.
To see how intuitive this principle is, consider wartime doctors who are forced to decide which patients to treat. Inevitably, they can’t treat everyone—some patients have fatal injuries beyond treatment. Further, other patients deserve priority treatment—if forced to choose between caring for one patient who will die without treatment, and five with minor injuries who will survive regardless, it makes sense to treat the former. In other words, these doctors try to treat patients in a way that maximizes lives saved since their resources are limited.
MacAskill suggests that, analogously, we should use our resources to maximize net good. For example, although we aren’t dividing our time between wartime patients, we do divide our time between different pursuits; some of us might choose to volunteer at a blood drive, while others might choose not to volunteer. Likewise, some of us might choose to spend our money donating to international charities, while others choose to spend it on luxury cars. In both cases, our guiding principle should involve maximizing the total good we’re doing for humanity.
How to Use Quality-Adjusted Life Years to Maximize the Good to Humanity
Nonetheless, MacAskill acknowledges that it’s difficult to measure the impact of different actions—for instance, how should we determine the impact of funding a charity that distributes malaria nets versus one that funds sex education programs? To do so, he recommends that we estimate the quality-adjusted life years (QALYs) associated with a given action to inform our decision.
QALYs, MacAskill relates, combine two factors: the length of time an intervention affects someone’s life and the improvement in quality of life over that time (typically rated on a subjective 10-point scale). For example, suppose that medication for AIDS could increase someone’s quality of life from a three out of 10 to an eight out of 10 over five years. That’s a net improvement of five points out of 10 (50%), sustained for five years, so the medication yields 0.5 x 5 = 2.5 QALYs.
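To make the arithmetic concrete, here’s a minimal sketch of that calculation in Python (the function name and the 10-point quality scale are illustrative conventions for this summary, not anything specified in the book):

```python
def qalys_gained(quality_before, quality_after, years, scale=10):
    """Quality improvement (as a fraction of the scale) multiplied by the years it lasts."""
    improvement = (quality_after - quality_before) / scale
    return improvement * years

# The AIDS-medication example: quality rises from 3/10 to 8/10 for five years.
print(qalys_gained(quality_before=3, quality_after=8, years=5))  # 0.5 * 5 = 2.5 QALYs
```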
MacAskill says that by estimating the QALYs associated with different donations and actions, we can estimate which yields the greatest benefit. For example, imagine that you’re deciding between donating $5,000 to an organization fighting homelessness and donating $5,000 to an organization that funds cochlear implants for deaf people. You might estimate that the homelessness donation would improve quality of life by 0.3 (three points out of 10) for two years (0.6 total QALYs), while the cochlear implant donation would improve quality of life by 0.2 for six years (1.2 total QALYs). In this case, you should donate to the organization for deaf people, as it delivers a greater increase in QALYs.
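The donation comparison works the same way. Here’s a rough sketch using the hypothetical estimates above (these figures come from the example, not from real charity data):

```python
# Hypothetical estimates from the example above: quality improvement x years.
homelessness_qalys = 0.3 * 2        # 0.6 QALYs
cochlear_implant_qalys = 0.2 * 6    # 1.2 QALYs

options = {"homelessness": homelessness_qalys, "cochlear implants": cochlear_implant_qalys}
print(max(options, key=options.get))  # 'cochlear implants' -- the higher-QALY donation
```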
Principle #2: Assess the Counterfactual
Although estimating the benefit of our actions is an important aspect of effective altruism, it isn’t the only one. MacAskill argues that effective altruism also requires assessing the counterfactual—that is, determining what would have happened if we had taken a different course of action.
Counterfactual reasoning is common in everyday life, writes MacAskill. For example, if you spent $40 getting drinks with friends, you might reason that you could’ve instead put that money toward a monthly gym membership. Likewise, in the context of effective altruism, you might reason that if you hadn’t given your money to charity A, you could have instead given it to charity B, or if you hadn’t volunteered at organization X, you could have instead volunteered at organization Y.
The upshot is that you should perform the action that does the most good relative to the counterfactual. For example, imagine that you’ve decided to become a doctor out of a desire to help needy patients. Although this might seem like an effective way to help others, MacAskill points out that it’s actually not especially effective, because if you hadn’t become a doctor, someone else would have simply taken your place. In other words, you aren’t treating patients who otherwise would receive no treatment; you’re treating patients who would have otherwise received treatment from someone else.
By contrast, if you became a doctor to donate most of your salary to charity, that would fare better under the counterfactual test—after all, the other person who would’ve gotten your job as a doctor likely wouldn’t have donated as much.
Principle #3: Calculate Expected Value
The principles of maximizing the net benefit and assessing the counterfactual both assume that you can predict the consequences of every course of action. But MacAskill points out that this assumption is often false—if you go into cancer research, for example, it’s impossible to know whether your research will yield a massive breakthrough. For this reason, he recommends that you calculate the expected value of your possible actions to decide which to perform.
MacAskill explains that the concept of expected value is standard in betting, where it allows bettors to decide which bets to take. For example, imagine that someone offers you a bet on a coin flip: if it lands on heads (a 50% chance), they’ll pay you $200, and if it doesn’t, you pay them $100. To calculate the expected value, you multiply each outcome by the probability that it occurs, then subtract the expected money lost from the expected money won. In this case, your expected value is ($200 x 0.5) – ($100 x 0.5) = $50. In other words, taking the bet nets you an expected value of $50, so you should take it.
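As a quick check, the coin-flip bet can be written out as follows (a sketch of the standard expected-value formula, not anything particular to MacAskill):

```python
def expected_value(outcomes):
    """Sum of each payoff weighted by the probability that it occurs."""
    return sum(probability * payoff for probability, payoff in outcomes)

# Win $200 on heads (50% chance); lose $100 on tails (50% chance).
print(expected_value([(0.5, 200), (0.5, -100)]))  # 50.0 -> take the bet
```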
In the context of effective altruism, calculating expected value can help you decide between risky careers. For example, imagine you’re choosing between becoming a cancer researcher and a climate change researcher. You might think that, if you become a cancer researcher, your chance of a massive breakthrough (say, one that yields one million QALYs) is about 1%, and the other 99% of the time you’ll have a modest career that saves 1,000 QALYs. By contrast, you might think that if you become a climate change researcher, there’s a 2% chance of a massive breakthrough (say, one that saves 600,000 QALYs), but the other 98% of the time you’ll make no progress (0 QALYs).
The expected value of becoming a cancer researcher is (0.01 x 1,000,000 QALYs) + (0.99 x 1,000 QALYs) = 10,990 QALYs. By contrast, the expected value of becoming a climate change researcher is (0.02 x 600,000 QALYs) + (0.98 x 0 QALYs) = 12,000 QALYs. So, this (oversimplified) calculation would suggest that it’s better to become a climate change researcher, since that route has a higher expected value.
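Spelled out the same way, the (oversimplified) career comparison looks like this, using the hypothetical probabilities and QALY figures from the example:

```python
def expected_value(outcomes):
    """Sum of each outcome weighted by the probability that it occurs."""
    return sum(probability * value for probability, value in outcomes)

# Hypothetical figures from the example above: (probability, QALYs).
cancer_research = expected_value([(0.01, 1_000_000), (0.99, 1_000)])
climate_research = expected_value([(0.02, 600_000), (0.98, 0)])

print(cancer_research)   # 10990.0 QALYs
print(climate_research)  # 12000.0 QALYs -> the higher expected value
```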