What is the future of humanity? Are we on the brink of disaster—or at the cusp of unprecedented progress?
Toby Ord’s perspective on humanity’s potential and risks offers a thought-provoking glimpse into our species’ trajectory. He explores our remarkable achievements, technological advancements, and the looming threats that could derail our progress.
Keep reading for a journey through time, technology, and the human condition as we explore what the future of humanity might hold.
The Potential Future of Humanity
What is the future of humanity? Ord believes that our potential is enormous—enough to end disease, poverty, and injustice if we can harness it correctly. Although we’re in our infancy as a species, he argues, our advances and achievements are impressive: We’ve drastically increased average lifespans and literacy rates and lifted most of the population out of extreme poverty. Moreover, humanity has existed for only a fraction of the time Earth has existed, and an even smaller fraction of the time Earth has yet to exist—if we continue to evolve for the better, the future will be bright.
(Shortform note: In Sapiens, Yuval Noah Harari reiterates the amazing progress humans have made in their short time on Earth. However, he warns that if humans further evolve by engineering our bodies and DNA (which he calls intelligent design) rather than through natural selection, we’ll create major inequality: The rich and powerful who gain access to technologies like biological, cyborg, and inorganic life engineering or DNA mapping will become objectively superior to the rest of the species. To avoid this fate, Harari makes the same recommendation as Ord (which we’ll discuss more in the following paragraphs)—we must establish knowledge and altruism as core values for our species, now and in the future, rather than simply chasing development at any cost.)
However, Ord warns that we’ve reached a critical point in our development—the rate of our technological growth has outpaced our wisdom, and we’re unprepared for the risks associated with this unbalanced growth. Our lack of wisdom (which amounts to a lack of foresight and altruism) could precipitate the downfall of our civilization: We put our own interests first and are unable to foresee and plan for the results of our actions, leaving us vulnerable. For example, our materialism, desire for power, and disregard for long-term consequences have already brought us close to disasters like nuclear war and irreversible environmental damage.
(Shortform note: In Brief Answers to the Big Questions, Stephen Hawking offers a hopeful counterpoint to the problem of our technological growth outpacing our wisdom. While he agrees with Ord that this imbalance could produce disaster, he believes information technology may accelerate the rate of human evolution itself. Hawking notes that evolution has historically been characterized by information transfer through genes; today, however, we also have written language and advanced digital methods of storing, accessing, and sharing information. Thus, technology itself is a new mechanism of evolution and may be one way to address the lack of wisdom Ord identifies.)
Our Potential for Disaster
Ord believes there’s a one in six chance of humanity facing an existential catastrophe in the 21st century—an event that we’ll be unable to recover from—for example, a natural disaster that wipes out the species or a pandemic that leaves the global population infertile.
(Shortform note: While Ord estimates that there’s a roughly 16% chance of an existential catastrophe occurring in the next century, opinions from other scientists vary. Estimates range from a less than 5% chance of extinction before the year 5100 to a 50% chance of extinction before the end of the 21st century.)
To ensure our survival and secure our future, Ord advocates for immediate action: managing today’s risks while averting those of tomorrow, and making choices that will benefit future generations. Right now, we struggle to adopt this strategy due to economic, political, and psychological factors.
First, economic theory suggests that markets undervalue existential risks because addressing them doesn’t benefit any particular company or nation-state. Instead, risk prevention benefits the global population as a whole: It’s a public good, and public goods aren’t profitable.
(Shortform note: To rectify the problem of markets undervaluing existential risks, we can look to ideas such as government intervention and the creation of global organizations like the Global Coalition for Social Justice, according to some economists. Governments could regulate and incentivize businesses to give more weight to these risks—for example, by offering tax breaks to companies that invest in risk prevention. Alternatively, forming global coalitions could distribute the cost of addressing these risks across nations, making it more economically viable for individual entities.)
Second, politicians prioritize short-term actions over long-term solutions because of election and news cycles—people focus on and support actions with immediate benefits over actions that take years to pay off. Politicians are reluctant to address long-term problems like existential risk prevention without a constituency pushing for early action, and few such constituencies currently exist.
(Shortform note: Studies on climate policy suggest that raising public awareness is one of the most effective ways to change political priorities because it engages citizens in decision-making and encourages behaviors that align with long-term policy objectives—for example, regarding the existential risk of climate change, citizens can monitor changes in their local environment. This not only equips them with knowledge but also encourages them to become drivers of behavioral change within their communities.)
Third, says Ord, psychological biases also play a role in our neglect of these significant dangers. For instance, people tend to see existential catastrophes as unlikely to occur because they’ve never happened in the past. However, with unprecedented events like an existential catastrophe, this heuristic fails us since we have no experience with such disasters until it’s too late. Further, Ord argues that we overlook these risks because they’re new concepts in human history—we haven’t had enough time yet as a society or species to incorporate them into our moral and civic traditions.
(Shortform note: Ord says we neglect risk prevention because we’ve never experienced an existential catastrophe and therefore don’t see one as likely to occur in the future. However, we might be able to counter this heuristic with another psychological phenomenon called “media priming,” which suggests that our behavior can be influenced by the media we consume. For example, watching loads of doomsday movies where the world is destroyed may desensitize us to such threats, while an increase in inspirational or educational films about humanity recognizing and overcoming major risks may make people take those risks more seriously and encourage prevention behaviors.)
Exercise
Do you agree with Ord that humanity is on the precipice of an existential crisis? Why or why not?