Can humanity survive AI? What can you do to prevent an AI takeover?
The possible outcomes of AI range from salvation to disaster. For now, though, those scenarios remain theoretical, and there's still time to address them today and help create a better future.
Let’s look at how to survive AI, according to Life 3.0 by Max Tegmark.
Short-Term Concerns
The rise of an artificial superintelligence isn’t the only thing we have to worry about. According to Tegmark, rapid AI advancements will likely create numerous challenges that society must manage if we’re to survive AI. Let’s discuss:
- Concern #1: Economic inequality
- Concern #2: Outdated laws
- Concern #3: AI-enhanced weaponry
Concern #1: Economic Inequality
First, Tegmark argues that AI threatens to increase economic inequality. Generally, as researchers develop the technology to automate more types of labor, companies gain the ability to serve their customers while hiring fewer employees. The owners of these companies can then keep more profits for themselves while the working class suffers from fewer job opportunities and less demand for their skills. For example, in the past, the invention of the photocopier allowed companies to avoid paying typists to duplicate documents manually, saving the company owners money at the typists’ expense.
As AI becomes more intelligent and able to automate more kinds of human labor at lower cost, this asymmetrical distribution of wealth could increase.
Concern #2: Outdated Laws
Second, Tegmark contends that our legal system could become outdated and counterproductive in the face of sudden technological shifts. For example, imagine a company releases thousands of AI-assisted self-driving cars that save thousands of lives by being (on average) safer drivers than humans. However, these self-driving cars still get into some fatal accidents that wouldn’t have occurred if the passengers were driving themselves. Who, if anyone, should be held liable for these fatalities? Our legal system needs to be ready to adapt to these kinds of situations to ensure just outcomes while technology evolves.
Concern #3: AI-Enhanced Weaponry
Third, AI advancements could drastically increase the killing potential of automated weapons systems, argues Tegmark. AI-directed drones would have the ability to identify and attack specific people—or groups of people—without human guidance. This could allow governments, terrorist organizations, or lone actors to commit assassinations, mass killings, or even ethnic cleansing at low cost and minimal effort. If one military power develops AI-enhanced weaponry, other powers will likely do the same, creating a new technological arms race that could endanger countless people around the world.
Long-Term Concerns
How should we address the long-term concerns related to AI, including the potential creation of a superintelligence? Because we know so little for certain about the future of AI, Tegmark contends that AI research should be one of humanity’s top priorities. The stakes are high, so we should do our best to discover ways to control or positively influence an artificial superintelligence.
The Current State of AI Research Funding
It seems that many people agree with Tegmark, as major institutions around the world are already prioritizing AI research. The European Union is currently investing €1 billion a year in AI research and development, and it intends to increase that annual investment to €20 billion by the year 2030. Private companies are leading AI research in the United States—for instance, Meta plans to spend $33 billion on AI research in 2023 alone. However, some experts worry that private companies may not act in line with the public interest while researching AI, and they urge the US government to fund a cutting-edge AI research program of its own. It’s possible that state-controlled research would be less likely to unleash a dangerous superintelligence, as governments lack the profit motive to create a marketable AGI as quickly as possible.
What Can You Do to Prevent the AI Apocalypse?
Outside of AI research, Tegmark recommends cultivating hope grounded in practical action. Before we can create a better future for humanity, we have to believe that a bright future is possible if we band together and responsibly address these technological risks.
After cultivating this optimistic attitude, Tegmark urges readers to do everything they can to make the world more ethical and peaceful—not just in the field of AI, but in every aspect of society. The more humans who are willing to empathize and cooperate with one another, the greater the chance that we’ll develop AI safely and with the intent to benefit all of humanity. This could involve organizing a fundraiser for a local homeless shelter, volunteering at a nursing home, or just being kinder to the people around you.
Determinate vs. Indeterminate Optimism
The attitude Tegmark urges readers to adopt is what Peter Thiel in Zero to One calls “determinate optimism”—you expect the future to be better than the present, and you believe that you can successfully predict and bring about specific positive outcomes. Thiel argues that, in contrast, most Americans today (or in 2014, when Zero to One was written) think in terms of “indeterminate optimism”—they believe that things will get better, but they assume the future is too unpredictable to plan for. According to Thiel, this is a problem because indeterminate optimism encourages people to be passive and short-sighted: They think, “Why bother planning a better future? Things will turn out OK no matter what I do.” To combat this, Thiel urges optimists to make long-term plans and stick to them. To apply this to Tegmark’s plea to make the world more ethical and peaceful: Don’t just become a generally cooperative person; come up with a plan to bring people together and motivate them to treat others well.