What are the negative effects of AI? Could AI overpower humans?
In Life 3.0, Max Tegmark examines some of the existential dangers that artificial superintelligence poses. He describes three scenarios in which an AI's goals could ruin humanity's chances of living a satisfying life.
Discover the top three negative effects of AI in the future.
Possibility #1: AI Kills All Humans
Tegmark contends that one of the negative effects of AI is that an artificial superintelligence may end up killing all humans in service of some other goal. If it doesn't value human life, it could feasibly end humanity simply to reduce the chance that we'll do something to interfere with its mission.
(Shortform note: Some argue that an artificial superintelligence is unlikely to kill all humans as long as we leave it alone. Conflict takes effort, so an AI might conclude that the simplest option available is to pursue its mission in isolation from humanity. For instance, a superintelligence might be peaceful toward us if we don’t interfere with its goal and allow it to colonize space.)
If an artificial superintelligence decided to drive us to extinction, Tegmark predicts that it would do so by some means we currently aren't aware of (or can't understand). Just as humans could easily choose to hunt an animal to extinction with weapons the animal couldn't comprehend, an artificial intelligence that's proportionally smarter than us could do the same to us.
(Shortform note: In Superintelligence, Nick Bostrom imagines one way an artificial intelligence could kill all humans through means we would have difficulty understanding or averting: self-replicating “nanofactories.” These would be microscopic machines able to reproduce themselves and synthesize deadly poison. Bostrom describes a scenario in which an artificial intelligence produces and spreads these nanofactories throughout the atmosphere at such a low concentration that we can't detect them. Then, all at once, these factories turn our air toxic, killing everyone.)
Possibility #2: AI Cages Humanity
Another possibility is that an artificial intelligence chooses to keep humans alive but doesn't put in the effort to create a utopia for us. Tegmark argues that an all-powerful superintelligence might decide to keep us alive out of casual curiosity. In this case, an indifferent superintelligence would likely confine us to a relatively unfulfilling existence in which we're kept alive but feel trapped.
(Shortform note: A carelessly or imperfectly designed world for humans to live in may be intolerable in ways we can't imagine. This idea is dramatized by the ending of Stanley Kubrick's 1968 film 2001: A Space Odyssey. As Kubrick explains in a 1980 interview, the film portrays an astronaut trapped in a “human zoo” created for him by godlike aliens who want to study him. They place the astronaut in a room in which it feels like all of time is happening simultaneously, and he ages and dies all at once.)
Possibility #3: Humans Abuse AI
Finally, Tegmark imagines a future in which humans gain total control over an artificial superintelligence and use it for selfish ends. Theoretically, someone could use such a machine to become a dictator and oppress or abuse all of humanity.
(Shortform note: This scenario would likely result in even more suffering than if a supreme AI decided to kill all humans. Paul Bloom (Against Empathy) asserts that cruelty is a uniquely human act in which someone feels motivated to punish other humans for their moral failings. Thus, a superintelligence under the control of a hateful dictator would be far more likely to intentionally cause suffering (as moral punishment) than one left to decide for itself what to do.)