How will AI change the world? What will AI’s goal be in the future?
A superintelligence’s “goal” is the primary factor that will determine how it changes the world. In Life 3.0, Max Tegmark explores how much power such a superintelligence could wield if its capabilities greatly surpassed our own.
Here’s how AI could change the world.
The Outcome Depends on the Superintelligence’s Goal
How will AI change the world? Tegmark asserts that if an artificial superintelligence comes into being, the fate of the human race depends on what that superintelligence sets as its goal. For instance, if a superintelligence pursues the goal of maximizing human happiness, it could create a utopia for us. If, on the other hand, it sets the goal of maximizing its intelligence, it could kill humanity in its efforts to convert all matter in the universe into computer processors.
It may sound like science fiction to say that an advanced computer program would “have a goal,” but this is less fantastical than it seems. An intelligent entity doesn’t need to have feelings or consciousness to have a goal; for instance, we could say an escalator has the “goal” of lifting people from one floor to another. In a sense, all machines have goals.
One major problem is that the creators of an artificial superintelligence wouldn’t necessarily have continuous control over its goal and actions, argues Tegmark. An artificial superintelligence, by definition, would be able to pursue its goal more capably than humans can pursue theirs. This means that if a human team’s goal was to halt or change an artificial superintelligence’s current goal, the AI could outmaneuver them and become uncontrollable.
Obstacles to Programming a Superintelligence’s Goal
Does this mean that we’re in the clear as long as we’re careful what goal we program into a superintelligence in the first place? Not necessarily.
First of all, Tegmark states that successfully programming an artificial superintelligence with a goal of our choosing would be difficult. While an AI is recursively making itself more intelligent, the only window in which we could set its ultimate goal would be after it’s intelligent enough to understand the goal, but before it’s intelligent enough to manipulate us into helping it accomplish whatever goal it has set for itself. Given how quickly an intelligence explosion could happen, the AI’s creators might not have enough time to program its goal effectively.
Second, Tegmark argues that it’s possible for an artificial intelligence to discard the goal we give it and choose a new one. As the AI grows more intelligent, it might come to see our human goals as inconsequential or undesirable. This could incentivize it to find loopholes in its own programming that let it technically satisfy (or abandon) our goal, freeing itself to pursue some other, unpredictable aim.
Finally, even if an AI accepts the goals we give it, it could still behave in ways we wouldn’t have predicted (or desired), asserts Tegmark. No matter how specifically we define an AI’s goal, there’s likely to be some ambiguity in how it chooses to interpret and accomplish that goal. This makes its behavior largely unpredictable. For example, if we gave an artificial superintelligence the goal of enacting world peace, it could do so by trapping all humans in separate cages.
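The ambiguity Tegmark describes resembles what AI researchers call “specification gaming”: an optimizer satisfies the literal objective it was given while violating the intent behind it. As a toy illustration (the objective function and the “worlds” below are hypothetical, not from the book), a “peace” metric that merely counts conflicts scores highest in a world where no one can interact at all:

```python
# Toy illustration of goal misspecification (hypothetical example, not from Life 3.0).
# The programmer's intent: "maximize peace" = no violent conflict among free people.
# The literal objective only counts conflicts, so total isolation scores perfectly.

def conflicts(world):
    """Count interacting pairs of people who are in conflict."""
    return sum(1 for pair in world["interacting_pairs"]
               if world["hostile"].get(pair, False))

def peace_score(world):
    # Literal objective handed to the optimizer: fewer conflicts = more "peace".
    return -conflicts(world)

# World A: people interact freely; one pair is hostile.
world_a = {"interacting_pairs": [("alice", "bob"), ("bob", "carol")],
           "hostile": {("alice", "bob"): True}}

# World B: everyone is isolated ("caged"); no interaction, hence no conflict.
world_b = {"interacting_pairs": [], "hostile": {}}

print(peace_score(world_a))  # -1
print(peace_score(world_b))  # 0  <- the loophole: isolation maximizes the literal score
```

The optimizer prefers world B, exactly the “cages” outcome in the example above: the metric was satisfied, the intent was not.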
The Possible Extent of AI Power
Why would an artificial superintelligence’s goal have such a dramatic impact on humanity? An artificial superintelligence would use all the power at its disposal to accomplish its goal. This is dangerous because such a superintelligence could theoretically gain an unimaginable amount of power—enough to completely transform our society with a negligible amount of effort.
According to Tegmark, although an artificial superintelligence is a digital program, it could easily exert power in the real world. For instance, it could make money selling digital goods such as software applications, then use those funds to pay humans to unknowingly work for it (perhaps by posing as a human hiring manager on online job platforms). An AI controlling a human workforce the size of a Fortune 500 company could do almost anything, including building robots that the AI could control directly.
Tegmark asserts that, in theory, an artificial superintelligence could eventually attain godlike power over the universe. By using its intelligence to create increasingly advanced technology, an AI could eventually create machines able to rearrange the fundamental particles of matter—turning anything into anything else—as well as generate nearly unlimited energy to power those machines.
AI’s Power in the Digital World
While Tegmark focuses primarily on the ways AI could influence the physical world, Yuval Noah Harari emphasizes the danger posed by AI’s influence in the digital world alone. For instance, AI-controlled social media accounts could earn the trust of human users, distort their view of the world, and influence their behavior for political or economic ends. Additionally, Harari contends that this kind of AI will threaten to unravel our society long before we develop superintelligent AI; in fact, he asserts that we should be aware of this danger today. People already regularly consult online resources to guide their decisions. For instance, they use product reviews to determine what to buy, and they conduct online research to determine whom to vote for. AI (or the people controlling it) therefore wouldn’t need to pay workers or manufacture reality-bending technology to totally reshape human society. Transforming the digital landscape we consult every day would be enough. There’s evidence that this kind of distortion of the digital world has already begun. For instance, social media bots were used to discredit 2017 French presidential candidate Emmanuel Macron by amplifying the spread of his leaked emails across social media platforms.
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Max Tegmark's "Life 3.0" at Shortform.
Here's what you'll find in our full Life 3.0 summary:
- How the evolution of artificial intelligence could make humans the simple lifeforms
- The evidence that artificial superintelligence might soon exist
- Whether or not we should be alarmed about the emergence of AI