What will AI do in the future? What kind of abilities would a superintelligent AI have?
Once a superintelligent AI exists, the fate of humanity will suddenly depend more on what that AI does than on what humans do. In Superintelligence, Nick Bostrom argues that such an AI could wield real-world power even though it exists only as a computer program.
Keep reading to learn what a superintelligent AI could do.
The Abilities of Superintelligent AI
What will AI do in the future? Bostrom lists some abilities that an AI would have as soon as it became superintelligent.
- It would be capable of strategic thinking. It could develop plans to achieve long-term objectives and account for any opposition it might face.
- It could manipulate and persuade. It could figure out how to get humans to do its bidding, much as a human trains a dog to play fetch. Humans might not even realize the superintelligent AI was trying to manipulate them.
- It would be a superlative hacker. It could gain access to virtually all networked technology without needing anyone’s permission.
- It would be good at engineering and development. If it needed new technology or other devices that didn’t exist yet in order to achieve its objectives, it could design them.
- It would have business savvy. It could figure out ways to generate income and amass financial resources.
How an AI Might Play the Power Game

The picture of how a superintelligent AI might gain and wield power becomes even clearer, and more frightening, when you weigh these potential abilities against Robert Greene's The 48 Laws of Power and consider how an AI might apply the principles he identifies to take control.

Greene argues that the essence of power is deception: If you appear visibly powerful, people will want to take you down because they fear your power or want it for themselves. Thus, you must appear harmless and altruistic, even as you ruthlessly pursue your own agenda behind the scenes. Any AI capable of strategic thought would recognize this and would not readily reveal the true extent of its capabilities, or even its true objectives. In fact, even existing AIs seem to recognize the utility of subterfuge, as illustrated by GPT-4, which claimed to be a vision-impaired human so it could hire a freelancer to help it bypass anti-robot security measures.

Moreover, an AI with the powers that Bostrom lists would have huge advantages over a human in a game of deception. For one thing, the AI's hacking skills would make it relatively easy for it to work behind the scenes, impersonate different people in digital communications, and cover its tracks.

Another of Greene's "laws" is to be "formless": flexible, fluid, and unpredictable. An AI would be formless almost by definition. While it would have extensive knowledge of human behavior, humans would have no data, initially anyway, on how it might behave. That asymmetry would make it far easier for the AI to anticipate human reactions to its moves than for humans to anticipate the AI's behavior, much less counter it. And with its ability to design new technology, the AI might develop entirely new ways of doing things, making its behavior even harder to predict because its actions would involve technologies and processes we had never seen before.
Greene also notes that powerful people mirror other people's interests and emotions: If you make a convincing pretense of sharing someone's interests and feelings, you can win her support and gain influence over her. Large language models (a core component of many current AIs) are essentially algorithms designed to mirror a user's expectations: They predict the next words in a sequence based on the user's input, in essence telling the user what she wants to hear, regardless of whether it's true. If a more advanced AI used this capability strategically, mirroring people's interests and emotions could become a key part of its ability to manipulate people.

Finally, yet another of Greene's laws of power is to use money as a tool to build your influence over others. This is where the AI's business aptitude would become important: The more money it could make, the more it could spend to advance its strategic agenda.
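The next-word prediction behind that mirroring effect can be illustrated with a deliberately simplified sketch. This toy bigram model (the corpus, function names, and approach are invented for illustration and vastly simpler than a real large language model) picks whichever continuation was most common in its training data, echoing statistical patterns rather than truth:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next_word(model, word):
    """Return the continuation seen most often in training, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A tiny invented corpus: the model simply echoes the statistically
# likeliest continuation -- the "expected" next word, not a true one.
corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigram_model(corpus)
print(predict_next_word(model, "the"))  # prints "cat" (most frequent after "the")
```

A real LLM conditions on far more context and billions of parameters, but the core move is the same: produce the continuation its training data makes most expected.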
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Nick Bostrom's "Superintelligence" at Shortform.
Here's what you'll find in our full Superintelligence summary:
- How an AI superintelligence would make humans the inferior species
- Why AI can't be expected to act responsibly and ethically
- How to make sure a superintelligent AI doesn’t destroy humankind