The Negative Impact of Artificial Intelligence on Society

What’s the negative impact of artificial intelligence on society? How much will a superintelligent AI change the world?

According to Superintelligence by Nick Bostrom, sooner or later, a superintelligent AI will be created. Its creation would change the world, and depending on the superintelligent AI’s behavior, those changes could be very detrimental to humanity.

Let’s look at a more in-depth explanation of how a superintelligent AI will affect society.

The Consequences of Superintelligent AI

If an AI has some measure of general intelligence and the ability to modify its own programming, its intelligence would likely increase at an ever-accelerating rate. This implies that an AI might rise from sub-human to superhuman intelligence very quickly, and this sudden transition is the source of what is probably the biggest negative impact of artificial intelligence on society.

(Shortform note: Although we’ve not yet witnessed this type of growth in artificial intelligence, other self-accelerating processes demonstrate how this kind of growth can cause rapid transformations. One is the “Coulomb explosion” reaction between water and alkali metals: the chemical reaction increases the surface area of the metal, and the reaction speed is proportional to that surface area. Once this feedback loop takes hold, the reaction rate increases so quickly that the metal appears to explode.)

Moreover, as Bostrom points out, superior intelligence is what has allowed humans to dominate the other life forms on planet Earth. Thus, it stands to reason that once a superintelligent AI exists, the fate of humanity will suddenly depend more on what the superintelligent AI does than on what humans do—just as the existence of most animal species depends more on what humans do (either to take care of domestic animals or to preserve or destroy habitat that sustains wild animals) than on what the animals themselves do.

Superior Intelligence or Superior Communication?

There are differences of opinion about exactly what elevated humans above other animals. In Homo Deus, Yuval Noah Harari contends that it wasn’t humans’ greater intelligence, per se, but rather their greater capacity for communication and coordinated work. In fact, he argues that human intelligence really isn’t that different from, or very far above, the intelligence of other animals. 

But if Harari is correct, this perspective would actually strengthen Bostrom’s conclusions about the rise of AI. This is because computers already have greater communication and coordination capabilities than humans—after all, that’s one of the main things humans use computers for. And if the intelligence gap between humans and animals is small, then an AI with even slightly superhuman general intelligence (and a much greater capacity for communication) might be in a position to bring about sweeping changes, just as humans did for other animals.

The Abilities of Superintelligent AI

But how would a superintelligent AI actually gain or wield power over the earth if it exists only as a computer program? Bostrom lists some abilities that an AI would have as soon as it became superintelligent.

  • It would be capable of strategic thinking. Consequently, it could develop plans to achieve long-term objectives and account for any opposition it might face.
  • It could manipulate and persuade. It could figure out how to get humans to do what it wanted them to, much like a human might train a dog to play fetch. Humans might not even realize the superintelligent AI was trying to manipulate them.
  • It would be a superlative hacker. It could gain access to virtually all networked technology without needing anyone’s permission.
  • It would be good at engineering and development. If it needed new technology or other devices that didn’t exist yet in order to achieve its objectives, it could design them.
  • It would be capable of business thinking. It could figure out ways to generate income and amass financial resources.

The Destructiveness of Superintelligent AI

Clearly, a superintelligent AI with the capabilities listed above would be a powerful entity. But why should we expect it to use its power to the detriment of humankind? Wouldn’t a superintelligent AI be smart enough to use its power responsibly? 

According to Bostrom, not necessarily. He explains that intelligence is the ability to figure out how to achieve your objectives. By contrast, wisdom is the ability to discern between good and bad objectives. Wisdom and intelligence are independent of each other: You can be good at figuring out how to get things done (high intelligence) and yet have poor judgment (low wisdom) about what is worth doing, or whether it’s ethically appropriate to do at all.

What objectives would a superintelligent AI want to pursue? According to Bostrom, this is impossible to predict with certainty. However, he points out that existing AIs tend to have relatively narrow and simplistic objectives. If an AI started out with narrowly defined objectives and then became superintelligent without modifying its objectives, the results could be disastrous: Since power can be used to pursue almost any objective more effectively, such an AI might use up all the world’s resources to pursue its objectives, disregarding all other concerns.

For example, a stock-trading AI might be programmed to maximize the long-term expected value (measured in dollars) of the portfolio that it manages. If this AI became superintelligent, it might find a way to trigger hyperinflation, because devaluing the dollar by a large factor would radically increase the dollar value of its portfolio. It would probably also find a way to lock out the original owners of the portfolio it was managing, to prevent them from withdrawing any money and thereby reducing the value of the account. 

Moreover, it might pursue an agenda of world domination just because more power would put it in a better position to increase the value of its portfolio—whether by influencing markets, commandeering assets to add to its portfolio, or other means. It would have no regard for human wellbeing, except insofar as human wellbeing affected the value of its portfolio. And since human influences on stock prices can be fickle, it might even take action to remove all humans from the market so as to reduce the uncertainty in its value projections. Eventually, it would amass all the world’s wealth into its portfolio, leaving humans impoverished and perhaps even starving humanity into extinction.


———End of Preview———

Like what you just read? Read the rest of the world's best book summary and analysis of Nick Bostrom's "Superintelligence" at Shortform.

Here's what you'll find in our full Superintelligence summary:

  • How an AI superintelligence would make humans the inferior species
  • Why AI can't be expected to act responsibly and ethically
  • How to make sure a superintelligent AI doesn’t destroy humankind

Katie Doll

Somehow, Katie was able to pull off her childhood dream of creating a career around books after graduating with a degree in English and a concentration in Creative Writing. Her preferred genre of books has changed drastically over the years, from fantasy/dystopian young-adult to moving novels and non-fiction books on the human experience. Katie especially enjoys reading and writing about all things television, good and bad.
