This is a preview of the Shortform book summary of Superintelligence by Nick Bostrom.

1-Page Book Summary of Superintelligence

Oxford philosopher Nick Bostrom wrote Superintelligence in 2014 to raise awareness about the possibility of AI suddenly exceeding human capabilities, spark discussion about the risks inherent in this scenario, and foster collaboration in managing those risks.

Today, the possibility of AI rivaling or even vastly exceeding human...



The Feasibility of Superintelligent AI

Bostrom defines “superintelligence” as general intelligence that’s significantly greater than human-level intelligence. As he explains, “general intelligence” refers to intellectual abilities that span the whole range of human capabilities, such as learning, interpreting raw data to draw useful inferences, making decisions, and accounting for risk and uncertainty in those decisions. He notes that while some computers already surpass humans in certain narrow areas, such as playing a particular game or crunching numbers, no AI has yet come close to human-level general intelligence.

But could an artificial, nonhuman entity ever have superintelligence? Bostrom argues that the answer is, most likely, yes. As he explains, silicon computers have a number of advantages over human brains. For one thing, they operate much faster. Neural signals travel about 120 meters per second, and neurons can cycle at a maximum frequency of about 200 hertz. By contrast, electronic signals travel at the speed of light (300,000,000 meters per second), and electronic processors often cycle at 2 billion hertz or more. In addition, computers can copy and share data and software...
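The hardware figures Bostrom cites imply enormous raw-speed advantages for silicon. A quick back-of-the-envelope calculation (illustrative arithmetic only; the book's numbers are order-of-magnitude estimates) makes the gap concrete:

```python
# Rough comparison of the hardware figures cited above
# (order-of-magnitude estimates, not precise measurements).

neural_signal_speed = 120               # meters per second
electronic_signal_speed = 300_000_000   # meters per second (speed of light)

neuron_max_frequency = 200              # hertz
processor_frequency = 2_000_000_000     # hertz (2 GHz)

speed_ratio = electronic_signal_speed / neural_signal_speed
frequency_ratio = processor_frequency / neuron_max_frequency

print(f"Signal speed advantage:    {speed_ratio:,.0f}x")    # 2,500,000x
print(f"Clock frequency advantage: {frequency_ratio:,.0f}x")  # 10,000,000x
```

On these numbers, electronic signaling is millions of times faster than neural signaling, which is the basis of Bostrom's claim that even a human-equivalent mind on silicon could think vastly faster than a biological one.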


The Consequences of Superintelligent AI

So, sooner or later, a superintelligent AI will be created. Why should that concern you any more than the fact that mechanical vehicles can go faster than a human can run? According to Bostrom, the rise of a superintelligent AI could cause dramatic changes in how the world works—changes that would take place very quickly. And depending on the superintelligent AI’s behavior, these changes could be very detrimental to humanity.

As we mentioned earlier, if an AI has some measure of general intelligence and the ability to modify its own programming, its intelligence would likely increase at an ever-accelerating rate. This implies that an AI might rise from sub-human to superhuman intelligence very quickly.
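This accelerating-growth dynamic can be illustrated with a toy simulation (a hypothetical sketch, not Bostrom's model): if each improvement cycle raises capability by an amount proportional to current capability, growth compounds, and the climb from human-level to far-superhuman takes barely longer than the climb to human-level in the first place.

```python
# Toy model of recursive self-improvement (illustrative assumption:
# each cycle's gain is proportional to current capability).

def takeoff(start=1.0, human_level=100.0, superhuman=10_000.0, rate=0.5):
    """Count improvement cycles to reach human level, then 100x human level."""
    level, cycles, to_human = start, 0, None
    while level < superhuman:
        level *= 1 + rate          # gain proportional to current capability
        cycles += 1
        if to_human is None and level >= human_level:
            to_human = cycles
    return to_human, cycles

to_human, to_super = takeoff()
print(f"Cycles to human level:        {to_human}")   # 12
print(f"Cycles to 100x human level:   {to_super}")   # 23
```

With these (arbitrary) parameters, going from human-level to 100 times human-level takes only 11 further cycles, about as many as the entire climb to human-level took. That is the intuition behind a fast takeoff: by the time the AI reaches human parity, most of the remaining distance to superintelligence is covered almost immediately afterward.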

(Shortform note: Although we’ve not yet witnessed this type of growth in artificial intelligence, there are other applications that demonstrate how self-accelerating growth can cause rapid transformations. One is the “coulombic explosion” reaction between water and alkali metals, where the chemical reaction causes the surface area of the metal to increase and the reaction speed is proportional to the surface area. When this condition...


How to Manage the Rise of Superhuman Intelligence

What can we do to make sure a superintelligent AI doesn’t destroy humankind or relegate humans to miserable living conditions?

In principle, one option would be never to develop general AI in the first place. However, Bostrom doesn’t recommend this option. In practice, even if AI research were illegal, someone would probably do it anyway. And even if no one did, as we discussed earlier, general AI could still emerge accidentally.

But more importantly, Bostrom points out that a superintelligent AI could also be very good for humanity if it helped us instead of wiping us out. The superintelligent AI might be able to develop solutions to problems that humans have thus far been unable to solve, like reining in climate change, colonizing outer space, and bringing about world peace. Thus, rather than opposing AI research, Bostrom advocates a three-pronged approach to making sure it’s beneficial: Impose limits on the superintelligent AI, give it good objectives, and manage the development schedule to make sure the right measures are in place before AI achieves superintelligence. We’ll discuss each of these in turn.

(Shortform note: Bostrom’s plan to use AI to solve humanity’s problems could...


Shortform Exercise: What Would You Do With a Superintelligent AI?

Imagine that tomorrow morning scientists announce they have created the first sentient, superintelligent AI. The developers assert that the AI has a strong code of ethics, so it won’t do anything harmful or help anyone else do anything that would harm others. You’ve been invited on the first public tour of the facility that houses the superintelligent AI.

The tour guide invites each member of the tour to take a turn at the AI’s input prompt. You can ask it a question or request any kind of digital output (a text file, graphics, computer code, etc.). What’s the first thing you would ask it?
