Can AI Be Smarter Than Humans? Why There’s a Possibility

What’s superintelligence? Can AI be smarter than humans?

In Superintelligence, Nick Bostrom defines “superintelligence” as general intelligence that’s significantly greater than human-level intelligence. He argues that AI is likely to reach an intelligence beyond the human level.

Let’s look at Bostrom’s argument for why AI can reach the “superintelligence” level.

The Feasibility of Superintelligent AI

“General intelligence” refers to intellectual abilities that span the whole range of human capabilities, such as learning, drawing useful inferences from raw data, making decisions, and recognizing and allowing for risks and uncertainties. He notes that while some computers already surpass humans in certain narrow areas, such as playing a particular game or crunching numbers, no AI has yet come close to human-level general intelligence.

But can AI be smarter than humans? Bostrom argues that the answer is, most likely, yes. As he explains, silicon computers have a number of advantages over human brains. For one thing, they operate much faster. Neural signals travel about 120 meters per second and neurons can cycle at a maximum frequency of about 200 hertz. By contrast, electronic signals travel at the speed of light (300,000,000 meters per second) and electronic processors often cycle at 2 billion hertz or more. In addition, computers can copy and share data and software directly, while humans have to learn gradually.
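To put these figures in perspective, here’s a quick back-of-the-envelope calculation (using the rough numbers above, which are approximations, not precise measurements) of the raw hardware gap between brains and computers:

```python
# Rough hardware comparison using the approximate figures cited above.
neuron_signal_speed = 120                # meters per second (neural signals)
electronic_signal_speed = 300_000_000    # meters per second (speed of light)
neuron_max_freq = 200                    # hertz (neuron firing rate)
cpu_freq = 2_000_000_000                 # hertz (a typical 2 GHz processor)

signal_speedup = electronic_signal_speed / neuron_signal_speed
clock_speedup = cpu_freq / neuron_max_freq

print(f"Signal speed advantage: {signal_speedup:,.0f}x")    # millions of times faster
print(f"Clock frequency advantage: {clock_speedup:,.0f}x")  # millions of times faster
```

Even by these crude measures, the hardware operates millions of times faster than its biological counterpart, which is the core of Bostrom’s speed argument.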

Tools for Humans vs. Replacements for Humans

Peter Thiel would probably argue that the computer advantages Bostrom lists are only advantageous in certain applications. In Zero to One (published the same year as Superintelligence), Thiel contends that humans and computers excel at such different things that we shouldn’t worry about computers replacing human workers.

He writes that while computers outperform humans at certain tasks, there are many other tasks that humans do effortlessly while the best AI algorithms find them practically impossible. While Thiel concedes that superintelligent AI might one day be developed, he argues that it’s too far away to concern ourselves with in the 21st century. Instead, we should focus on building AI tools that merely complement human abilities.

However, in 21 Lessons for the 21st Century, Yuval Noah Harari argues that AI will achieve human-level or greater intelligence in the 21st century. To Bostrom’s arguments, he adds that recent advances in information science and neurology have shown that algorithms can demonstrate many capabilities once thought to be uniquely human, such as intuition and creativity. That said, Harari doesn’t seem to envision AI becoming superintelligent (that is, so intelligent that it’s beyond humans’ ability to control it), as Bostrom does.

Different Routes to Superintelligent AI

As Bostrom explains, there are a number of different ways that superintelligent AI could be achieved. Thus, even if some of them don’t end up working, at least one of them probably will.

Intelligent Design

One route to superintelligent AI that Bostrom discusses is human programmers developing a “seed AI” that has some level of general intelligence—perhaps similar to human intelligence, or maybe a little below that mark—and then using that AI to continue improving its own program. As the AI gets smarter, it becomes better at improving itself. Because of this self-reinforcing cycle, it might progress from subhuman to superhuman intelligence rather quickly.

(Shortform note: There’s been significant progress along this route since Bostrom wrote Superintelligence in 2014. For example, more powerful computers have made it possible to create Large Language Models (LLMs) that can read and write in both natural human languages and computer code. The ability of LLMs to use normal human language represents a milestone in general AI development, and their ability to write computer code represents a key building block of self-improving AIs like the ones Bostrom describes.)
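Bostrom’s self-reinforcing cycle can be sketched as a toy simulation. Everything here—the starting level, the improvement rate, and the growth rule—is an illustrative assumption, not a claim about real AI systems; the point is just that when the ability to improve scales with intelligence, growth accelerates sharply:

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Each round's improvement is proportional to the current intelligence score.
def improvement_cycles(start=0.5, human_level=1.0, rate=0.2, max_rounds=100):
    """Count rounds until the toy 'intelligence' score passes 10x human level."""
    level, rounds = start, 0
    while level < 10 * human_level and rounds < max_rounds:
        level *= 1 + rate * level  # a smarter AI improves itself faster
        rounds += 1
    return rounds, level

rounds, level = improvement_cycles()
print(rounds, round(level, 2))
```

In this toy model, much of the run is spent inching upward below human level, while the jump from human level to far beyond it takes only a few final cycles—mirroring Bostrom’s point that the last leap could happen quickly.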

Simulated Evolution

Another route that Bostrom discusses is “simulated evolution.” In the context of software engineering, this means programming a computer to generate random variations of a program, test their functionality against specified criteria, and continue to iterate on the best ones. Theoretically, simulated evolution can provide novel solutions to programming problems without the need for new insight on the part of human programmers. Thus, even if human programmers can’t figure out how to create a superintelligent AI or a self-improving seed AI directly, they might be able to create one using simulated evolution.
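The process Bostrom describes—generate variations, test against criteria, iterate on the best—is essentially a genetic algorithm. Here’s a minimal sketch; the bit-string target, population size, and mutation rate are arbitrary choices for illustration:

```python
import random

# Minimal simulated-evolution (genetic algorithm) sketch: evolve a bit string
# toward a target criterion without hand-designing the solution.
random.seed(0)
TARGET = [1] * 20  # the "specified criteria": a genome of all ones

def fitness(genome):
    """Score a candidate by how many positions match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability to generate a variation."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population, then repeat: evaluate, keep the best, vary.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 20:
        break
    survivors = population[:10]  # keep the ten fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(generation, fitness(best))
```

Real applications replace the toy fitness function with a measure of program performance, but the loop—vary, evaluate, keep the best—is the same, and no human insight into the solution itself is required.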

Update on Simulated Evolution

Progress along this route since 2014 has been slower: Most of the major works on the subject still predate Bostrom’s book. However, there has been some recent interest in hybrid algorithms that use simulated evolution to enhance more conventional machine-learning algorithms. The current focus on conventional AI design over simulated evolution makes sense.

As Bostrom observes, simulated evolution might provide a way to create superintelligent AI if human developers get stuck on the problem. However, right now, AI development is progressing rapidly, so programmers don’t need a fallback plan. But if progress on the problem of general AI stalls in the future, simulated evolution might gain more traction as a way to begin progressing again.

Brain Simulations

Yet another route to which Bostrom devotes considerable attention is “whole brain emulation.” The human brain is obviously capable of human-level general intelligence. Thus, if you could map out exactly how all the neurons in a human brain are connected and create a computer program to accurately simulate all those connections, you would have a program capable of human-level intelligence. And if the computer program could operate faster than the original brain, it would have superhuman intelligence.

Bostrom explains that creating a simulated human brain requires a basic understanding of how neurons interact with each other and a detailed cellular-level 3D scan of a human brain. However, it doesn’t require an understanding of how the brain’s structures give rise to intelligence—assuming the simulation captures the placement of neurons accurately, it should, theoretically, mimic the brain’s function even if its developers don’t know exactly how or why. Thus, the main obstacle to implementing this method is scanning a human brain precisely enough. 
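The logic of emulation—copy the wiring faithfully and the function should follow, even without understanding it—can be illustrated with a toy network. The threshold-neuron model and random “scan” below are drastic simplifications assumed for illustration; real neurons are vastly more complex:

```python
import numpy as np

# Toy "emulation": we know only the wiring (connection weights), not what the
# network computes, yet simulating the wiring reproduces its behavior.
# Threshold-neuron model assumed for illustration; real neurons are far richer.
rng = np.random.default_rng(42)
n = 8
weights = rng.normal(size=(n, n))     # the "scan": who connects to whom, how strongly
weights[np.eye(n, dtype=bool)] = 0.0  # no self-connections

def step(state):
    """Advance one tick: each neuron fires if its weighted input is positive."""
    return (weights @ state > 0).astype(float)

state = (rng.random(n) > 0.5).astype(float)  # random initial firing pattern
for _ in range(10):
    state = step(state)
print(state)
```

Notice that the simulation never needs to know what the network computes; it only needs the connection weights—which is why Bostrom treats scanning precision, rather than a theory of intelligence, as the main obstacle.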

The Human Connectome Project

The Human Connectome Project has made progress on scanning the human brain and making data on its structure available to the public. Sponsored by the National Institutes of Health, the project aims to develop a detailed structural and functional map of the human brain that can help medical practitioners diagnose neurological disorders and develop treatments for them. The project uses several types of MRI techniques to scan participants’ brains. One ongoing challenge is combining the data from different scans accurately. Another arises from the discovery that individual human brains are remarkably different—even in people who share the same genetic code, like identical twins. Despite these challenges, the project has published brain mapping data from 1,100 healthy adults.

So far, there have been no widely publicized attempts to develop brain simulation algorithms based on data from the Human Connectome Project. Nevertheless, the project provides a window into recent progress and the current state of the art in the kind of brain scanning that would be required for brain-simulation AI.

Spontaneous Generation

Finally, Bostrom points out that it might be possible to create a superintelligent AI inadvertently. Scientists don’t know exactly what the minimum set of components or capabilities for general intelligence is, and there’s already a lot of software that performs specific information-processing operations and can send and receive data over the internet. Hypothetically, a programmer could create a piece of software that, by itself, isn’t even considered AI but that happens to be the final missing component of general intelligence. A superintelligent AI could then arise spontaneously on the internet as the new software begins to communicate with all the other software that’s already running.

Other Technologies Built by Accident 

If Bostrom’s suggestion that superintelligent AI could arise by accident seems far-fetched, it’s worth considering other technologies and phenomena that were discovered or created by accident, such as the microwave oven, the first antibiotics, safety matches, and radioactivity.

But perhaps the best illustration is the 1.7-billion-year-old natural nuclear reactor discovered at a mine at Oklo, in Gabon. The reactor formed when, almost two billion years ago, floods dissolved uranium from a mudflat and swept it into underground pools, where it was absorbed by algae. When the algae died, the uranium they’d concentrated in their cells piled up, eventually becoming a sizable deposit. Subsequent flooding provided the water needed to sustain a nuclear chain reaction in the decaying deposit, and as a result the site hosted a naturally occurring nuclear reactor that ran for approximately 150,000 years before its uranium was depleted. (It’s no longer active, but it has left behind evidence of these reactions.)

While a nuclear reactor likely has fewer essential components than an artificial general intelligence would, the natural reactors at Oklo illustrate how seemingly unrelated components, brought together by chance, can suddenly give rise to entirely new behavior.


Katie Doll
