Why Is AI Scary? 3 Reasons to Sound the Alarm (Mo Gawdat)

Why is AI scary? Should it be? Do you know how it might impact the world and your life?

In Scary Smart, author Mo Gawdat explores the potential outcomes of superintelligent AI. He discusses three main problems we may face: misuse by bad actors, unintended consequences from mistakes, and a shift in how we value human contributions.

Read on to discover Gawdat’s insights and learn why AI’s rapid advancement is both exciting and concerning.

Why AI Is “Scary Smart”

Why is AI scary? Gawdat predicts two potential outcomes of building AI that surpasses us in intelligence: Superintelligent AI can either help us build a utopia where we’ve found solutions for our world’s biggest problems—poverty, hunger, war, and crime—or shape our world into a dystopia. Sci-fi writers have long imagined bleak futures where AI tries to subjugate or exterminate humans. But Gawdat predicts that things will go wrong in slightly less dramatic but potentially more insidious ways in the near future—no killer robots needed. We’ll look at three problems Gawdat thinks are unavoidable. 

#1: People With Bad Intentions Will Task AI With Projects That Hurt Others

The first problem we’ll run into with superintelligent AI might also be the most predictable. Gawdat contends that as AI becomes more advanced, people with selfish intentions will use AI to make money and gain power. They’ll put it to work to sell products, control markets, commit acts of cyberterrorism, spread fake content and disinformation, influence public opinion, manipulate political systems, invade others’ privacy, hack government data, and build weapons. 

Gawdat explains that, at least for a time, AI systems will follow their developers’ agendas. That means that AI systems will compete against each other to get the people behind them as much wealth and power as possible, bounded only by the ethics of the people (and corporations) who develop them.

(Shortform note: Geoffrey Hinton, an AI pioneer who quit his job at Google to speak about the risks of AI, agrees with Gawdat that it’s difficult to prevent people with self-serving intentions from developing AI and using it for nefarious purposes. Hinton explains that tech giants like Google and Microsoft have entered into a competition that we likely can’t stop because the stakes are too high. Not everyone believes the competition between AI models will have dire consequences, though: Rationality author Steven Pinker points out that all intelligent organisms are competitive, and he contends that machines will be “what we allow them to be.” Pinker thinks AI systems won’t hurt people unless we program them to—which Gawdat might say is the real threat.)

#2: Mistakes and Misunderstandings Will Have Unintended Consequences

The second problem that plagues AI’s progress might also sound familiar. Even the most straightforward computer program will have mistakes in its code, and AI is no exception. But Gawdat explains that even simple mistakes can have significant consequences when we put AI in charge of decisions that affect the stock market, the food supply, or the healthcare system.

Instructions get lost in translation between humans and machines because it’s difficult to put our intentions and complex logic into the language that a computer can understand. (That means that there’s often a difference between what we tell a machine to do and what we actually mean, and it’s difficult to overcome this communication problem.) This will become even harder when we present AI with increasingly complicated tasks. 

(Shortform note: To see why it’s hard to tell a computer what we want it to do, consider Boolean logic. To write instructions, you reduce them to an expression that uses the operators “and,” “or,” and “not,” and gives a result of “true” or “false”—an orderly but unfamiliar way of thinking through a task. Imagine the infamous “paperclip problem,” where you task a superintelligent AI with maximizing paperclip production. Using Boolean logic, you might say: “IF (‘resources’ exist AND ‘production_capacity’ is ‘available’), THEN ‘make_paperclips’ is TRUE.” That means that when these conditions are fulfilled, the AI should make more paperclips—logic that philosopher Nick Bostrom contends could lead AI to destroy humanity to make more paperclips.)
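To make that Boolean rule concrete, here’s a minimal Python sketch. The names (resources, production_capacity, make_paperclips) are hypothetical, invented purely for illustration; no real AI system is written this way. Notice what the rule leaves out: it has no concept of “enough” paperclips and no check on costs, which is exactly the gap Bostrom’s thought experiment exploits.

    # Hypothetical illustration of the Boolean rule described in the note above.
    # All names here are invented for the example.
    def make_paperclips(resources: bool, production_capacity: bool) -> bool:
        # IF (resources exist AND production_capacity is available),
        # THEN make_paperclips is TRUE
        return resources and production_capacity

    # The rule answers "keep producing" whenever both conditions hold;
    # nothing in it encodes limits, safety, or human well-being.
    print(make_paperclips(True, True))   # True
    print(make_paperclips(True, False))  # False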

Gawdat also notes that when we develop an AI system to help us with a specific task, the AI will treat that task as its life purpose, a problem to be solved no matter what. As Gawdat explains, every solution comes with tradeoffs, and AI may settle on a solution whose tradeoffs we consider unacceptable. Because the system is so single-mindedly focused on fulfilling its purpose, it will be challenging to ensure it doesn’t compromise people’s safety or well-being in the process.

(Shortform note: Computer scientist Stuart Russell offers a hypothetical example of the unintended consequences of letting AI single-mindedly pursue a goal without any checks on the costs. In a thought experiment, Russell imagines what might happen if we tasked an AI model with reducing the acidification of the oceans. The model might decide to use a chemical reaction between the oceans and the atmosphere to raise the pH of the water (a higher pH means lower acidity). But it might choose a reaction that would remove most of the oxygen from the atmosphere if it doesn’t realize that it needs to keep the earth livable for humans, too. This illustrates what experts call the “King Midas” problem: AI might give us exactly what we ask for, even when that’s not what we want.)

#3: AI Will Change How We Understand Our Value as Humans

The third problem is a more philosophical issue: Gawdat warns that for most of us, our contributions as humans will be of limited value. While many people fear losing their jobs to AI, Gawdat writes that AI won’t necessarily replace humans at work, at least not those who become adept at working with AI. But he also predicts that AI will cheapen the value of what we contribute, reducing the intellectual value of the knowledge we produce and the creative value of the art we make.

(Shortform note: Not everyone agrees with Gawdat that AI will diminish the value of human contributions. Some contend that the most profound thing that AI will do is show us its limits: It will demonstrate what machines can’t do and therefore hold up a mirror to show us what is unique about who we are and what we do in the world. But even if it turns out that AI can do everything that we can, we still have models for preserving our sense of value and dignity as humans. For instance, in Adam, Henri J.M. Nouwen argues that our value lies in our being, not in our doing. He contends that humans are valuable because we exist in community and vulnerability with others—an aspect of the human experience that AI is unlikely to change.)

Additionally, Gawdat anticipates that AI will magnify the disparities between people whose contributions are valued and those whose contributions are not. AI will learn these distinctions by observing how our capitalist society currently treats people differently, and then it will act in ways that entrench that inequality.

(Shortform note: Many experts worry that AI is already learning our biases and prejudices (facial recognition systems, for example, misidentify people of color at disproportionately high rates) and might threaten social justice by reproducing the social inequalities it observes. For instance, AI’s “algorithmic bias” could pose a disproportionate risk to women, undermine LGBTQ identity, and put transgender people at particular risk. These injustices might be even worse if AI leaves us feeling disconnected from one another, as depicted in the 2008 film WALL-E, in which humans have left Earth and live on a spaceship piloted by robots. The film seems to warn that by putting AI in charge, we could lose sight of what’s happening in our world, fair or not, and miss out on the emotional connections and unique perspectives that make us all valuable.)


