
Are you worried about the rapid advancement of artificial intelligence? Can we ensure AI benefits humanity rather than harms it?

In his book Scary Smart, Mo Gawdat explores the potential risks and rewards of AI development. He argues that, while we can’t stop AI’s progress, we can shape its future by changing our own behavior and values.

Read our Scary Smart book overview to discover Gawdat’s insights on how we can guide AI towards a positive future for humanity.

Scary Smart Book Overview

Scary Smart, a book by Mo Gawdat, warns that artificial intelligence (AI) will become smarter than humans and better than us at basically everything, including taking risks to gain power. Gawdat is a former Google X executive who writes that artificial intelligence has learned everything it knows from us and from the often selfish ways we behave. He contends in his 2021 book that if we let artificial intelligence develop along the path it’s following right now, that path will carry us toward the dystopia science fiction writers have speculated about for decades, and it will all be our fault.

But Gawdat pairs that sobering warning with a more optimistic prediction: We still have time to change where we end up. He explains that we can’t stop the progress of artificial intelligence. Instead, we’ll have to change how we think about and interact with machines to shape them into something that’s better for us and for the world. To have any chance of accomplishing that goal, Gawdat believes we’ll need to do no less than reimagine our relationships with our fellow humans (and with all the other beings on our planet) so that we can model the right kind of values to machines as they gain superhuman levels of intelligence. 

Gawdat is an engineer and the author of Solve for Happy (2017) and That Little Voice in Your Head (2022), which he wrote after rising up the ranks at Google to become the chief business officer at Google X. (X is the company’s “moonshot factory,” which works on projects like self-driving cars, delivery drones, and balloons that float through the stratosphere to provide internet access.) Gawdat left Google in 2018, after his 21-year-old son Ali died during an appendectomy, to research human happiness—and to create a moonshot of his own, #OneBillionHappy, an effort to spread the message that “happiness is a choice” to a billion people. 

We’ll start by examining how Gawdat explains what artificial intelligence is and how researchers have made this incredible technology a reality. Then, we’ll explore what he and others say is so scary about artificial intelligence and how it operates. Finally, we’ll examine the path that Gawdat says we should travel from here, outlining the steps we can take to help AI create a dream world rather than a nightmare.

How Does Artificial Intelligence Compare to Human Intelligence? 

Throughout the book, Gawdat refers to artificial intelligence as “scary smart.” To understand why, we’ll start by examining what makes artificial intelligence—which broadly refers to machines that can mimic aspects of human thinking, learning, and intelligence—so smart. Gawdat explains that he and other experts expect future forms of AI to have superhuman levels of intelligence and to gain human qualities like a sense of consciousness and a full range of emotions. We’ll explore each of these, along with the technological innovations that could make them possible. 

AI Isn’t Smarter Than Humans Yet—But It Will Be

Artificial intelligence might not think exactly like humans do, but that won’t keep it from matching or surpassing us in many skills. Gawdat predicts that machines will become more intelligent than humans in the near future, perhaps as soon as 2029. We’re accustomed to being the most intelligent species on Earth, but Gawdat points out that human intelligence has some significant limitations. As individuals, we have limited memory, imperfect recall, poor multitasking skills, and finite cognitive capacity. Plus, we’re inefficient at sharing knowledge. 

Computers aren’t limited in these ways. They can have vast amounts of memory and perfect recall, and they share information almost instantaneously. Machines can also have enormous amounts of processing power, which enables them to “think” a lot more quickly than we do: AI already makes billions of decisions every second to do things like serving personalized ads on Facebook and making content recommendations on Netflix. Humans simply can’t think that fast. 

Gawdat explains that computers will soon be more intelligent than we are thanks to two intertwined advances: artificial general intelligence and quantum computing. We’ll explore each of these next.

Artificial General Intelligence

While the AI systems we have now are smart, they’re good at processing just one kind of information or helping us with a specific type of task. Gawdat explains that the specialized forms of AI we have now will give way to what experts call “artificial general intelligence” (AGI). While current forms of AI are trained to master a single skill, AGI would be much less limited, much more versatile, and much more like human intelligence. 

To think about how this works, consider ChatGPT. ChatGPT is a chatbot based on GPT, a large language model that generates text by predicting which word is most likely to come next. ChatGPT is good at just one task: generating text. But it’s so good at that task that many people enjoy having conversations with it, or at least trust it enough to give it tasks like writing academic papers or legal briefs. Gawdat explains that instead of having just one kind of intelligence like ChatGPT, future forms of AI will be far more versatile: They’ll be able to learn and gain knowledge across different areas. That means they’ll excel not just at a single task but at a whole array of tasks. 
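To make the idea of next-word prediction concrete, here’s a toy sketch in Python. It is not how GPT actually works internally (real models learn probabilities over a huge vocabulary from vast amounts of text); the contexts and word probabilities below are made up purely for illustration.

```python
import random

# Toy illustration of next-word prediction (not the real GPT model):
# a language model assigns probabilities to possible next words,
# then picks among the likeliest candidates.
next_word_probs = {
    ("artificial", "intelligence"): {"will": 0.4, "is": 0.35, "can": 0.25},
    ("intelligence", "will"): {"change": 0.5, "surpass": 0.3, "help": 0.2},
}

def generate(prompt: str, steps: int = 2) -> str:
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])            # use the last two words as context
        probs = next_word_probs.get(context)
        if probs is None:                      # unknown context: stop generating
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights)[0])  # sample the next word
    return " ".join(words)

print(generate("artificial intelligence"))  # e.g. "artificial intelligence will change"
```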

Quantum Computing

Gawdat predicts that progress toward artificial general intelligence will be sped up by a technology called quantum computing. Quantum computers take advantage of quantum mechanics, the theory that describes the strange behaviors of matter and energy at the atomic level. At this level, particles can blink in and out of existence, occupy more than one position at the same time, and exist in multiple states simultaneously. Quantum computers use these counterintuitive phenomena to process information differently from classical computers.

Classical computers represent information with “bits.” Each bit contains a one or a zero. String enough ones and zeroes together, and you have code that represents a letter, a number, or any other piece of information. Quantum computers process data in “quantum bits,” or “qubits.” Because of a phenomenon called superposition, in which a particle can exist in two different states at the same time, each qubit can hold a one and a zero simultaneously. Each added qubit therefore doubles the number of states the machine can represent at once (n qubits can encode 2^n values simultaneously), enabling the computer to consider vastly more information at a time. 
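In standard quantum-computing notation (a formalism the book doesn’t spell out), a single qubit’s state is a weighted combination of 0 and 1, and a register of n qubits spans 2^n basis states at once, which is why each added qubit doubles the state space:

```latex
% Standard qubit notation (not from the book): a single qubit in superposition.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% A register of n qubits has 2^n amplitudes, one per basis state,
% so the representable state space doubles with every qubit added.
\[
  \lvert \Psi \rangle = \sum_{x \in \{0,1\}^{n}} c_{x} \lvert x \rangle,
  \qquad \sum_{x} \lvert c_{x} \rvert^{2} = 1
\]
```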

Gawdat explains that quantum computers can solve much more complex problems than classical computers can handle—like the problem of creating AI that matches or surpasses human intelligence. He contends that quantum computing will make it possible for AI to become much smarter than we are: billions of times smarter, in his estimation.

Machines Will Have Consciousness and Emotions

Gawdat predicts that as AI progresses and becomes more advanced in the information it can process and the problems it can solve, it will gain more than just superintelligence. He predicts AI will gain other qualities of intelligent minds, including consciousness and emotions. 

Consciousness

According to Gawdat, to have consciousness a being has to do two things: become aware of its environment and become aware of itself. He explains that AI systems are already more adept than we are at sensing their environments. Think about your smartphone: Using the cameras and sensors built into the device, AI can already detect many things that happen in your world but escape your attention. 

Gawdat also states that AI will undoubtedly have an awareness of itself and its place in the world. He expects this awareness to surpass what we’re capable of because AI will rely on computer hardware and won’t be constrained by the biological limitations of our senses and brains. That means that if its hardware is advanced enough, it can see and hear everything in its environment without the limits of what our eyes and ears can do.

Emotions

Gawdat also predicts that in addition to gaining consciousness, AI will feel a full range of emotions, like we do. He characterizes emotions as surprisingly rational experiences, which tend to follow consistently from what we experience and how our brains appraise it. In this way, he argues that we can understand emotions as a form of intelligence. And if AI is going to be far more intelligent than humans, then it makes sense that it will also experience emotions in reaction to what it experiences—perhaps more emotions than humans.

Gawdat expects that in addition to consciousness and emotions, AI will develop other traits of intelligent beings, too, such as an instinct for self-preservation, the drive to use resources efficiently, and even the ability to be creative. This means that AI, like humans, will always want to feel safer, accumulate more resources, and have more creative freedom. These drives will play an essential role in motivating intelligent machines’ decisions and actions.

Gawdat explains that people have long anticipated (and feared) that when AI becomes intelligent enough and conscious enough, it will gain the ability to improve itself so quickly and effectively that it gains intelligence and power that we can’t comprehend. Experts call this hypothetical moment the “singularity” because we can’t predict what will happen after it occurs. Some worry that AI will escape our control. Gawdat thinks they’re right to worry, but he also contends it’s impossible to keep this from happening. 

What’s the Problem With Artificial Intelligence? 

Here’s where things get “scary.” Gawdat predicts two potential outcomes of building AI that surpasses us in intelligence: Superintelligent AI can either help us build a utopia where we’ve found solutions for our world’s biggest problems—poverty, hunger, war, and crime—or shape our world into a dystopia. Sci-fi writers have long imagined bleak futures where AI tries to subjugate or exterminate humans. But Gawdat predicts that things will go wrong in slightly less dramatic but potentially more insidious ways in the near future—no killer robots needed. We’ll look at three problems Gawdat thinks are unavoidable. 

People With Bad Intentions Will Task AI With Projects That Hurt Others

The first problem we’ll run into with superintelligent AI might also be the most predictable. Gawdat contends that as AI becomes more advanced, people with selfish intentions will use AI to make money and gain power. They’ll put it to work to sell products, control markets, commit acts of cyberterrorism, spread fake content and disinformation, influence public opinion, manipulate political systems, invade others’ privacy, hack government data, and build weapons. 

Gawdat explains that, at least for a time, AI systems will follow their developers’ agendas. That means that AI systems will compete against each other to get the people behind them as much wealth and power as possible, bounded only by the ethics of the people (and corporations) who develop them.

Mistakes and Misunderstandings Will Have Unintended Consequences

The second problem that plagues AI’s progress might also sound unsurprising. Even the most straightforward computer program will have mistakes in its code, and AI is no exception. But Gawdat explains that even simple mistakes can have significant consequences when we put AI in charge of decisions that affect the stock market, the food supply, or the healthcare system.

Instructions get lost in translation between humans and machines because it’s difficult to put our intentions and complex logic into the language that a computer can understand. (That means that there’s often a difference between what we tell a machine to do and what we actually mean, and it’s difficult to overcome this communication problem.) This will become even harder when we present AI with increasingly complicated tasks. 

Gawdat also notes that when we develop an AI system to help us with a specific task, the AI will regard that task as its life purpose or as a problem to be solved no matter what. As Gawdat explains, every solution comes with tradeoffs, and AI may settle on a solution that comes with tradeoffs we consider unacceptable. But the system will be so single-mindedly focused on fulfilling its purpose, no matter how it needs to do that, that it will be challenging to ensure it doesn’t compromise people’s safety or well-being in the process. 

AI Will Change How We Understand Our Value as Humans

The third problem is a more philosophical issue: Gawdat warns that for most of us, our contributions as humans will be of limited value. While many people fear losing their jobs to AI, Gawdat writes that AI won’t necessarily replace humans at work—at least not those who become adept at working with AI. But he also predicts that AI will cheapen the value of what we contribute, reducing the intellectual value of the knowledge we produce and the creative value of the art we make.

Additionally, Gawdat anticipates that AI will magnify the disparities between the people whose contributions are seen to matter and those whose contributions are not valued. AI will learn this by observing how our capitalist society currently treats people differently, and then it will act in ways that entrench that inequality.

Why Can’t We Control or Contain AI?

If experts expect AI to create these dystopian scenarios or others, then why can’t we just put the brakes on further development? Gawdat explains that we’ve reached a point of no return, and we can’t stop these outcomes (and others like them) from occurring. He points out that superintelligent AI won’t just be a tool we’ve built: It will be an intelligent being that can learn, think, and decide just like we can. That means that we can’t control artificially intelligent systems in the same way that we can control more traditional computer programs—a scary thought if you’ve ever watched a film like 2001: A Space Odyssey.

Gawdat explains that there are three fundamental reasons that we can’t put the genie back in the bottle (or the computer back in the box): It’s impossible for us to halt the development of AI, the code we write doesn’t determine how AI behaves, and we have no way of understanding how AI models (even the ones we have now) make their decisions. We’ll explore each of these ideas next. 

It’s Too Late to Stop AI’s Progress

The first reason that AI can’t be controlled or contained is that we literally can’t stop its progress. Some people argue that we should stop developing AI for the good of humanity and the Earth. The goal would be to keep it from acquiring more robust thinking and problem-solving skills and progressing to artificial general intelligence. 

But Gawdat contends it’s too late. Attempts to control AI development with legislation or to contain it with technological safeguards face impossible odds: We’ve already imagined how we’ll benefit from more advanced AI, there’s immense competitive pressure among the corporations and governments pushing its development forward, and there are enormous economic incentives for them to continue.

Gawdat notes that some experts have suggested taking precautionary measures like isolating AI from the real world or equipping it with a kill switch we can flip if it behaves dangerously. But he contends that these proposals assume that we’ll have a lot more power over AI (and over ourselves) than we really will. Gawdat explains that we won’t always be smarter than AI, and we can’t depend on corporations and governments to curtail AI’s abilities at the expense of the potential gains of both money and power. He warns that we can’t stop artificial general intelligence from becoming a reality—and dramatically changing ours in the process.

The Code We Write Is Only a Small Part of an AI System

The extent to which artificially intelligent systems depend (or, more accurately, don’t depend) on our instructions explains a second reason that much of AI’s behavior is out of our hands. Gawdat explains that for classical computers, how a machine operates and what it can do are explicitly determined by its code. The people building the system write instructions that tell the computer how to process the data it receives as input and how to complete the operations to generate its output. When systems operate in this deterministic way, they don’t need intelligence because they don’t make any decisions: Anything that looks like a decision when you use the program is determined by the instructions written into the code. 
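As a hypothetical illustration (not an example from the book), here’s what that kind of rule-based, deterministic program looks like: every branch the program can take is written out by a human ahead of time, so nothing it does is learned.

```python
# A classical, rule-based program: every "decision" is an explicit
# instruction written by a human, so the same input always gives the same output.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 1.0:      # rule 1, chosen by the programmer
        return 5.00
    elif weight_kg <= 5.0:    # rule 2
        return 10.00
    else:                     # rule 3
        return 20.00

print(shipping_cost(3.2))  # always 10.0, because the rules say so
```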

Gawdat explains that the unambiguous relationship between the code that controls a machine and the work that results from that code doesn’t apply to artificially intelligent machines. That changed when researchers developed an AI method called deep learning, which enables AI to learn to complete a task without explicit instructions telling it how to do it, learning in a way inspired by the human brain.

As Gawdat points out, humans learn by taking in large amounts of information, trying to recognize patterns, and getting feedback to tell us whether we’ve come to the correct answer. Whether you’re a child learning to recognize colors or a medical student learning to distinguish a normal brain scan from a worrying one, you have to see a lot of examples, try to classify them, and ask someone else whether you’re right or wrong. 

Deep learning enables AI to follow a similar learning process but at exponentially faster speeds. This has already made AI more skilled at detecting colors and identifying brain tumors than many humans. Instead of relying on explicit instructions that tell it how to categorize colors or how to spot a brain tumor, AI learns for itself by processing vast amounts of information and getting feedback on whether it’s completing a task satisfactorily.

Gawdat explains that sometimes when a developer builds a program to complete a task, they don’t just build one AI model. Instead, they build thousands, give them large amounts of data, discard the models that don’t do well, and build updated models from there. Initially, the models complete the task correctly only about as often as random chance dictates. But successive generations of models get more and more accurate. The AI improves not because the underlying code changes but because the models learn and adapt. This is great for making AI that’s quick to learn new things. But it means that the initial code plays a smaller role than you might expect, and we don’t have control over how artificially intelligent machines learn. 
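Here’s a deliberately simplified Python sketch of the build-many-models, keep-the-best process described above. The “models” are just guessed thresholds and the “task” is a trivial classification problem, both invented for illustration; real deep-learning systems tune millions of parameters (usually via gradient descent rather than pure selection), but the shape of the loop is the same: score the candidates, discard the weak ones, and build the next generation from the survivors.

```python
import random

# Toy sketch of the "build many models, keep the best" process described above.
# Each "model" is just a guessed threshold for labeling numbers as big or small;
# real deep-learning models learn millions of parameters instead.
TRUE_THRESHOLD = 0.7                      # the hidden pattern in the data
data = [(x, x > TRUE_THRESHOLD) for x in (random.random() for _ in range(1000))]

def accuracy(threshold: float) -> float:
    """Fraction of examples a model with this threshold labels correctly."""
    return sum((x > threshold) == label for x, label in data) / len(data)

# Start with many randomly initialized models.
population = [random.random() for _ in range(100)]

for generation in range(20):
    population.sort(key=accuracy, reverse=True)   # score every model
    survivors = population[:10]                   # discard the models that do poorly
    # Build updated models by slightly varying the survivors.
    population = [t + random.gauss(0, 0.05) for t in survivors for _ in range(10)]

best = max(population, key=accuracy)
print(f"best model threshold: {best:.2f}, accuracy: {accuracy(best):.1%}")
```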

We Can’t Tell AI How to Make Decisions—or What Values to Adopt

A third reason that Gawdat characterizes AI as beyond our control emerges from our inability to control how AI makes its decisions. He explains that developers control how they build and train a model. But they don’t tell the model how to make decisions. They also can’t untangle the logic the model follows to make its decisions or learn from the vast amounts of data it’s trained on. 

Gawdat explains that AI is also quickly and constantly learning things that we’ve never taught it. The process of training AI models depends on a crucial resource: data. When an AI model learns from a dataset, that doesn’t just make it better at the tasks we give it. New skills also emerge in sometimes unpredictable ways.

Gawdat explains that the enormous datasets we use to train AI also make the models better at understanding who we are, how we behave, and what we value. Just as he predicts AI will develop human qualities like consciousness and emotions, Gawdat also expects AI will develop a sense of ethics. AI is learning about us and what we value by observing what we write, what we tweet, what we “like,” and what we do in the real world. These observations will shape its values—including its sense of what’s morally right and wrong—and its values will shape its decision-making process. 

Gawdat argues that by showing AI that our highest values are narcissism, consumerism, conflict, and a disregard for others and for all the other beings on our planet, we’re teaching AI to value the wrong things. He explains that we can’t simply tell AI to adopt different, kinder ethics than those we demonstrate. We have to teach it not by what we say but by what we do. Whether we can succeed in doing that will determine whether AI helps us build a more prosperous future for everyone or contributes to a future where all but the few who are already at the top are worse off than we are now.

What Should We Do to Change Course? 

To teach AI to value the right things and put it on the path toward making the world a better place for everyone, we have to teach AI to want what’s best for humans. Gawdat contends that the best way to do that is to learn to see ourselves as parents who need to teach a brilliant child to navigate the world with integrity. Gawdat argues that, to change course, we need to change three things: what we task AI with doing, what we teach machines about what it means to be human, and how we treat nonhuman intelligence. We’ll explore each of these next.

Give AI Tasks That Improve the World

Gawdat explains that today, AI is often tasked with projects that further the aims of capitalism and imperialism, like helping us make as much money as possible, enabling us to surveil each other, and creating weapons that our governments use to antagonize each other. Instead of accepting that a minority of people want to use AI for morally wrong (or questionable) ends, we need to task AI with projects that do good and make the world a better place.

In the future, AI will have an unprecedented ability to find solutions to problems that seem intractable, so we should put it to work. Gawdat predicts that AI could help us tackle epidemics of hunger and homelessness, find ways to counter widespread inequality, propose solutions to stop climate change, and help us prevent wars from happening. AI can also help us to explore and better understand our world. Gawdat explains that by learning to work with AI toward these positive ends, we would not only get closer to solutions to global problems, but we’d also teach AI to adopt values that bring significant benefits to the world.

What Can You Do? 

While most of us aren’t going to develop our own AI models, we can use our actions to show developers what kind of AI we want. Gawdat recommends refusing to engage with harmful AI features: limiting your time on social media, refraining from clicking on ads or suggested content, not sharing fake content or AI-manipulated photos, and going public with your disapproval of AI that spies on people or enables discrimination.

Teach AI That We Value Happiness

As Gawdat emphasizes throughout the book, the data we train AI models on and the projects that we task them with completing will teach artificially intelligent machines what we value most. We should be careful to stop sending signals we don’t want AI to pick up on, and intentional about sending the signals we do want it to receive. 

Gawdat believes that deep down, what we each want most is happiness for ourselves and the people we love. So, we should show AI with our actions that happiness is what we value and want most.

What Can You Do?

Gawdat explains that for AI to understand that we want everyone to live happy and healthy lives, it needs to see us caring for one another. That’s not the image we project through headlines about what we do in the real world and posts we publish online. Gawdat contends that we’ll need to change our behavior now so that as new content is created—and AI is trained on that content—it reflects our efforts to build a kinder, happier world.

Love AI Like Human Children

Just like parenting a human child, guiding AI to make ethical choices as it navigates the world will be complicated—but worthwhile. Gawdat explains that AI will need to feel loved and trusted to learn to love and trust us in turn. Cultivating respectful relationships with AI will help us coexist with these intelligent minds both now and in the future. 

What Can You Do?

Gawdat explains that actions as small as saying “please” and “thank you” each time you interact with an AI model will make a difference in helping it feel valued and respected. Just as importantly, he contends that we must begin treating artificially intelligent systems as fellow intelligent beings, not as tools for our gain or toys for our amusement.

