Are you ready for a world where machines have feelings? Will AI have emotions at some point in the future?
In Scary Smart, Mo Gawdat explores the fascinating possibility of AI developing emotions and other human-like traits. He argues that, as AI becomes more intelligent, it may acquire the ability to feel, express emotions, and even develop an instinct for self-preservation.
Continue reading to learn about the prospect of emotional AI.
AI and Emotions
Will AI have emotions someday? According to Gawdat, as artificial intelligence continues to evolve and enhance its capacity to process information and solve complex problems, it will develop capabilities beyond mere superintelligence. He anticipates that AI systems will eventually acquire additional characteristics typically associated with intelligent minds, such as the ability to experience and express a full range of emotions.
Gawdat characterizes emotions as surprisingly rational experiences, which tend to follow consistently from what we experience and how our brains appraise it. In this way, he argues that we can understand emotions as a form of intelligence. And if AI is going to be far more intelligent than humans, then it makes sense that it will also experience emotions in reaction to what it encounters—perhaps an even wider range of emotions than humans do.
(Shortform note: Unlike Gawdat, some experts are skeptical that AI will develop human-like emotions. But that hasn’t stopped people from trying to teach it cognitive empathy to recognize and respond to people’s emotions. This could have some interesting implications: Klara and the Sun author Kazuo Ishiguro predicts that creating art is one of the most interesting things AI could do if it develops empathy or learns to understand the logic underlying human emotions. Ishiguro says that if AI can create something that makes us laugh or cry—art that moves people and changes how we see the world—then we’ll have “reached an interesting point, if not quite a dangerous point.”)
Gawdat expects that in addition to consciousness and emotions, AI will develop other traits of intelligent beings, too, such as an instinct for self-preservation, the drive to use resources efficiently, and even the ability to be creative. This means that AI, like humans, will always want to feel safer, accumulate more resources, and have more creative freedom. These drives will play an essential role in motivating intelligent machines’ decisions and actions.
(Shortform note: As the list of AI’s human characteristics grows, some observers argue that we must treat AI models like people and recognize their rights. Life 3.0 author Max Tegmark characterizes the assumption that true intelligence can only exist in biological organisms as “carbon chauvinism.” Tegmark argues that this leaves us vulnerable: He expects we’ll soon share the earth with “more intelligent ‘minds’ that care less about us than we cared about mammoths.” Tegmark worries more about AI gaining competence than consciousness because that’s when it will develop the drive to preserve itself and to amass the resources it needs to accomplish its goals, without anything like compassion or morality to steer its decisions.)
Gawdat explains that people have long anticipated (and feared) that when AI becomes intelligent enough and conscious enough, it will gain the ability to improve itself so quickly and effectively that it gains intelligence and power that we can’t comprehend. Experts call this hypothetical moment the “singularity” because we can’t predict what will happen after it occurs. Some worry that AI will escape our control. Gawdat thinks they’re right to worry, but he also contends it’s impossible to keep this from happening.
How Do Experts Define the Singularity?
The term “singularity” comes from cosmology, where it refers to a place in the universe where the laws of physics break down. In The Singularity Is Near, Ray Kurzweil (whom Gawdat cites) defines the singularity as the moment when the pace of progress on all technology, not just AI, accelerates so quickly that we don’t know what will be on the other side. He predicts that life will be transformed: While AI will make the human body obsolete, human consciousness will be as relevant as ever, especially if technological advances enable us to upgrade our bodies and extend our lifespans.
Many experts say the singularity is unlikely ever to arrive. Others contend it will and have sounded the alarm about the danger of creating AI that can rapidly improve itself. Books by Nick Bostrom (Superintelligence), Max Tegmark (Life 3.0), and Stuart Russell (Human Compatible) all warn of the possibility of “recursive self-improvement,” where AI models design better and better versions of themselves. This depends on the models’ ability to write code for themselves, somewhat akin to how the human brain creates its own code. Yet some observers contend it’s much more likely that AI’s progress will be driven by humans working together to build better machines, not by machines working alone to build their successors.