An AI robot and a human child looking at each other illustrates the balance of AI and human agency

What happens when artificial intelligence becomes better than humans at understanding our social systems and stories? How can we maintain control over AI as it grows more powerful?

In his book Nexus, Yuval Noah Harari explores the complex relationship between AI and human agency. He warns that AI systems are advancing rapidly, potentially surpassing our ability to comprehend and shape the social structures that organize our world.

Keep reading to discover how we can maintain human agency while harnessing the benefits of AI technology.

AI and Human Agency

Harari argues that maintaining human agency is our chief safeguard as AI technology grows more advanced.

Harari warns that we’re creating AI systems that will soon surpass human capabilities in understanding and manipulating the shared stories that organize our societies. This shift represents a real danger: while humans have traditionally maintained power through our unique ability to create and control shared fictions—such as laws, money, and social institutions—AI is poised to beat us at our own game.

The root of this problem lies in human nature. We often lack the patience and attention span to dig deep into complex truths, preferring simpler stories that are easier to grasp. AI systems, in contrast, can process vast amounts of information and work together in ways humans can’t—while one AI system analyzes market trends, another can simultaneously study legal documents, and thousands more can coordinate to spot patterns across these different domains. They can comprehend intricate systems—like legal codes and financial markets—far better than most humans can. They can even create entirely new frameworks that go beyond human understanding. This capability gap marks an unprecedented shift in power.

(Shortform note: While Harari warns that AI will soon surpass human capabilities in understanding complex systems and narratives, some AI researchers counter that a major obstacle is AI’s lack of common-sense reasoning and understanding of context, which humans take for granted. For example, AI may fail to differentiate between a white shirt and a white wall—a mistake that seems silly to us but reveals the challenges AI faces in making educated assumptions and responding appropriately to familiar situations. Researchers are now trying to solve this problem by studying how humans process emotions and make decisions, combining insights from AI and cognitive science.)

Harari writes that, for tens of thousands of years, humans have been the sole architects of our information networks, generating and sharing the ideas that shape our societies. But, as AI systems become more sophisticated, we’ll increasingly rely on them to process information and make decisions. When we delegate decisions, we also surrender our understanding of the information that drives them—potentially giving up our position as the primary shapers of human society. (Shortform note: Experts say AI can augment our decision-making by sifting through mountains of data to spot patterns invisible to humans. From financial markets to climate science, this enables AI to provide insights that could solve our most complex challenges.)

(Shortform note: Many experts worry that, as AI learns to understand our world, it’s adeptly picking up our biases and prejudices—for example, facial recognition AI misidentifies people of color—and might threaten social justice by reproducing the social inequalities it observes. AI’s “algorithmic bias” could pose a disproportionate risk to women, undermine LGBTQ identity, and put transgender people at particular risk. These injustices might be even worse if AI leaves us feeling disconnected from each other, as in the 2008 film WALL-E, in which humans have left Earth and live aboard a spaceship piloted by robots. Having put AI in charge, the humans in the movie lose touch with what’s happening around them.)

How to Fix It: Focus on Maintaining Human Agency

Harari believes that, to navigate this transition, we must develop new frameworks and ethical guardrails that maintain human agency. He suggests training AI systems to express self-doubt, seek human feedback, and acknowledge their own fallibility—essentially equipping them with an awareness of the limits of their knowledge. He also recommends using AI to augment human decision-making rather than replace it, which would preserve human values and oversight.

(Shortform note: Can AI really express self-doubt? Experts say that dealing with uncertainty through probabilistic techniques can make AI smarter by allowing it to express confidence levels in predictions and avoid overconfident mistakes, which is crucial for high-stakes applications like autonomous vehicles.)
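To make the idea of machine self-doubt concrete, here is a minimal sketch—not from Harari's book, and with illustrative names and a hypothetical threshold—of how a system can turn raw model scores into probabilities and defer to a human whenever its top answer isn't confident enough:

```python
import math

def softmax(scores):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_doubt(scores, labels, threshold=0.8):
    """Return a label only when the model is confident enough;
    otherwise defer the decision to a human reviewer."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return ("defer_to_human", probs[best])
    return (labels[best], probs[best])
```

For example, scores of `[2.0, 1.9, 0.1]` over the labels `["stop", "go", "wait"]` leave the model torn between its top two options (each near 50% probability), so it defers; scores of `[5.0, 0.5, 0.1]` yield a confident "stop." The threshold is the adjustable dial that keeps humans in the loop for uncertain cases.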

(Shortform note: Experts say that how much human agency we can maintain as AI becomes more powerful depends heavily on whether we establish appropriate governance and ethical frameworks—and whether AI developers prioritize user control and transparency in AI decision-making. They contend that developers must give users control over AI systems through adjustable settings, explainable decisions, and override options so that humans stay informed and in charge. But they worry that some industries and companies may opt for more autonomous AI decision-making without human intervention, reducing individual control.)
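The controls the note describes—explainable decisions and override options—can be sketched in a few lines. This is a hypothetical illustration (the class and field names are assumptions, not any real library): an AI recommendation is paired with its explanation, and a human's override, when given, always takes precedence.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernedDecision:
    """Pairs an AI recommendation with an explanation and an
    optional human override, so the human stays in control."""
    recommendation: str
    explanation: str
    human_override: Optional[str] = None

    @property
    def final(self) -> str:
        # The human's choice, when provided, always wins.
        if self.human_override is not None:
            return self.human_override
        return self.recommendation

# Example: the AI recommends denying a loan; a human reviewer overrides it.
decision = GovernedDecision(
    recommendation="deny",
    explanation="Debt-to-income ratio above the configured limit.",
)
decision.human_override = "approve"
```

The design choice here is that autonomy is opt-in: the system acts on its own recommendation only when no human has weighed in, which is the inverse of the fully autonomous decision-making the note warns about.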

AI and Human Agency: A Delicate Balance (Harari)

Elizabeth Whitworth

