Yuval Noah Harari (AI thought leader) at the 2024 Frankfurt Book Fair

What’s really happening when AI systems choose the information we see online? How can we maintain control over technology that’s becoming better than humans at understanding our world?

In his book Nexus, Yuval Noah Harari explores how artificial intelligence is reshaping our relationship with information. From social media algorithms to sophisticated language models, AI systems are increasingly determining what we read, watch, and believe.

Read more to get Yuval Noah Harari’s AI insights and to better understand how to navigate this changing landscape.

Image credit: Martin Kraft via Wikimedia Commons

Yuval Noah Harari on AI

According to Yuval Noah Harari, AI must be understood in the context of our era. He explains that we’re living in an “information age,” where knowledge is proliferating, access to information is democratized, and everyone with a smartphone and internet access can share their ideas with the world. As we develop tools such as AI, we’re increasing the speed at which stories can be shared. When you consider what makes a human society free, equitable, or democratic, having more information sounds like an inherent good. (An especially American expression of this is the idea, articulated by Thomas Jefferson, that a well-informed electorate plays a vital role in keeping authorities in check and guarding against tyranny.)

But, counter to that notion, Harari worries that the very developments making information more accessible threaten to tip the balance toward the most extreme, least truthful, and most divisive messages.

(Shortform note: Even before AI entered the scene, some experts questioned whether our increasing access to information is an inherent good. Early internet idealists envisioned a digital utopia where open access to knowledge would lead to more informed, rational discourse that disrupted monopolies of information. True to that vision, the “old internet”—characterized by decentralized blogs and forums—fostered niche communities and a diversity of perspectives. However, the “new internet” is dominated by profit-driven platforms that Kyle Chayka, author of Filterworld, argues “flatten culture” by concentrating the flow of information in particular ways. As a result, new monopolies of information are emerging—and undermining the internet’s democratic potential by steering public discourse toward homogenized content.)

Because humans are wired to seek out a good story rather than to pursue the truth, putting AI in a position to determine what ideas we’re exposed to could have potentially disastrous consequences. Harari identifies three main dangers: AI’s disregard for truth, its ability to manipulate and polarize us, and its potential to surpass human understanding of the world. For each of these threats, he offers specific recommendations for how we can maintain human agency and control over our information landscape.

Danger #1: Since We Don’t Prioritize the Truth, AI Doesn’t Either

Scientists have made it possible to build AI models that can generate language and tell stories much as humans do. Harari contends that AI’s ability to create compelling stories and produce the illusion of emotions (and emotional intimacy) is where its real danger lies. When we talk to an AI-powered chatbot such as ChatGPT, it’s easy to forget that these systems aren’t human and have no vested interest in telling the truth. That will only become harder to remember as AI gets better at mimicking human emotions and creating the illusion that it thinks and feels as we do. It will become correspondingly easier to lose sight of the fact that AI isn’t prioritizing the truth when it selects and generates information for us.

Harari says that AI already influences what information we consume: An algorithm—a set of mathematical instructions that tell a computer how to solve a problem—chooses what you see on a social network or a news app. Facebook’s algorithm, for example, chooses posts to maximize the time you spend in the app. The best way for it to do that is not to show you stories that are true but to show you content that provokes an emotional reaction. So it selects posts that make you angry, fuel your animosity toward people who aren’t like you, and confirm what you already believe about the world. That’s why social media feeds are flooded with fake news, conspiracy theories, and inflammatory ideas. Harari thinks this effect will only become more pronounced as AI curates and creates more of the content we consume.
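
To make the mechanism concrete, here’s a minimal, hypothetical sketch of an engagement-optimized ranker in Python. Every name, weight, and score below is an illustrative assumption, not any real platform’s code; the structural point is that a truthfulness score can exist in the data and still play no role in what gets ranked first.

```python
# Hypothetical sketch of an engagement-maximizing feed ranker.
# All field names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_outrage: float    # model's guess at emotional provocation, 0-1
    predicted_agreement: float  # fit with the user's existing beliefs, 0-1
    truthfulness: float         # fact-check score, 0-1

def engagement_score(post: Post) -> float:
    # Truthfulness is available but never consulted: the ranker
    # optimizes only for predicted time-in-app.
    return 0.6 * post.predicted_outrage + 0.4 * post.predicted_agreement

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, accurate report", 0.1, 0.5, truthfulness=0.9),
    Post("Inflammatory conspiracy post", 0.9, 0.8, truthfulness=0.1),
])
print([p.text for p in feed])  # the conspiracy post ranks first
```

Nothing in this toy ranker is malicious; the bias toward inflammatory content falls out of the objective function alone, which is exactly Harari’s point.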

How to Fix It: Pay Attention to What’s True

Harari argues that we need to take deliberate steps to tilt the balance in favor of truth as AI becomes more powerful. While his proposed solutions are somewhat abstract, he emphasizes two main approaches: being proactive about highlighting truthful information and maintaining decentralized networks where information can flow freely among institutions and individuals who can identify and correct falsehoods.

Danger #2: We’re Becoming Easier to Manipulate and Polarize

Harari warns that, as AI increasingly controls what information we see, algorithms will push us toward more extreme ideas and greater polarization. We can already see this happening with today’s algorithmically stoked outrage and clickbait-fueled misinformation. Harari believes the problem will only intensify as AI becomes more sophisticated and commercialized, and he predicts AI systems will create, interpret, and spread stories without human intervention. One system might select pieces of information, another spin that information into a story, and yet another determine which stories to show to which users. This will leave us increasingly vulnerable to manipulation by AI systems and the corporations that control them.
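
The pipeline Harari predicts can be pictured as three independent systems chained together. The sketch below is purely hypothetical (select_items, write_story, and match_audience are invented stand-ins for three separate AI models); what it illustrates is that no step requires a human in the loop.

```python
# Hypothetical three-stage content pipeline; each function stands in
# for a separate AI system, and no human appears anywhere in the chain.

def select_items(raw_feed: list[str]) -> list[str]:
    """System 1: choose which events to amplify."""
    return [item for item in raw_feed if "conflict" in item]

def write_story(items: list[str]) -> str:
    """System 2: spin the selected items into a narrative."""
    return "BREAKING: " + "; ".join(items)

def match_audience(story: str, users: list[str]) -> dict[str, str]:
    """System 3: decide which users see which story."""
    return {user: story for user in users}

raw_feed = ["conflict at the border", "quiet diplomatic progress"]
targeted = match_audience(write_story(select_items(raw_feed)), ["user_a", "user_b"])
print(targeted)
```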

Harari explains that this represents a significant shift in power: The ability to set the cultural agenda and shape public discourse—traditionally the domain of newspaper editors, book authors, and intellectuals—will increasingly belong to AI systems optimized not for truth or social cohesion but for engagement and profit.

How to Fix It: Build Institutions to Help People Understand What AI Is Doing

To counter AI’s growing influence over public opinion, Harari calls for the creation of new institutions to monitor artificial intelligence and inform the public about its capabilities and risks. He argues that we shouldn’t let tech giants regulate themselves. While his vision for these oversight institutions remains abstract, he suggests they should function somewhat like today’s free press or academic institutions, serving as independent watchdogs that can help the public understand and evaluate AI’s decisions and actions. Harari frames this as primarily a political challenge, arguing that we need the collective will to establish these safeguards.

Danger #3: We’re Setting AI Up to Understand the World Better Than We Do

Harari warns that we’re creating AI systems that will soon surpass human capabilities in understanding and manipulating the shared stories that organize our societies. This shift represents a real danger: While humans have traditionally maintained power through our unique ability to create and control these shared fictions—such as laws, money, and social institutions—AI is poised to beat us at our own game.

The root of this problem lies in human nature. We often lack the patience and attention span to dig deep into complex truths, preferring simpler stories that are easier to grasp. AI systems, in contrast, can process vast amounts of information and work together in ways humans can’t: While one AI system analyzes market trends, another can simultaneously study legal documents, and thousands more can coordinate to spot patterns across these different domains. They can comprehend intricate systems—such as legal codes and financial markets—far better than most humans can. They can even create entirely new frameworks that go beyond human understanding. This capability gap marks an unprecedented shift in power.

For tens of thousands of years, humans have been the sole architects of our information networks, generating and sharing the ideas that shape our societies. But as AI systems become more sophisticated, we’ll increasingly rely on them to process information and make decisions. When we delegate decisions, we also surrender our understanding of the information that drives them—potentially giving up our position as the primary shapers of human society.

How to Fix It: Focus on Maintaining Human Agency

Harari believes that, to deal with this transition, we must develop new frameworks to maintain human agency and ethical guardrails. He suggests training AI systems to express self-doubt, seek human feedback, and acknowledge their own fallibility—essentially equipping them with an awareness of the limits of their knowledge. He also recommends that we use AI to augment human decision-making instead of replacing it, which would help retain human values and oversight.
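
One way to picture Harari’s recommendation is a confidence gate: the system answers only when its self-reported confidence clears a threshold and otherwise defers to a person. This is a minimal sketch under assumed names (answer_with_confidence and ask_human are hypothetical stand-ins), not a description of any deployed system.

```python
# Minimal human-in-the-loop sketch: the model defers to a person
# whenever its self-reported confidence is low. All names are hypothetical.

CONFIDENCE_THRESHOLD = 0.8

def answer_with_confidence(question: str) -> tuple[str, float]:
    # Stand-in for a real model call returning an answer plus a
    # self-reported confidence in [0, 1].
    return "42", 0.35

def ask_human(question: str) -> str:
    # Stand-in for escalation to a human reviewer.
    return input(f"[needs human review] {question} -> ")

def answer(question: str) -> str:
    response, confidence = answer_with_confidence(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Express self-doubt and seek human feedback rather than
        # asserting a shaky answer.
        return ask_human(question)
    return response
```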

The Real Risk: How Humans Choose to Use AI

The existential threat of artificial intelligence, Harari argues, doesn’t come from malevolent computers but from human decision-making. While we often hear that technology itself poses the danger—that we repeatedly create tools with the potential to destroy us—Harari sees the core problem differently. The real risk lies in how humans choose to use these powerful new tools, especially when we make those choices based on bad information.

This insight shifts the focus from AI itself to the human systems that control it. Harari warns that if paranoid dictators or terrorists gain unlimited power over AI systems, catastrophic consequences could follow. But these outcomes aren’t inevitable; they depend entirely on human decisions about how to develop and deploy the technology.

Harari’s conclusion is ultimately hopeful: If we can understand the true impact of our choices about AI—and ensure those choices are based on reliable information rather than manipulation or misinformation—we can harness this powerful technology to benefit humanity rather than harm it. The key is not to fear AI itself but to be thoughtful and intentional about how we choose to use it. Like any tool, AI can serve positive or negative ends, and we have to prioritize choices that will benefit humanity, not destroy it.


Elizabeth Whitworth

Elizabeth has a lifelong love of books. She devours nonfiction, especially in the areas of history, theology, and philosophy. A switch to audiobooks has kindled her enjoyment of well-narrated fiction, particularly Victorian and early 20th-century works. She appreciates idea-driven books—and a classic murder mystery now and then. Elizabeth has a blog and is writing a book about the beginning and the end of suffering.
