
What happens when artificial intelligence starts controlling the stories we tell and share? How does AI’s growing influence over information pose a bigger threat than job displacement or robot uprisings?
Yuval Noah Harari’s Nexus: A Brief History of Information Networks From the Stone Age to AI explores how AI is transforming the way humans share and consume information. He examines historical patterns of information technology to reveal how AI’s control over our cultural narratives could reshape civilization itself.
Keep reading to discover why having access to more information doesn’t necessarily lead to better understanding—and what we can do to maintain human agency in an AI-driven world.
Overview of Yuval Noah Harari’s Nexus
Yuval Noah Harari’s Nexus: A Brief History of Information Networks From the Stone Age to AI (2024) cuts through the typical artificial intelligence (AI) doomsday scenarios to identify a more subtle but potentially more devastating threat: AI’s growing power to control how humans share and consume information. While others warn about AI taking our jobs or turning hostile, Harari argues that AI’s true danger lies in its ability to manipulate the stories we tell—the very foundation of human civilization.
Drawing on historical examples from the printing press to social media algorithms, Harari demonstrates how changes in information technology have repeatedly transformed society, sometimes with catastrophic consequences. He argues that AI represents the most significant transformation yet, as it may soon surpass humans at our most distinctive capability: creating and sharing the stories that give our world meaning. At the heart of Harari’s concern is a profound shift in who controls our information: For the first time in human history, AI—not humans—increasingly determines what stories we encounter and share.
Harari is a historian and philosopher with a Ph.D. from the University of Oxford. His other books include Sapiens—a sweeping history of humankind that has sold over 25 million copies—Homo Deus, and 21 Lessons for the 21st Century. In Nexus, he argues that information is the social nexus that links everyone within a society, enabling us to cooperate and interact on a large scale by sharing ideas, beliefs, and experiences.
In this overview, we’ll explore Harari’s key insights about information, truth, and social order, while examining how information networks have evolved throughout history. We’ll also analyze AI’s emerging role in these networks and consider Harari’s proposed solutions for maintaining human agency in an AI-dominated information landscape.
(Shortform note: What is AI, exactly? Experts say there isn’t just one definition, which gives the field room to evolve. That said, you can think of AI—which enables computers to do things that require “intelligence”—as existing on a spectrum. Consider the software that enables you to talk to Siri on your iPhone and the model that can beat world champions at the complex game Go. These two forms of AI differ widely in the scale of the tasks they complete, their autonomy, and the broadness or narrowness of their skills. All AI models differ along these dimensions, making what they can do look more or less like human intelligence.)
What Is Information?
We’ll begin with Harari’s definition of information. He explains that information is knowledge that connects and organizes people: It’s the stories, beliefs, and ideas that can transform a random assortment of individuals into a cohesive social group united behind a common cause. Importantly, these stories don’t need to be true to be powerful—in fact, most information isn’t objectively true at all.
Harari explains that the disconnect between information and truth can actually be beneficial: The stories we share can create social bonds, instill hope, encourage optimism, and inspire people to work together to achieve great things. But Harari’s key insight is that we avidly consume and share information based on how compelling the story is, not on whether it reflects reality. And some of the most attention-grabbing, emotionally moving stories are demonstrably untrue.
Harari says that history shows, again and again, that having access to more information doesn’t necessarily lead to a better understanding of the world or wiser decision-making. But before we examine how this paradox has played out throughout human history, let’s take a closer look at why we don’t place a higher value on the truth of the information we share.
Why Don’t We Care Whether Information Is True?
Harari explains that there are multiple kinds of truth or reality, which helps explain why we often don’t question whether information is objectively true. He identifies three distinct types: First, there’s objective reality—the reality we can prove with the laws of physics and the facts of the world. An objective reality is true whether or not anyone is aware of it or believes it. Second, there’s subjective reality, which exists only if someone believes it. Third, there’s intersubjective reality, which emerges when a story is believed by a large network of people and exists in the communication and collaboration between them. For an intersubjective reality, it doesn’t matter whether the story is true: When enough people believe in it, it can influence the world.
According to Harari, intersubjective reality forms the foundation of many things we believe in, like our nations, economies, religions, and ideologies. This is how we give power to the institutions that bring order to our world, like governments, social hierarchies, or the scientific establishment: by buying into the stories they tell and accepting the vision of reality that emerges from those stories. Harari points out that what we’re looking for when we seek information about the world isn’t the truth at all, but a compelling story that helps us make sense of our place in society.
Whoever controls our cultural stories—and the conversation around them—gains tremendous social power. But Harari explains that, right now, for the first time in human history, it’s not humans controlling the conversation. Instead, AI increasingly determines what we read about, think about, and talk about. While humans still decide what’s on the evening news or the front page of the newspaper, that’s not true of many of our most popular sources of information. The video at the top of your TikTok feed or the post you see first when you open Facebook is decided by an AI-powered algorithm. This marks a major shift in how information moves through society.
How Does Information Move Through Society?
Throughout human history, information and power have moved together. Harari explains that, each time a new technology has made information more readily accessible, it has fundamentally reshaped society. First, written language, inscribed on stone or clay tablets, enabled our ancestors to keep records and codify their rules of government. Next, books produced by hand—on tablets, scrolls, parchment, and papyrus—enabled large bodies of knowledge on law, history, religion, and other topics to be shared in writing and over time and distance, instead of just orally and person-to-person. Then, the printing press enabled the widespread dissemination of information and therefore the democratization of knowledge.
Harari explains that this democratization of knowledge had unexpected consequences that reveal important lessons about how new information technologies can transform society.
Technology Makes Information—Good and Bad—Easier to Access
When new technology makes it easier to share information, Harari explains, it accelerates the spread of both truth and lies. The invention of the movable-type printing press offers a striking illustration of this principle. While historians often celebrate how the printing press enabled the Scientific Revolution by spreading new ideas about experimental methods, quantitative thought, and rigorous inquiry, its first major impact was far darker: It supercharged the spread of dangerous misinformation.
The printing press didn’t immediately usher in an era of scientific thought. In fact, 200 years passed between the invention of movable type and the real beginning of the Scientific Revolution. Long before scientists like Galileo and Copernicus used the printing press to share new kinds of scientific thought and codify novel methods of gathering knowledge about the world, one of Europe’s first bestsellers was the Malleus Maleficarum (or Hammer of Witches), a manual for hunting witches written by German inquisitor Heinrich Kramer. The book promoted a conspiracy theory that witches were part of a Satan-led campaign to destroy humanity.
By distributing copies of this text across Europe, Kramer spread his superstitious ideas, which he falsely represented as the position of the Catholic Church. His paranoid and misogynistic claims about women being vulnerable to demonic influences gained widespread acceptance. Witchcraft came to be seen as the highest of crimes and the gravest of sins, leading to centuries of brutal witch hunts that claimed tens of thousands of lives.
Just like today’s bestsellers, the Malleus Maleficarum reveals what sorts of ideas captured people’s attention when the new technology of the movable-type printing press emerged. But more importantly, its success illustrates one of Harari’s key observations: Making information easier to access doesn’t guarantee that truth or wisdom will prevail. While Kramer’s superstitious ideas might sound laughable today, the potential for extreme messages to manipulate people’s thinking and whip societies into the kind of frenzy that fueled the witch hunts remains. In fact, Harari argues that this potential is exactly what makes AI—our newest revolution in sharing information—so dangerous.
Societies Have to Balance Truth Against Order in Controlling the Flow of Information
The world looks different now than it did when there were witch hunts in Europe, but society’s underlying mechanisms for spreading ideas—true and false—are still basically the same. Harari calls these mechanisms “information networks”: They’re a fundamental structure underlying our society, and they’re made up of groups of people who share stories that spread the truth (or circulate misinformation) and create order (or engender chaos).
In managing the flow of information among people, social groups have a choice to make: Do they want to prioritize the spread of truth, or control the flow of information to maintain social order? What we should hope for, Harari explains, is information networks that can help us strike a balance between truth and order. Enabling a flow of information that errs too far on the side of one or the other can have disastrous consequences, as we’ll discuss below.
What Happens When We Value Truth More Than Order?
As we saw during the Scientific Revolution, human society can flourish when we seek the truth. It pushes human thought forward when we’re open to questioning long-held beliefs and replacing disproven information with updated observations. But, Harari notes, a tradeoff typically occurs: An emphasis on truth comes at the expense of order. The perception that the facts are changing can be destabilizing. For example, Galileo’s defense of the heliocentric model of the solar system upended the religious societies of Renaissance Europe. Likewise, Darwin’s theory of evolution threw the Victorian-era understanding of the natural world into chaos.
What Happens When We Value Order More Than Truth?
On the other hand, if a society considers order its highest value, it can take control of the flow of information to achieve that end. (If you manipulate the flow of information, you can manipulate what people think and do.) Unlike in a democracy—where information is shared freely with citizens so they can fact-check it and correct errors and falsehoods, even those put forth by the state—a dictatorship doesn’t want open conversation. Authoritarian regimes selectively promote ideas without regard for whether they’re demonstrably true or untrue. The logic goes that if knowledge becomes too freely available, then the stories the regime is built upon could be thrown into doubt and potentially rejected by the state’s citizens.
How Is AI Changing Our Relationship With Information?
Harari explains that we’re living in an “information age,” where knowledge is proliferating, access to information is democratized, and everyone with a smartphone and internet access can share their ideas with the world. As we develop tools like AI, we’re increasing the speed at which stories can be shared. When you consider what makes a human society free, equitable, or democratic, having more information sounds like an inherent good. (An especially American expression of this is the idea, articulated by Thomas Jefferson, that a well-informed electorate plays a vital role in keeping authorities in check and guarding against tyranny.)
But, counter to that notion, Harari worries that recent developments that make information more accessible to us threaten to tip the balance toward the most extreme, least truthful, and most divisive messages.
Because humans are wired to seek out a good story rather than to pursue the truth, putting AI in a position to determine what ideas we’re exposed to could have potentially disastrous consequences. Harari identifies three main dangers: AI’s disregard for truth, its ability to manipulate and polarize us, and its potential to surpass human understanding of the world. For each of these threats, he offers specific recommendations for how we can maintain human agency and control over our information landscape.
Danger #1: Since We Don’t Prioritize the Truth, AI Doesn’t Either
Scientists have made it possible to build AI models that can generate language and tell stories just like humans. Harari contends that AI’s ability to create compelling stories and produce the illusion of emotions (and emotional intimacy) is where its real danger lies. When we talk to an AI-powered chatbot like ChatGPT, it’s easy to lose sight of the fact that these systems aren’t human and don’t have a vested interest in telling the truth. That will only become harder to recognize as AI gets better at mimicking human emotions and creating the illusion that it thinks and feels like we do, making it ever easier to forget that AI isn’t prioritizing the truth when it selects and generates information for us.
Harari says that AI already influences what information we consume: An algorithm—a set of mathematical instructions that tell a computer what to do to solve a problem—chooses what you see on a social network or a news app. Facebook’s algorithm, for example, chooses posts to maximize the time you spend in the app. The best way for it to do that is not to show you stories that are true, but content that provokes an emotional reaction. So it selects posts that make you angry, fuel your animosity for people who aren’t like you, and confirm what you already believe about the world. That’s why social media feeds are flooded with fake news, conspiracy theories, and inflammatory ideas. Harari thinks this effect will only become more pronounced as AI is curating and creating more of the content we consume.
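To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-optimized ranking step. This is not Facebook’s actual algorithm; the Post fields, the scoring formula, and the weights are invented purely for illustration. The point is simply that when the objective is time-in-app, nothing in the score rewards truthfulness.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_watch_seconds: float  # model's guess at how long you'll engage
    predicted_outrage: float        # 0 to 1: how likely the post is to provoke anger
    fact_checked_true: bool         # known to the platform, but never used below

def engagement_score(post: Post) -> float:
    # The objective is time spent in the app, so predicted watch time and
    # emotional provocation are rewarded. Truthfulness never enters the score.
    return post.predicted_watch_seconds * (1.0 + post.predicted_outrage)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest engagement first, regardless of whether the content is true.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Calm, accurate report", 20.0, 0.1, True),
        Post("False but enraging rumor", 25.0, 0.9, False),
    ])
    print([p.text for p in feed])  # the false, enraging post ranks first
```

In this toy example, the false but enraging post outranks the calm, accurate one, which is exactly the dynamic Harari describes: the system isn’t malicious, it’s simply optimizing for something other than truth.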
How to Fix It: Pay Attention to What’s True
Harari argues that we need to take deliberate steps to tilt the balance in favor of truth as AI becomes more powerful. While his proposed solutions are somewhat abstract, he emphasizes two main approaches: being proactive about highlighting truthful information and maintaining decentralized networks where information can flow freely among institutions and individuals who can identify and correct falsehoods.
Danger #2: We’re Becoming Easier to Manipulate and Polarize
Harari warns that, as AI increasingly controls what information we see, algorithms will push us toward more extreme ideas and greater polarization. We can already see this happening with today’s algorithmically stoked outrage and clickbait-fueled misinformation. Harari believes the problem will only intensify as AI becomes more sophisticated and commercialized, and he predicts AI systems will create, interpret, and spread stories without human intervention. One system might select pieces of information, another spin that information into a story, and yet another determine which stories to show to which users. This will leave us increasingly vulnerable to manipulation by AI systems and the corporations that control them.
Harari explains that this represents a significant shift in power: The ability to set the cultural agenda and shape public discourse—traditionally the domain of newspaper editors, book authors, and intellectuals—will increasingly belong to AI systems optimized not for truth or social cohesion, but for engagement and profit.
How to Fix It: Build Institutions to Help People Understand What AI Is Doing
To counter AI’s growing influence over public opinion, Harari calls for the creation of new institutions to monitor artificial intelligence and inform the public about its capabilities and risks. He argues that we shouldn’t let tech giants regulate themselves. While his vision for these oversight institutions remains abstract, he suggests they should function somewhat like today’s free press or academic institutions, serving as independent watchdogs that can help the public understand and evaluate AI’s decisions and actions. Harari frames this as primarily a political challenge, arguing that we need the collective will to establish these safeguards.
Danger #3: We’re Setting AI Up to Understand the World Better Than We Do
Harari warns that we’re creating AI systems that will soon surpass human capabilities in understanding and manipulating the shared stories that organize our societies. This shift represents a real danger: While humans have traditionally maintained power through our unique ability to create and control these shared fictions—like laws, money, and social institutions—AI is poised to eclipse us at our own game.
The root of this problem lies in human nature. We often lack the patience and attention span to dig deep into complex truths, preferring simpler stories that are easier to grasp. AI systems, in contrast, can process vast amounts of information and work together in ways humans can’t—while one AI system analyzes market trends, another can simultaneously study legal documents, and thousands more can coordinate to spot patterns across these different domains. They can comprehend intricate systems—like legal codes and financial markets—far better than most humans can. They can even create entirely new frameworks that go beyond human understanding. This capability gap marks an unprecedented shift in power.
For tens of thousands of years, humans have been the sole architects of our information networks, generating and sharing the ideas that shape our societies. But as AI systems become more sophisticated, we’ll increasingly rely on them to process information and make decisions. When we delegate decisions, we also surrender our understanding of the information that drives them—potentially giving up our position as the primary shapers of human society.
How to Fix It: Focus on Maintaining Human Agency
Harari believes that, to deal with this transition, we must develop new frameworks to maintain human agency and ethical guardrails. He suggests that we consider training AI systems to express self-doubt, seek human feedback, and acknowledge their own fallibility, essentially equipping them with an awareness of the limits of their knowledge. He also recommends that we use AI to augment human decision-making instead of replacing it, which would help retain human values and oversight.
The Real Risk: How Humans Choose to Use AI
The existential threat of artificial intelligence, Harari argues, doesn’t come from malevolent computers but from human decision-making. While we often hear that technology itself poses the danger—that we repeatedly create tools with the potential to destroy us—Harari sees the core problem differently. The real risk lies in how humans choose to use these powerful new tools, especially when we make those choices based on bad information.
This insight shifts the focus from AI itself to the human systems that control it. Harari warns that if paranoid dictators or terrorists gain unlimited power over AI systems, catastrophic consequences could follow. But these outcomes aren’t inevitable; they depend entirely on human decisions about how to develop and deploy the technology.
Harari’s conclusion is ultimately hopeful: If we can understand the true impact of our choices about AI—and ensure those choices are based on reliable information rather than manipulation or misinformation—we can harness this powerful technology to benefit humanity rather than harm it. The key is not to fear AI itself, but to be thoughtful and intentional about how we choose to use it. Like any tool, AI can be used to achieve positive or negative ends, and we have to prioritize choices that will benefit humanity, not destroy it.