
Can AI algorithms really control what stories and ideas shape our beliefs? What happens when machines—not humans—become the primary creators and curators of our cultural narratives?
In his book Nexus, Yuval Noah Harari explores how AI manipulation is fundamentally changing who controls public discourse. The shift from human editors to profit-driven AI systems marks a crucial turning point in how information spreads and shapes society.
Keep reading to discover what this means for our future and the concrete steps we can take to protect ourselves from unchecked AI influence.
AI Manipulation
Harari warns that as AI increasingly controls what information we see, algorithms will push us toward more extreme ideas and greater polarization. We can already see this happening with today's algorithmically stoked outrage and clickbait-fueled misinformation. Harari believes the problem will only intensify as AI becomes more sophisticated and commercialized, and he predicts AI systems will create, interpret, and spread stories without human intervention. One system might select pieces of information, another spin that information into a story, and yet another determine which stories to show to which users. This will leave us increasingly vulnerable to manipulation by AI systems and by the corporations that control them.
(Shortform note: Farhad Manjoo, author of True Enough, has long argued that algorithms increase polarization by promoting an echo chamber effect and eroding trust in documentary evidence. Algorithms enable the spread of misinformation and manipulative narratives by allowing people to selectively consume information that aligns with their existing beliefs and biases. The abundance of information sources online has not led to more rational, fact-based discourse. Instead, documentary proof seems to have lost its power as people filter evidence through their own biases and conspiracy theorists can cherry-pick information to fit their preferred narratives.)
Harari explains that this represents a significant shift in power: The ability to set the cultural agenda and shape public discourse—traditionally the domain of newspaper editors, book authors, and intellectuals—will increasingly belong to AI systems optimized not for truth or social cohesion, but for engagement and profit.
(Shortform note: From the 1950s through the 1980s, newspaper editors and television news anchors like Walter Cronkite wielded enormous influence over public discourse. This era of concentrated media influence—when editors shaped national conversations—stands in contrast to today’s fragmented media landscape. Now, with countless online news sources, social media platforms, and AI-curated content feeds, no single editorial voice carries the same weight. Instead, we have a cacophony of voices that often drowns out traditional journalism’s authority. Aaron Sorkin’s show The Newsroom (2012-2014) nostalgically depicts this loss, following a fictional news anchor’s struggle to reclaim journalism’s role as a trusted source of truth.)
How to Fix It: Build Institutions to Help People Understand What AI Is Doing
To counter AI’s growing influence over public opinion, Harari calls for the creation of new institutions to monitor artificial intelligence and inform the public about its capabilities and risks. He argues that we shouldn’t let tech giants regulate themselves. While his vision for these oversight institutions remains abstract, he suggests they should function somewhat like today’s free press or academic institutions, serving as independent watchdogs that can help the public understand and evaluate AI’s decisions and actions. Harari frames this as primarily a political challenge, arguing that we need the collective will to establish these safeguards.
(Shortform note: Harari’s call for oversight coincides with the 2025 arrival of DeepSeek, a Chinese startup that built two AI models that rival those from the best American labs, but which were trained with innovative techniques to make them more efficient in terms of both cost and computing power. This development renewed concerns about safety and fueled urgent calls for regulation and transparent oversight, given that there are no international laws restricting AI development. Many believe it’s impossible to halt AI’s progress, but that hasn’t stopped people from trying, as when 30,000 signatories—including Harari—backed a 2023 open letter calling for a moratorium on training powerful AI systems.)