
How does artificial intelligence shape the information we consume? What steps can we take to ensure AI becomes a force for good rather than harm?

In his book Nexus, Yuval Noah Harari explores the ethical use of AI and its potential impact on humanity. The real danger, he contends, isn’t in the technology itself but in how humans choose to use it, particularly when decisions are based on unreliable information or manipulation.

Keep reading to explore how we can harness this powerful technology for humanity’s benefit while avoiding its potential pitfalls.

The Ethical Use of AI

Harari concludes his book with a call for the ethical use of AI. The existential threat of artificial intelligence, Harari argues, doesn’t come from malevolent computers but from human decision-making. While we often hear that technology itself poses the danger—that we repeatedly create tools with the potential to destroy us—Harari sees the core problem differently. The real risk lies in how humans choose to use these powerful new tools, especially when we make those choices based on bad information.

This insight shifts the focus from AI itself to the human systems that control it. Harari warns that, if paranoid dictators or terrorists gain unlimited power over AI systems, catastrophic consequences could follow. But these outcomes aren’t inevitable; they depend entirely on human decisions about how to develop and deploy the technology. 

Harari’s conclusion is ultimately hopeful: If we can understand the true impact of our choices about AI, and ensure those choices are based on reliable information rather than manipulation or misinformation, we can harness this powerful technology to benefit humanity rather than harm it. The key is not to fear AI itself, but to be thoughtful and intentional about how we choose to use it. Like any tool, AI can serve positive or negative ends, and we must prioritize choices that benefit humanity rather than destroy it.

How Can We Use AI Wisely?

Other experts join Harari in seeing AI as a powerful tool that can be used or misused. Geoffrey Hinton, whose groundbreaking work on neural networks and deep learning algorithms earned him the 2024 Nobel Prize in Physics, believes advanced AI systems could pose ethical dilemmas, regulatory challenges, and inherent risks. He lists a number of possible dangers: AI could surpass human intelligence, develop its own goals, learn unexpected behaviors from the vast datasets it analyzes, or be misused for malicious purposes, like manipulating elections or fighting wars.

Hinton advocates international regulation to address these risks. He suggests curbing the development of potentially dangerous AI systems in the same way we ban chemical weapons. Hinton’s departure from Google and his warnings about AI echo the misgivings of other scientists who have expressed regret or apprehension about the potential misuse of their creations, such as Albert Einstein and J. Robert Oppenheimer, whose work contributed to the creation of the atomic bomb.

But fortunately, unlike nuclear weapons (which atomic scientists agree have a singular, destructive purpose), generative AI can be beneficial if used wisely. Hinton believes the timeline for artificial general intelligence (AGI) may be much shorter than previously thought; it might emerge in decades rather than centuries. This means governments, tech companies, and the public must act now to create regulations, ethical guidelines, and oversight mechanisms that can help us harness AI’s benefits while protecting against its risks.

Exercise: Analyze Your Information Diet

Our information consumption habits shape how we understand the world. Use these questions to examine how AI might be influencing your perspective through the content you consume.

  1. Think about where you get most of your news and information. List your top three sources (social media platforms, news sites, and so on). For each one, does an AI algorithm determine what content you see?
  2. Pick one controversial topic you’ve formed a strong opinion about recently. Where did you encounter most of the information that shaped your view? How might AI-driven content curation have influenced what information you saw—or didn’t see—about this topic?
  3. What specific steps could you take to diversify your information sources and reduce AI’s influence over what you read and believe?

The Ethical Use of AI Can Protect Us From AI’s Dangers (Harari)

Elizabeth Whitworth

Elizabeth has a lifelong love of books. She devours nonfiction, especially in the areas of history, theology, and philosophy. A switch to audiobooks has kindled her enjoyment of well-narrated fiction, particularly Victorian and early 20th-century works. She appreciates idea-driven books—and a classic murder mystery now and then. Elizabeth has a blog and is writing a book about the beginning and the end of suffering.
