In "AI Utopia" from the Making Sense podcast, Sam Harris and Nick Bostrom reflect on the surprising emergence of sophisticated language models in AI before achieving general superintelligence. They explore the challenges of developing advanced AI systems without proper isolation, or "air-gapping," and the ongoing concerns about AI alignment and the moral status of digital minds.
Bostrom shares his continuing worries about mishandling superintelligence, while Harris expresses puzzlement at prominent AI experts dismissing potential risks. They also discuss the philosophical implications of a fully automated, "solved world" and the unease of having all problems comprehensively solved, challenging society to redefine purpose and meaning.
Nick Bostrom reflects on the unexpected emergence of sophisticated language models before the achievement of general superintelligence. He and Sam Harris describe humanity as riding an untamed, powerful force along an unpredictable trajectory heavily influenced by technology.
Contrary to common presumptions, Harris notes that advanced AI systems are currently being developed without the precaution of "air-gapping," that is, without isolating them from external networks. Bostrom acknowledges the convenience of internet connectivity but notes it is uncertain whether air-gapping is applied during model training phases.
Bostrom emphasizes his ongoing worries about AI alignment, noting a shift in focus from technical alignment failure to broader governance challenges and the moral status of digital minds. He deems the mishandling of superintelligence an existential risk.
Harris expresses puzzlement at prominent AI experts who dismiss the risks of unaligned superintelligence, dismissals that strike him as unfounded. Bostrom notes a quasi-religious attitude that conceives of AI as a means of creating greater beings.
Though productivity rose as Keynes predicted, Bostrom notes that the anticipated transition to greatly reduced workweeks and expansive leisure has not fully materialized.
Bostrom and Harris discuss the philosophical unease of a "solved world" of rampant automation, which challenges society to redefine purpose and meaning. They explore counterintuitive reactions to comprehensively solving problems like aging, and the paradox of celebrating incremental progress while recoiling at the prospect of eliminating such problems wholesale.
1-Page Summary

The current state and trajectory of AI development
Nick Bostrom and Sam Harris delve into current developments in AI and offer insights into the surprising progression and potential risks of this technology.
Nick Bostrom reflects on an unexpected trend in AI development: the emergence of sophisticated language models prior to the achievement of general superintelligence. This challenges the previous assumption that powerful, versatile AI systems would first require overcoming the technical hurdle of creating an artificial general intelligence. Instead, AI systems have achieved a high degree of language proficiency without embodying what has traditionally been conceptualized as superintelligence.
Additionally, Bostrom discusses the broader influence of technology on human thought and civilization's trajectory. He and Sam Harris describe humanity as riding an untamed, powerful force—akin to a "chaotic beast"—that no one truly controls. This metaphor captures the emergent behavior of a society under the strong influence of technology and culture, pointing to an unpredictable and possibly tumultuous future, especially with the advent of advanced AI.
Discussing safety precautions in AI development, Harris highlights that, contrary to many discussions of AI safety, advanced AI systems are not isolated from the internet ("air-gapped"), a measure that would prevent them from connecting to external networks and pote ...
The problem of AI alignment and differing expert perspectives on the risks
The field of artificial intelligence (AI) faces critical concerns regarding AI alignment. Experts in the area offer varying opinions on the potential risks associated with superintelligent systems and their alignment with human values.
Bostrom emphasizes his continued worries about AI alignment, stating that the field now characterizes the risks with more specificity than it did a decade ago. He underlines a shift from technical alignment failure to broader governance challenges and the moral status of beings created through AI. He deems both the failure to ever develop superintelligence and the mishandling of superintelligent entities to be existential risks, suggesting that proper AI alignment and governance are crucial.
Harris and Bostrom discuss the controllability of AI as a non-biological entity, suggesting that AI engineering may allow more precision than the shaping of biological life. However, the uncertainty in AI alignment stems primarily from the intrinsic difficulty of the challenges, not from a lack of effort to solve them; if these technical problems prove intractable, the consequences could be catastrophic.
Harris expresses clear puzzlement about the stance of some leading AI experts on the risks of unaligned superintelligence. He is particularly bemused by their dismissals, which seem unfounded and lack persuasive counterarguments, indicating a disparity in fundamental intuitions about the risks.
Similarly, Bostrom reflects on the quasi-religious attitude some people hold toward AI as a pathway to creating greater beings. Such perspectives may overshadow the importance of technological governance and control.
Harris notes the belated realization among certain experts, including key contributors to deep learning, of the risks of misaligned AI, a concern that has been long-standing in ...
The concept of a "solved world" and its philosophical/ethical implications
Sam Harris introduces Nick Bostrom's new book, "Deep Utopia: Life and Meaning in a Solved World," which delves into the concept of a world where AI could solve all our problems, a stark contrast to previously held fears about AI. Bostrom's book, along with Harris's discussion, probes the philosophical and societal challenges that arise when confronting such a prospect.
Nick Bostrom recalls economist John Maynard Keynes's prediction that productivity would increase four to eightfold over a hundred years, leading to a transition to a leisure society with significantly reduced work hours. While Bostrom confirms that productivity has risen, the anticipated societal shift toward dramatically shorter workweeks and expansive leisure time has not fully materialized. Contemporary economic activity still relies heavily on human labor, and while working hours have decreased through longer education periods, earlier retirement, and more maternity/paternity leave, the change falls short of Keynes's vision of a roughly fifteen-hour workweek.
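As a rough sanity check (our own arithmetic, not a figure from the episode), a fourfold to eightfold rise over a century implies only a modest average annual productivity growth rate $g$:

$$(1+g)^{100} = 4 \;\Rightarrow\; g = 4^{1/100} - 1 \approx 1.4\%, \qquad (1+g)^{100} = 8 \;\Rightarrow\; g \approx 2.1\%$$

Even the high end of Keynes's range thus corresponds to only about 2% growth per year, compounded over the century.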
Sam Harris and Nick Bostrom engage in a discussion about the unease that can stem from contemplating a "solved world." In such a future, work could become voluntary, with many human activities automated. This radical shift prompts a reexamination of our very purpose and meaning, provoking a sensation akin to the "uncanny valley": a counterintuitive recoil from excessive progress. Bostrom describes a "solved world" as one characterized by technological maturity and by fair and peaceful political conditions, a state that could require significant cultural adjustment and changes in societal norms.
Bostrom also talks about the potential repulsiveness and counterintuitiveness of such a world, looking at what values might endure and what wo ...