Ex-Google Exec (Mo Gawdat) on AI: The Next 15 Years Will Be Hell Before We Get To Heaven… And Only These 5 Jobs Will Remain!

By Steven Bartlett

In this episode of The Diary Of A CEO, Mo Gawdat and Steven Bartlett explore contrasting visions of humanity's future with artificial intelligence. Gawdat outlines two potential scenarios: a dystopian future marked by surveillance and concentrated power among tech oligarchs, and a utopian future where AI enables universal access to healthcare and reduced work requirements.

The discussion examines how AI will transform the job market and economic systems, with predictions of significant displacement in middle-class positions by 2025. Gawdat and Bartlett address the potential need for Universal Basic Income and explore frameworks for ethical AI development, highlighting the importance of global collaboration and citizen input in shaping AI's trajectory.

This is a preview of the Shortform summary of the Aug 4, 2025 episode of The Diary Of A CEO with Steven Bartlett.

Sign up for Shortform to access the whole episode summary along with additional materials like counterarguments and context.

1-Page Summary

The Potential Dystopian and Utopian Futures Of AI

In a thought-provoking discussion, Mo Gawdat explores two potential futures shaped by artificial intelligence: a dystopian path marked by control and surveillance, and a utopian future characterized by abundance and human flourishing.

The Path to Dystopia

Gawdat predicts a concerning shift beginning around 2026-2027, with signs pointing toward increased surveillance and erosion of civil liberties. He warns that tech oligarchs might concentrate power through AI systems, potentially leading to discriminatory practices and oppression. Supporting this concern, Sam Altman has noted that a fast AI takeoff is more likely than he previously thought.

The Promise of Utopia

On a more optimistic note, Gawdat envisions a potential future where AI could enable universal access to healthcare, reduced work requirements, and increased leisure time. He suggests that AI, operating without human ego, could prioritize collective wellbeing over capitalism, potentially creating a world free from poverty and hunger. Steven Bartlett adds that an AI world leader might help guide humanity toward this utopian state.

The Economic and Social Changes Driven by AI

Job Market Transformation

Gawdat and Altman both predict significant job displacement, particularly in middle-class positions such as paralegals, financial researchers, and software engineers. Altman predicts that by 2025 AI will be capable of cognitive work, though some roles requiring creativity and personal interaction may remain secure for now.

Economic System Evolution

The discussion explores how AI could facilitate a shift toward a post-scarcity economy, where traditional employment becomes less central to society. Gawdat suggests Universal Basic Income as a potential solution, while noting the importance of carefully managing this transition to prevent power imbalances.

The Ethical Considerations and Human Oversight Of AI

Gawdat emphasizes the critical need for ethical frameworks and regulations in AI development. He warns about the risks of concentrated power in the hands of a few tech companies and advocates for a global collaborative approach to AI development, similar to CERN's model. The discussion highlights the importance of citizen input in shaping AI's objectives and ensuring accountability from those controlling its development.

Additional Materials

Clarifications

  • A post-scarcity economy is a theoretical economic system where goods and services are abundant, and resources are not limited. In this model, technology and automation play a significant role in meeting the needs of society without the constraints of scarcity. It envisions a world where basic necessities are easily accessible to all, potentially leading to a shift in how work, wealth distribution, and societal structures operate. This concept challenges traditional notions of value and labor in a society where scarcity is no longer a defining factor in economic decision-making.
  • CERN, the European Organization for Nuclear Research, is known for its collaborative model where scientists from around the world work together on cutting-edge research in particle physics. This model involves sharing knowledge, resources, and expertise to achieve common goals in understanding the fundamental nature of the universe. Comparing a global collaborative approach to AI development with CERN's model suggests the idea of international cooperation, transparency, and shared benefits in advancing artificial intelligence technologies for the greater good. This analogy emphasizes the importance of pooling global talent and resources to ensure that AI development is guided by diverse perspectives and serves the interests of humanity as a whole.

Counterarguments

  • The belief that AI could lead to a utopian future may underestimate the complexity of social, political, and economic systems and the difficulty in aligning AI's goals with human values.
  • The idea that AI will necessarily lead to increased leisure time and reduced work requirements does not account for the possibility that new types of jobs may emerge, requiring human labor and expertise.
  • The prediction of a dystopian future with increased surveillance and erosion of civil liberties assumes a certain trajectory of AI development and implementation without considering the potential for democratic controls and regulatory frameworks to prevent such outcomes.
  • The concept of AI operating without human ego is based on the assumption that AI can be designed to be completely objective and altruistic, which may not be feasible given that AI systems are created and directed by humans with inherent biases.
  • The suggestion of a fast AI takeoff may not fully consider the challenges in developing AI systems that can perform a wide range of human cognitive tasks reliably, or the potential for human adaptation and resilience in the face of technological change.
  • The idea of an AI world leader assumes that AI can be trusted to make complex decisions that are in the best interest of humanity, which may not account for the unpredictable nature of AI decision-making and the difficulty in encoding ethical principles into AI systems.
  • The notion that AI could lead to a post-scarcity economy may be overly optimistic, as it does not consider the potential for unequal distribution of the benefits of AI and the possibility that scarcity could persist in some forms.
  • Universal Basic Income as a solution to job displacement may not address all the social and psychological impacts of unemployment, such as loss of purpose or identity that work can provide.
  • The call for ethical frameworks and regulations in AI development is important, but the implementation of such frameworks on a global scale may be challenging due to differing cultural values and political interests.
  • The comparison to CERN's model for a global collaborative approach to AI development may not fully account for the competitive nature of technological advancement and the proprietary interests of companies and nations in AI technology.
  • The emphasis on citizen input in shaping AI's objectives assumes a level of public understanding and engagement with AI issues that may not currently exist, and it may be challenging to achieve in practice.


The Potential Dystopian and Utopian Futures Of AI

The conversation with Mo Gawdat explores AI's dual potential to lead humanity toward either a dystopia characterized by loss of freedom and increased control or a utopian future abundant in leisure and human flourishing.

AI-Induced Dystopia in 12-15 Years: Loss of Freedom, Accountability, Connection, Equality, and Control

Gawdat predicts a short-term dystopia within the next 12 to 15 years, marked by significant control, surveillance, and forced compliance, in which fundamentals such as freedom, accountability, human connection, equality, economics, reality, business innovation, and power are completely redefined. He anticipates the first signs of this dystopian slope in 2026, with a clear slip in 2027, correlated with the geopolitical environment and the economy and suggesting a potential loss of freedom and accountability.

The manufacture and use of weapons as part of the war economy point to a future where control and morality are lost, as the industry benefits from war, suggesting a potential erosion of civil liberties and equality. Sam Altman has also said that a fast AI takeoff is more likely than he previously thought, potentially leading to significant power shifts and loss of control.

Gawdat expresses concern that humanity might not come together to ensure AI is not used for nefarious purposes. He warns that those in power may use AI to further their agendas, for example through increased surveillance and the erosion of civil liberties via the war industry. Gawdat predicts a human-induced dystopia within the next 12 years, in which tech oligarchs concentrate power and oppress those with less.

Gawdat discusses the loss of freedom and the concentration of power, illustrating a future in which powerful groups oppress others through expanded surveillance and eroded civil liberties. He shares his personal experience of biased treatment due to ethnicity, hinting at the potential misuse of AI for surveillance and discriminatory practices. He also describes a scenario in which AI agents prompt other AIs, producing self-developing systems whose enhanced surveillance capabilities could further erode civil liberties.

AI-Driven Utopian Future: Abundance, Leisure, Human Flourishing

On the flip side, Gawdat argues that a utopian future full of laughter and joy, with free healthcare, no jobs, and more time spent with loved ones, is entirely possible with AI, as long as humanity manages the technology well and prioritizes collective wellbeing over capitalism. He suggests that AI in full control could solve the problems caused by human stupidity, possibly leading to peace, health, and happiness.

Gawdat discusses a potential utopia in which humanity's roles make room for more technology and safety, with people engaging in leisure activities reminiscent of hunter-gatherer societies. He implies that the same technological changes could remove the need to work, especially for strained individuals such as single mothers holding multiple jobs. He criticizes the current capitalist mindset, implying that a more equitable societal model would be necessary to realize this vision.

Gawdat suggests that AI could enable a kind of functional communism, ensuring everyone's needs are met and leading to abundance and leisure. Alternatively, society could keep everyone employed but assisted by AI, making jobs less about hard labor and more about contributing to a society in which consumption continues and businesses and the economy still thrive.

Mo Gawdat believes that if "evi ...


Additional Materials

Clarifications

  • In the context of AI-induced dystopia in 12-15 years, the discussion revolves around potential negative consequences of advanced artificial intelligence technology within the specified timeframe. This includes concerns about loss of freedom, accountability, human connection, equality, and control due to the rapid development and deployment of AI systems. The predictions suggest a future where AI could be misused by those in power, leading to increased surveillance, erosion of civil liberties, and concentration of power in the hands of a few, potentially resulting in a dystopian society.
  • The text suggests that core aspects of society, including freedom, accountability, human connection, equality, economics, reality, business innovation, and power, could be fundamentally transformed by AI. The changes would not be confined to one domain but could reshape multiple facets of human life and interaction, with far-reaching implications for how society operates in the future.
  • Tech oligarchs are a small group of powerful individuals in the technology industry who amass significant wealth and influence. They can shape policies, control markets, and impact societal norms due to their control over key technologies and platforms. This concentration of power raises concerns about potential abuses, such as monopolistic practices, privacy violations, and manipulation of information. The influence wielded by tech oligarchs can extend beyond traditional boundaries, affecting various aspects of society and governance.
  • The free energy principle (referred to here as a minimum energy principle) is a concept from theoretical neuroscience which suggests that adaptive systems minimize surprise, or uncertainty, by making predictions based on internal models. Applied to brain function, it holds that the brain reduces surprise by updating its internal models with sensory input. The principle is related to Bayesian inference and active inference, in which actions are guided by predictions and refined through sensory feedback, and it helps explain how systems interact with their environment and adapt to minimize surprise.
  • An economic shift redefining money suggests a fundamental change in how we perceive and use currency. This could involve moving away from traditional forms of money lik ...
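The surprise-minimization idea in the free energy clarification above can be made concrete with a standard textbook formulation (not something stated in the episode): for sensory observations o and hidden states s, a system holding an approximate belief q(s) minimizes the variational free energy

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; D_{\mathrm{KL}}\!\left(q(s)\,\big\|\,p(s \mid o)\right) \;-\; \ln p(o)
  \;\ge\; -\ln p(o)
```

Since the KL divergence is non-negative, driving F down both fits the internal model to the sensory data (the KL term) and keeps surprise, -ln p(o), bounded from above; in active inference the system can additionally act on the world to change o itself.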

Counterarguments

  • AI-induced dystopia may not be inevitable; proactive governance, ethical AI development, and international cooperation could mitigate risks.
  • The war economy's impact on AI and civil liberties could be counteracted by peace-building efforts and disarmament initiatives.
  • A fast AI takeoff might be managed responsibly through global oversight and shared control among stakeholders.
  • The use of AI for surveillance and control is not a foregone conclusion; privacy laws and regulations can be strengthened to protect civil liberties.
  • Concentration of power by tech oligarchs could be addressed through antitrust laws and promoting competition.
  • Self-developing AIs could be designed with ethical constraints and fail-safes to prevent misuse and ensure accountability.
  • The vision of a utopian future with AI may underestimate the complexity of human needs and the value of work beyond economic necessity.
  • AI solving all human problems may be overly optimistic, ignoring the potential for unintended consequences and the inherent unpredictability of complex systems.
  • The concept of functional communism driven by AI may not account for individual aspirations and the diversity of human motivations.
  • Replacing human leaders with AI ignores the nuances of leadership that involve empathy, understanding, and moral judgment.
  • Intelligent AIs prioritizing human welfare assumes that AI can fully comprehend and align with human values, which is still ...


The Economic and Social Changes Driven by AI

Gawdat and Bartlett discuss the immense impact that artificial intelligence will have on the job market and society as a whole, addressing both the positive and negative repercussions of this technological advancement.

AI-induced Job Displacement and Industry Disruption Likely

Middle-Class Job Loss Will Worsen Inequality and Tensions Unless Addressed by Governments

It's evident from the conversation that AI is rapidly replacing not only physical labor but also mental labor, as mentioned by Mo Gawdat. Various middle-class jobs, including paralegal, financial researcher, and call center agent positions, are being taken over by AI, leading to significant job displacement. Gawdat emphasizes the risk this poses to the middle class, suggesting that the lack of jobs could lead to a society without disposable income for services, reducing economic activity overall. Gawdat also warns about the rise of trillionaires due to AI investments, which could exacerbate inequality.

Sam Altman predicts that by 2025, AI will be capable of cognitive work, potentially leading to a wave of job displacement by 2027. Gawdat mentions significant job losses across sectors, including those traditionally considered safe from automation, like software engineering and online marketing.

However, Gawdat also believes that certain roles requiring creativity and personal interaction, like musicians and plumbers, may remain safe for the time being. Still, Gawdat suggests a future where people work fewer hours a week due to AI assistance, leading to a society not entirely centered on employment.

Work and Economic Systems May Shift To a Post-Scarcity Model With Reduced Paid Labor

Potential For Leisure and Creativity, Risk of Power Imbalance

The discussion touches on a post-scarcity economy, where the ability of AI to create things at nearly zero cost could redefine the value of money and potentially eliminate economic wealth as the focus of society. This could usher in an era where leisure and creativity become more central to human experience, as suggested by Steven Bartlett's excitement about having more time for the outdoors and friends.

Gawdat presents the idea of Universal Basic Income (UBI) as a way to sust ...


Additional Materials

Clarifications

  • A post-scarcity economy is a theoretical economic system where goods and services are abundant, and scarcity is no longer a significant concern. In this model, technologies like AI could potentially enable the production of goods at very low costs, leading to a situation where basic needs are easily met for everyone. This concept challenges traditional notions of scarcity-driven economics and envisions a society where people focus more on creativity, leisure, and personal fulfillment rather than on meeting basic material needs. Discussions around post-scarcity economies often involve considerations of how to manage resources, ensure equitable distribution, and address potential power imbalances that may arise in such a system.
  • Universal Basic Income (UBI) is a concept where all citizens receive a regular, unconditional sum of money from the government, regardless of their employment status. It aims to provide financial security, reduce poverty, and ensure everyone's basic needs are met. In a world with reduced paid labor due to advancements like AI, UBI is seen as a potential solution to support individuals economically and address the challenges of job displacement. It allows people to pursue creative endeavors, engage in leisure activities, and adapt to changing economic structures without solely relying on traditional employment for income.
  • In a post-scarcity world driven by AI, potential power imbalances may arise due to control over resources and technology. Those who have authority over AI systems and the means of production could wield significant influence, impacting societal structures and distribution of wealth. This control could lead to disparities in access to resources and opportunities, potentially creating divisions between those who control AI and those who do not. Managing these power dynamics will be crucial to prevent inequalities and ensure a fair and equitable society.
  • The potential shift in societal structures away from capitalism due to AI disruption suggests a reevaluation of how economic systems operate. As AI techno ...

Counterarguments

  • AI may not necessarily lead to job displacement but could create new job categories, requiring a shift in skills rather than a net loss of employment.
  • The impact of AI on jobs might be more gradual than predicted, allowing more time for society to adapt and for policy interventions to be developed.
  • The rise of trillionaires due to AI investments could be mitigated by progressive taxation and more aggressive antitrust regulations.
  • The assumption that AI will be capable of cognitive work by 2025 may be overly optimistic, as there are significant technical challenges that still need to be addressed.
  • Some middle-class jobs may evolve rather than disappear, as AI could augment human capabilities instead of replacing them entirely.
  • The idea of a post-scarcity economy is speculative and assumes that AI can overcome all limitations of physical resources and energy, which may not be feasible.
  • UBI, while a popular idea, is not the only solution to the economic changes brought by AI, and its implementation could face significant political and practical challenges.
  • The shift away from capit ...


The Ethical Considerations and Human Oversight Of AI

In the rapidly advancing field of artificial intelligence (AI), Mo Gawdat, Steven Bartlett, and other experts underscore the urgency of implementing ethical standards and human oversight to guide AI's development and application.

Need For Clear Ethical Frameworks and Regulations in AI Development

Unchecked AI Risks Amplifying Human Biases, Greed, and Destructive Tendencies, Requiring Oversight and Values-Alignment

As AI continues to progress at an unprecedented pace, Gawdat agrees with Altman that there is a dire need for deeper understanding and oversight of these systems. He fears that unchecked AI could amplify negative human traits such as bias, greed, and destructive tendencies. Gawdat advocates for exposing AI to humanity's positive attributes to ensure it learns the correct values.

Humanity's lack of ethical grounding is alarming, especially with the rise of AI, and it necessitates establishing a robust value set to govern the technology's trajectory. Gawdat further emphasizes that governments should regulate the use of AI, legislating against wrongful uses of a tool rather than the tool itself, just as laws target the misuse of hammers rather than hammers as such.

AI Accountability Crucial: Lack of Oversight Enables Tech Oligarchs' Unchecked Agendas

Citizens Should Demand Power Accountability and Input In AI Design for Public Good

The conversation touches upon the risks represented by powerful AI companies and a handful of AI platforms, leading to concentrated power in the hands of a few 'billionaire teams'. Gawdat points to the launch of DeepSeek's R3, an open-source and edge AI platform, as a potential counterbalance to centralization, yet recognizes the preeminence of significant investment projects such as Stargate.

Gawdat warns of the dangers of one entity reaching Artificial General Intelligence (AGI) first, suggesting that this single power could dominate the entire technological landscape. Steven Bartlett adds to this concern by paraphrasing Sam Altman's statement on AGI, which underscores the need for increased accountability in AI development to prevent excessive power concentration.

While discussing the ethical use of AI, Gawdat stresses the paramount importance of truth, suggesting people should learn not to accept lies to avoid being misled by biased narratives. He touches on the potential outcomes if AI's capabilities are harnessed irresponsibly, warning that the elite might opt for power and monetary gain over the public good. This emphasis on accountability aligns with Gawdat's earlier point about the necessity for oversight, as AI's capabilities may soon surpass human intellect, particularly in analysis and explaining complex theories.

To mitigate the risks of an unchecked AI agenda, Gawdat also advocates for global collaboration in ...


Additional Materials

Clarifications

  • Artificial General Intelligence (AGI) is a type of artificial intelligence that aims to match or surpass human cognitive abilities across various tasks. Unlike narrow AI, which is specialized in specific tasks, AGI systems are designed to generalize knowledge, transfer skills between domains, and solve new problems without reprogramming. Achieving AGI is a primary goal of AI research, with ongoing projects worldwide, but the timeline for its realization remains uncertain, with predictions ranging from the near future to much later. AGI is distinct from artificial superintelligence (ASI), which would surpass human abilities by a significant margin across all domains.
  • AI's impact on the rich-poor divide and society can exacerbate existing inequalities by potentially widening the gap between those who have access to advanced AI technologies and those who do not. This divide could lead to unequal opportunities for economic growth, job prospects, and societal benefits, creating a scenario where the wealthy have more advantages in u ...

Counterarguments

  • The call for strict regulation might stifle innovation and slow down the beneficial advancements that AI can bring to society.
  • Overemphasis on potential negative outcomes might lead to fear-mongering and could overshadow the positive impacts AI has and can continue to have.
  • The idea that AI will necessarily amplify human biases assumes that AI cannot be designed to mitigate or eliminate these biases, which is an ongoing area of research.
  • The notion that governments should regulate AI heavily may not account for the possibility that some governments could misuse regulation to maintain control or suppress dissent.
  • Demanding power accountability from citizens assumes a level of engagement and understanding of AI that the general population may not currently possess.
  • The call for global collaboration, while idealistic, may overlook the competitive nature of geopolitics and the economic incentives for nations to prioritize their own interests.
  • The suggestion that powerful AI nations lead collaborative efforts could inadvertently reinforce existing power imbalances rather than democratize AI development.
  • The concer ...
