Google: The AI Company

By Ben Gilbert and David Rosenthal

In this Acquired episode, the hosts explore how Google has evolved into a major player in artificial intelligence, tracing the company's AI journey from Larry Page's early vision through to its current position. The discussion covers Google's strategic moves in AI development, including its recruitment of key researchers, the acquisition of DeepMind, and the development of custom AI hardware through Tensor Processing Units (TPUs).

The episode examines Google's response to recent AI developments, particularly following ChatGPT's emergence, which led to significant organizational changes and new initiatives like Bard and Gemini. The hosts analyze how Google's extensive infrastructure and data resources provide advantages in AI development, while also considering how AI assistants could affect the company's traditional search-based revenue model.

This is a preview of the Shortform summary of the Oct 5, 2025 episode of the Acquired podcast.

1-Page Summary

The Historical Foundations of AI at Google

The history of AI at Google was shaped by Larry Page's early vision, influenced by his father's pioneering work in AI during the 1960s and 70s. During the 2000s, Google strategically concentrated AI talent by hiring prominent researchers like Ilya Sutskever, Geoffrey Hinton, and Sebastian Thrun. This concentration of expertise led to groundbreaking developments in language models and, in 2014, the strategic acquisition of DeepMind for $550 million, which significantly boosted Google's AI capabilities.

Key AI Technologies and Breakthroughs Developed at Google

Google's Brain team revolutionized AI with the development of the Transformer architecture in 2017, which dramatically improved machine translation and laid the foundation for modern large language models. To support these advancing technologies, Google engineer Jonathan Ross initiated the development of Tensor Processing Units (TPUs), custom chips designed specifically for neural network operations. These TPUs gave Google a significant hardware advantage, reducing reliance on external providers and enabling efficient scaling through Google Cloud.

Google's Response to External AI Developments and Competition

Following ChatGPT's unprecedented success, Google CEO Sundar Pichai issued a "code red" in December 2022, leading to the rapid development of Bard and a major reorganization. Under this restructuring, Google merged its Brain and DeepMind teams under Demis Hassabis's leadership and launched the Gemini initiative, aiming to standardize AI development across Google's products and services.

Business and Strategic Implications of AI for Google

According to Ben Gilbert and David Rosenthal, Google's extensive infrastructure provides a unique advantage in AI development. The company's cloud business, with roughly $50 billion in revenue, and its in-house TPU production allow for cost-effective AI deployment at scale. However, they note that AI assistants could potentially disrupt Google's traditional search-based ad revenue model. In response, Google is balancing its core search business with investment in new AI products, leveraging its vast data resources from services like Gmail, Maps, and Android to create personalized AI experiences.


Additional Materials

Counterarguments

  • While Larry Page's vision and his father's influence are noted, it's also true that many other individuals and external factors contributed to the development of AI at Google, suggesting that the vision for AI was not solely or uniquely driven by Page's family background.
  • The strategic concentration of AI talent at Google indeed brought in top researchers, but it could also be argued that this created a competitive job market and potentially contributed to a brain drain from academia or other sectors.
  • The acquisition of DeepMind was a significant boost for Google's AI capabilities, but it could be critiqued that such consolidations may reduce diversity in AI research and concentrate power within a few large tech companies.
  • The Transformer architecture was a major advancement, but it's also worth noting that it requires substantial computational resources, which could limit its accessibility for smaller entities and contribute to environmental concerns.
  • The development of TPUs provided Google with a hardware advantage, but reliance on proprietary hardware could also lock users into Google's ecosystem and potentially stifle competition and innovation in the broader hardware market.
  • Google's response to ChatGPT's success with a "code red" and the rapid development of Bard might be seen as reactive rather than proactive, suggesting that Google may not have been fully prepared for the advancements made by competitors.
  • The merging of Brain and DeepMind teams and the Gemini initiative could streamline AI development at Google, but it could also lead to centralization of control and decision-making, which might stifle creativity and innovation within the teams.
  • Google's infrastructure and in-house TPU production are indeed advantages, but this could also be critiqued as creating a high barrier to entry for competitors, potentially leading to less competition and innovation in the AI field.
  • The potential disruption of Google's ad revenue model by AI assistants is a valid concern, but it could also be argued that Google's diversification into AI might not fully compensate for the loss of revenue if traditional search becomes less relevant.
  • Google's use of data from services like Gmail, Maps, and Android for personalized AI experiences raises privacy concerns, and there could be criticism about the adequacy of user consent and transparency in how data is used for AI development.

Actionables

  • Explore your family history to find inspiration for your career or hobbies by researching the professions and interests of your relatives, as Larry Page was influenced by his father's work. You might discover a field or subject that resonates with you, which could guide your learning or career choices. For example, if you find out a relative was involved in early computer science, you could start exploring coding through free online resources.
  • Stay informed about emerging technologies to identify investment or learning opportunities by following news on acquisitions and breakthroughs in industries that interest you, similar to how Google's acquisition of DeepMind signaled a significant advancement in AI. This could mean subscribing to newsletters, attending webinars, or participating in community discussions related to these technologies, which could help you spot trends and opportunities early on.
  • Leverage existing resources to learn new skills by using platforms you already engage with, akin to how Google uses data from Gmail, Maps, and Android. For instance, if you use a cloud storage service, explore its additional features for organizing and automating tasks, or if you're an Android user, delve into the customization options to streamline your mobile experience and increase productivity.

The Historical Foundations of AI at Google

The podcast discussion reveals the deep history of AI at Google, driven by the vision of its founders and a strategic concentration of AI talent that laid the foundations for significant breakthroughs in the field.

Google's Founders and Employees Were Interested In AI

Larry Page's Father, an AI Pioneer, Influenced Google's AI Vision

Larry Page, one of Google's co-founders, envisioned Google as fundamentally an artificial intelligence company from the very beginning. His vision was influenced by his father, a computer science professor specializing in AI at Michigan State University. Page's father had conducted AI research during the field's first wave in the 1960s and 70s, when it was an unfashionable corner of computer science, and this had a profound impact on Larry's outlook.

Google Hired Top AI Researchers In the 2000s, Concentrating AI Talent

David Rosenthal and Ben Gilbert discuss how Google concentrated AI talent by employing almost every notable person in AI during the 2000s. This included hiring luminaries such as Ilya Sutskever, Geoffrey Hinton, and Alex Krizhevsky, not to mention other prominent figures like Dario Amodei, Andrej Karpathy, Andrew Ng, Sebastian Thrun, and Noam Shazeer. These hires found fertile ground at Google to advance their groundbreaking work.

Larry Page hired Sebastian Thrun, then the head of Stanford's AI Lab, to work on machine learning applications. Thrun suggested that Google bring in AI academics part-time to work on various projects, which the founders agreed to. Georges Harik, an engineer with a machine learning PhD from the University of Michigan, advanced a notion that would foreshadow the development of large language models. Geoffrey Hinton, a machine learning professor from the University of Toronto, was brought in by Thrun to give a tech talk at Google about new work on neural networks.

Google's Investments in AI Laid Foundations For Breakthroughs

Google Engineers Developed Language Models Like "PHIL" (Probabilistic Hierarchical Inferential Learner), Powering Google Products

It was mentioned that Google engineers developed groundbreaking language models with applications across Google's products, demonstrating the capability to understand and respond to human language. Jeff Dean is noted to have used the PHIL language model to implement AdSense in a week, contributing to billions in new revenue and expanded ad inventory. By the mid-2000s, such models were consuming significant data center resources because of their wide applications, including predicting search queries and improving ad quality scores.
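
The episode doesn't describe how PHIL worked internally, so the sketch below is only a generic illustration of the idea behind any probabilistic language model: count how often words follow one another, then predict likely continuations. The tiny corpus and all names here are invented for the example:

```python
from collections import Counter, defaultdict

# Toy bigram model: count consecutive word pairs, then turn the counts for a
# given word into a probability distribution over the next word. Real systems
# like PHIL were far more sophisticated, but the underlying idea -- predicting
# likely continuations from observed text -- is the same.
corpus = "cheap flights to paris cheap hotels in paris flights to rome".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("cheap"))    # {'flights': 0.5, 'hotels': 0.5}
print(next_word_probs("flights"))  # {'to': 1.0}
```

Scaled up to web-sized corpora and far richer statistics, this is the kind of capability that lets a system complete a partial search query or judge how well ad text matches a page.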

Acquisition of DeepMind in 2014 Boosted AI Expertise and Research Capabilities

The podcast describes a series of events ...

Additional Materials

Counterarguments

  • While Larry Page may have been influenced by his father's work in AI, it's also possible that the practical needs of improving search algorithms and ad targeting played a significant role in Google's focus on AI.
  • The hiring of top AI researchers could be seen not just as concentrating talent but also as monopolizing the field, potentially stifling innovation elsewhere due to the aggregation of resources and talent within one company.
  • The suggestion to bring in AI academics part-time could be critiqued for potentially creating conflicts of interest between academic openness and corporate secrecy.
  • The development of large language models like "PHIL" may have had unintended consequences, such as reinforcing biases present in the training data or contributing to privacy concerns.
  • The implementation of AdSense using the "PHIL" language model, while profitable, could be criticized for prioritizing ad revenue over user experience or privacy.
  • The use of significant data center resources for language models raises environmental concerns due to the carbon footprint associated with powering large-scale computing infrastructure.
  • The acquisition of DeepMind, while beneficial to Go ...

Actionables

  • You can explore the influence of family on career choices by interviewing relatives about their professions and how those might align with your interests or skills. This could reveal patterns or inspirations in your own career trajectory, similar to how Larry Page was influenced by his father's work in AI.
  • Consider collaborating with experts in a field you're curious about by joining online forums or local clubs where professionals gather. Engaging with them on projects, even in a minor role, can provide insights into their work and potentially inspire innovative ideas, akin to Sebastian Thrun's suggestion for ...

Key AI Technologies and Breakthroughs Developed at Google

Google's relentless pursuit of innovation has led to the creation and refinement of AI technologies that have become foundational in the field.

Google's Development of Transformer Architecture

Transformers by Google: Power of Attention and Scalability Demonstrated

Google's development of the Transformer architecture represents a paradigm shift in the AI field. Before the advent of the Transformer, Google Translate had relied on recurrent neural networks, which were limited by short context windows and did not scale efficiently. In 2017, the Google Brain team published a paper describing the Transformer model, which leveraged attention mechanisms to achieve better parallelization and longer context retention. This mechanism mimics how human translators work, understanding the entire context before translating. Due to its elegance, energy efficiency, and scalability, the Transformer model marked the beginning of a modern AI era.
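
The core of that attention mechanism is compact enough to sketch in code. Below is minimal scaled dot-product self-attention in Python with NumPy, following the formulation in the 2017 paper but omitting the learned projections and multi-head machinery a real Transformer adds; note that every token attends to every other token in a single matrix product, rather than step by step as in a recurrent network:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need" (2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = softmax(scores)        # each row sums to 1: where to attend
    return weights @ V               # weighted mixture of the values

# Toy example: a 4-token sequence with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)          # (4, 8)
```

Because the whole sequence is handled in batched matrix products, the computation parallelizes across hardware in a way an RNN's sequential updates cannot.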

Transformers Were Key In Advancing Large Language Models

The Transformer architecture was not only a milestone for Google Translate but also set the stage for large language models such as BERT, and it later influenced models developed by other organizations, like OpenAI's ChatGPT. Recognizing its potential, Google Brain planned to extend the Transformer architecture beyond text to other modalities, such as images and audio. The Transformer continued Google's legacy in language models and cut Google Translate's error rate by roughly 60% when it replaced the earlier LSTM-based system.

Google Developed the TPU for Efficient AI Training and Inference

Google's TPUs: Custom Chips For Scaling Machine Learning

Anticipating the need for the high-volume matrix multiplications required by neural networks, Google embarked on the development of the Tensor Processing Unit (TPU). Initiated by Google engineer Jonathan Ross and later formalized into an official project, the custom ASIC was designed to run neural network operations efficiently and at scale.
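
The episode doesn't go into the chip's internals, but the workload a TPU is built around is easy to show: a neural-network layer reduces to a large matrix multiplication, the multiply-accumulate pattern that a TPU's systolic array executes directly in hardware. A minimal sketch in Python, with purely illustrative shapes:

```python
import numpy as np

# Illustrative sizes: a batch of 256 activations through one dense layer.
batch, d_in, d_out = 256, 1024, 1024
x = np.random.default_rng(1).normal(size=(batch, d_in)).astype(np.float32)
W = np.random.default_rng(2).normal(size=(d_in, d_out)).astype(np.float32)
b = np.zeros(d_out, dtype=np.float32)

# One forward pass: roughly batch * d_in * d_out multiply-adds. A TPU's
# systolic array performs exactly this multiply-accumulate pattern in
# hardware, which is why a purpose-built ASIC outruns a general-purpose CPU.
y = np.maximum(x @ W + b, 0.0)  # matmul + bias + ReLU
print(y.shape)                  # (256, 1024)
```

Nearly all of the arithmetic in both training and inference is variations on this one operation, which is what makes a specialized chip pay off.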

Google's TPU Architecture and Deployment Gave a Hardware Edge Over Competitors

The TPUs provided Google with a significant hardware edge. Initially kept top secret, TPUs were fundamental in powering projects like AlphaGo. Over time, TPUs evolved, with versions significantly increasing in efficiency. Google's decision to create TPUs internali ...

Additional Materials

Counterarguments

  • While Google's Transformer architecture has been influential, it is not the only paradigm shift in AI; other architectures like RNNs and CNNs have also been pivotal at different stages in AI development.
  • The claim that the Transformer model marked the beginning of a modern AI era could be seen as overlooking other significant contributions and breakthroughs in AI that have occurred independently of the Transformer architecture.
  • The influence of the Transformer architecture on large language models is significant, but it's also important to recognize the contributions of other research and models that have played a role in advancing the field.
  • Google's plan to extend the Transformer architecture to other modalities is ambitious, but it's worth noting that success in one domain (text) does not guarantee success in others (images, audio).
  • The reported 60% reduction in Google Translate's error rate with the adoption of the Transformer model is impressive, but error rates can be influenced by many factors, and such statistics should be interpreted with caution.
  • Google's development of TPUs is a major achievement, but it's also important to acknowledge the role of other hardware accelerators like GPUs and FPGAs in the broader AI hardware ecosystem.
  • TPUs may provide Google with a hardware edge, but this advantage is contingent on continued innovation and competition from other companies that could develop more efficient or cost-effective solu ...

Actionables

  • You can explore the capabilities of AI by using tools like Google Translate to experience firsthand the improvements made by the Transformer model. Try translating phrases between languages you're familiar with and note the accuracy and fluency, reflecting on the 60% error rate reduction mentioned.
  • Experiment with cloud computing by signing up for Google Cloud services to access TPUs and compare their performance to other services using GPUs. Run simple machine learning models using both types of hardware to understand the practical differences in speed and efficiency.
  • Engage with AI-powered apps and services that likely ut ...

Google's Response to External AI Developments and Competition

In the face of burgeoning AI technologies from competitors like OpenAI, Google has taken critical steps to advance its AI development and maintain its position in the market.

ChatGPT and AI Models Threaten Google's Core Business

Since the release of OpenAI's ChatGPT, Google has recognized the serious potential of large language models to disrupt traditional search paradigms. ChatGPT's rapid growth demonstrated its capability to reshape the way users engage with search engines, challenging Google's core business. Google, aware of the progress within its own AI groups, including early chatbot models like LaMDA, responded to the tangible threat by significantly shifting its strategy.

After the launch of ChatGPT, which saw unprecedented adoption, reaching 1 million users within a week and 100 million by the end of January 2023, Google realized that AI was transitioning from a sustaining innovation to a disruptive one; as such, it posed an existential threat. Microsoft's investment of another $10 billion in OpenAI further emphasized the immediacy of the threat.

In response, Sundar Pichai, CEO of Google, issued a "code red" in December 2022, urging the company to expedite the development of native AI products. This urgency led to the release of Bard, which wrapped the LaMDA model in a new chatbot interface, after Microsoft paired OpenAI's technology with Bing. However, the initial public release of Bard was met with criticism over its performance.

Google Merges Brain and DeepMind Teams Under One Leader

Sundar Pichai initiated a significant reorganization within Google, directing the merger of Google Brain and DeepMind into a unified team. This decision was aimed at consolidating Google's AI efforts and streamlining capabilities across the company. Demis Hassabis of DeepMind was appointed as the CEO of this integrated AI division, suggesting a concerted effort to centralize Google's AI initiatives.

Following the reorganization, a new high-performance AI system initiative called G ...

Additional Materials

Counterarguments

  • Google's strategy shift and "code red" may have been a reactive rather than proactive approach, suggesting a potential oversight in anticipating the impact of AI on search.
  • The release of Bard in response to ChatGPT's success could be seen as a rushed attempt to stay relevant, which may have contributed to its initial criticism.
  • The merger of Google Brain and DeepMind could be criticized for potential centralization risks, such as stifling innovation or creating a single point of failure within Google's AI development.
  • Appointing Demis Hassabis as CEO of the integrated AI division might raise concerns about the diversity of leadership and thought, as it places significant control of AI direction under one individual's vision.
  • The announcement and development of Gemini could be critiqued for possibly being too late in the competitive landscape, as other companies had already demonstrated the effectiveness of their AI models.
  • The focus on Gemini as a standardized model might limit the exploration of diverse AI approaches that could be more beneficial in certain applications.
  • Showcasing Gemini at Google I/O is a typical corporate strategy that might not fully reflect the model's readiness or effectiveness in real-world applications.
  • The aim to centralize efforts and reduce costs with Gemini could be seen as prioritizing efficiency ov ...

Actionables

  • You can explore AI advancements by using tools like Bard or ChatGPT to streamline your daily tasks, such as summarizing articles, drafting emails, or creating schedules, to become more efficient and familiar with the capabilities of these systems. By integrating AI into your routine, you'll gain firsthand experience with the technology that's reshaping industries. For example, if you usually spend time summarizing news articles, you could instead input the article into an AI tool and analyze the summary it provides, saving time and increasing productivity.
  • You can enhance your problem-solving skills by posing complex questions to AI systems like Gemini and analyzing their responses to understand how AI approaches problem-solving differently from humans. This practice can broaden your perspective on tackling challenges. For instance, if you're trying to optimize your budget, ask an AI system for suggestions and compare its approach to your own, potentially uncovering new strategies or efficiencies you hadn't considered.
  • You can stay ahead of AI trends by signing up for ea ...

Business and Strategic Implications of AI for Google

As artificial intelligence (AI) continues to evolve, Google is uniquely positioned to benefit from its vast infrastructure and to navigate the challenges AI may pose to its traditional business models.

Google's Infrastructure Might Give It an AI Edge

Ben Gilbert and David Rosenthal unpack how Google’s well-established infrastructure could give the company a significant edge in the competitive AI landscape.

Google's Cost Advantage From Amortizing Model Training Across Users and Services

Google's switch from CPUs to GPUs, orchestrated by Alex Krizhevsky, Geoffrey Hinton, and Ilya Sutskever, led to a transformative $130 million purchase of 40,000 GPUs from Nvidia. This investment, approved by Larry Page, showed Google's early grasp of deep learning's strategic importance to its future. Google has since integrated vast numbers of GPUs into its infrastructure for AI training and inference, a likely competitive advantage.

Gilbert emphasizes that Google's cloud business, with roughly $50 billion in revenue, allows the company to spread the substantial costs of model training across its diverse user services. By producing its own Tensor Processing Units (TPUs) and managing all infrastructure in-house, Google achieves a more favorable cost structure for deploying AI at scale.

Running an AI data center incurs substantial costs, primarily from chips and their depreciation, but Google stands out as a low-cost provider with access to low-markup hardware. Gilbert mentions that model training costs are amortized across every Google search, spread over the 10 trillion to nearly 1 quadrillion inference tokens Google processes in a remarkably short period. This vast scale distributes hardware and training expenses over a significant amount of value creation.
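
That amortization argument can be made concrete with back-of-the-envelope arithmetic. Both figures below are hypothetical, not from the episode; the point is only that even a very large one-time training cost shrinks to a rounding error per token at this serving scale:

```python
# Hypothetical figures, chosen only to illustrate the amortization argument.
training_cost_usd = 100_000_000        # assumed one-time model training cost
tokens_served = 500_000_000_000_000    # assumed inference tokens served (500T)

amortized_per_million = training_cost_usd / tokens_served * 1_000_000
print(f"${amortized_per_million:.2f} per million tokens")  # $0.20
```

Under these assumed numbers, training contributes about 20 cents per million tokens served, and every additional token served pushes that figure lower.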

Google's Data Ownership and Personalization Enable Tailored AI Experiences

David Rosenthal notes that Google's ownership of personalized data from products like Gmail, Maps, Docs, Chrome, and Android allows it to create highly personalized AI products. This edge comes from Google's ability to use that data in ways other companies cannot, since they lack access to such abundant, individualized information.

AI Transition May Disrupt Google's Search Model

Despite Google’s strategic preparations for AI, there is a recognized potential disruption to its traditional revenue streams, specifically those tied to web search and advertising.

AI Assistants May Cut Web Searches, Threatening Google's Ad Revenue

The rise of AI-driven products, like OpenAI's ChatGPT, suggests a potential disruption to Google’s search-based ad revenue. Gilbert and Rosenthal discuss how AI advancements, particularly AI assistants, might reduce the volume of traditional web searches. High-value queries, such as those related to trip pl ...

Additional Materials

Counterarguments

  • Google's infrastructure, while extensive, may not always translate to a competitive edge if competitors innovate more rapidly or efficiently in AI.
  • The switch to GPUs and development of TPUs, though significant, does not guarantee future success in AI, as the field is rapidly evolving and other companies or startups could develop more advanced or specialized hardware.
  • Spreading model training costs across services assumes constant or growing usage of these services, which may not hold true as user preferences and technology landscapes change.
  • Ownership of personalized data for tailored AI products raises privacy concerns and regulatory scrutiny, which could limit Google's ability to leverage this data.
  • The assertion that AI-driven products will reduce traditional web searches is speculative and does not account for users' persistent habits or preferences for web-based searches.
  • Investments in new AI products and services, such as the Gemini project and Bard consumer service, carry risks and do not guarantee the retention of Google's market position, especially if these services ...

Actionables

  • You can explore AI-driven tools to optimize your daily tasks, such as using smart email filters, personalized news aggregators, or AI-based fitness apps that adapt to your behavior and preferences. These tools leverage the same principles of personalized data and machine learning to enhance your productivity and personal life, similar to how Google tailors its AI products.
  • Consider investing in companies that are developing AI infrastructure or innovative AI applications, as this mirrors Google's strategy of investing in foundational models and custom chips. By choosing stocks or funds focused on AI technology, you're aligning your investment strategy with the trends of major tech companies and potentially benefiting from the growth in this sector.
  • Experiment with creating content ...
