
#447 – Cursor Team: Future of Programming with AI

By Lex Fridman

In this episode of the Lex Fridman Podcast, the team behind Cursor shares their vision for AI-powered programming. They discuss how Cursor aims to create a harmonious hybrid of human and AI capabilities, allowing programmers to effortlessly navigate complex systems. The podcast explores Cursor's key technical features, such as predicting and applying code edits and a visual diff interface.

The team also shares insights into AI's transformative impact on programming workflows. They believe AI will serve as a powerful assistant, handling tedious tasks so programmers can focus on high-level design and rapid iteration. However, the integration of AI into programming environments presents challenges, including scaling, infrastructure, security, and privacy concerns, which the team is actively tackling.


This is a preview of the Shortform summary of the Oct 6, 2024 episode of the Lex Fridman Podcast


1-Page Summary

The Vision Behind Cursor

Cursor's Origins

Cursor emerged in response to the rapid advancements in AI, particularly large language models (LLMs). The team observed that while tools like GitHub Copilot were groundbreaking, they only scratched the surface of AI's potential to transform programming workflows.

A Human-AI Hybrid System

Cursor aims to cultivate the "engineer of the future" - a sophisticated hybrid of human and AI capabilities. According to the team, being ahead in AI-powered programming by even a few months translates to exceptional productivity gains. Their goal is for programmers to effortlessly navigate complex systems by harmonizing human ingenuity with AI's capabilities, outperforming even pure AI systems.

Cursor's Key Technical Features

Predicting and Applying Code Edits

Cursor's "Tab" feature uses custom AI models to predict and apply entire code edits, going beyond basic autocomplete. The team aims to eliminate repetitive coding tasks, allowing programmers to focus on high-level intent while the AI handles low-level details.

Sualeh Asif and Aman Sanger discuss how "Tab" extends to multi-line edits, file navigation, and terminal command suggestions, enabled by techniques like speculative edits and multi-query attention.

Visual Diff Interface

Cursor integrates a diff interface that visually highlights the AI's proposed code changes for easy review and acceptance. The team has experimented with UI approaches like side boxes and color-coding to streamline the review process.

Arvid Lunnemark notes the "verification problem" posed by large edits and describes solutions under exploration, such as highlighting the most important changes and giving the model feedback via a shadow workspace.

AI's Transformative Impact on Programming

AI as a Powerful Assistant

The team believes AI will fundamentally transform programming by serving as a powerful assistant, not by replacing human programmers. AI is expected to handle tedious tasks so programmers can focus on both high and low-level aspects of their codebases.

Lunnemark suggests programming with AI may involve visual aids rather than just natural language. Sanger notes that models pretrained on code can pave the way for deeply integrated AI assistance.

Evolving Programmer Skills

While programmers will maintain control, their skills may evolve, with less emphasis on boilerplate code and more on high-level design and rapid iteration. Lunnemark envisions programmers quickly making major codebase changes with AI assistance.

A passion for coding and an eagerness to experiment are seen as key traits for programmers in this future AI-integrated environment, and programming is expected to remain a highly valued and rewarding career.

Scaling and Infrastructure Challenges

Large Codebase and User Scaling

Cursor faces scaling obstacles like efficiently storing and indexing large codebases, managing memory usage, and optimizing for low latency. The team has built custom solutions like Merkle trees and caching mechanisms.

Sanger also discusses hitting a data wall and the possibility of scaling test-time compute instead of model size.

Real-Time AI Integration

Integrating language models, AI components, and real-time interactive features requires extensive engineering work, like building custom language servers and incorporating AI components without impacting performance.

Security and Privacy Concerns

Potential solutions like homomorphic encryption could enable privacy-preserving AI programming. However, centralization risks and the need to prevent model misuse raise security concerns that must be navigated carefully.


Additional Materials

Clarifications

  • Large language models (LLMs) are advanced computational models used in natural language processing tasks like language generation. They learn patterns from extensive text data during training to understand and generate human-like language. LLMs, based on transformer architectures, excel in processing and generating vast amounts of text efficiently. These models can be fine-tuned for specific tasks and are capable of understanding syntax, semantics, and structures in human language.
  • GitHub Copilot is a code completion and automatic programming tool developed by GitHub and OpenAI. It assists users by autocompleting code in various integrated development environments (IDEs) like Visual Studio Code and JetBrains. GitHub Copilot uses generative artificial intelligence to suggest code snippets based on the context of the code being written. It was first announced in June 2021 and is designed to help developers write code more efficiently and effectively.
  • Multi-query attention is a variant of the transformer attention mechanism in which all query heads share a single key head and value head, instead of each head keeping its own key and value projections. Sharing keys and values sharply reduces the size of the key-value cache that must be kept in memory during generation, which speeds up decoding, especially with long contexts and large batch sizes. That efficiency is why it comes up alongside speculative edits when the team discusses making the "Tab" feature fast (a minimal sketch follows this list).
  • A Merkle tree, named after Ralph Merkle, is a tree data structure where each leaf node is labeled with the cryptographic hash of a data block, and non-leaf nodes are labeled with the hash of their child nodes. It enables efficient and secure verification of large data structures by computing a minimal number of hashes. Merkle trees are commonly used in hash-based cryptography to verify data integrity and authenticity, especially in scenarios like peer-to-peer networks where data needs to be validated without trusting the source.
  • Homomorphic encryption is a type of encryption that allows computations to be performed on encrypted data without the need to decrypt it first. This enables data to be processed securely while remaining encrypted, preserving privacy and security. It is particularly useful for scenarios where sensitive data needs to be analyzed or processed by third parties without compromising confidentiality. By enabling operations on encrypted data, homomorphic encryption helps protect information during processing and analysis.
  • Centralization risks in the context of technology and data management typically refer to the potential dangers associated with consolidating control, access, or decision-making power within a single entity or system. These risks can include issues like single points of failure, lack of redundancy, increased vulnerability to cyber threats, and challenges in ensuring accountability and transparency. Centralization risks are often weighed against the benefits of efficiency, uniformity, and ease of management that centralized systems can offer. Strategies to mitigate centralization risks may involve implementing safeguards such as distributed systems, encryption protocols, access controls, and governance frameworks.
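
For readers who want to see the mechanism, below is a minimal NumPy sketch of multi-query attention with made-up dimensions. It illustrates the single shared key/value projection described above; it is not Cursor's implementation, and all sizes are hypothetical.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def multi_query_attention(q, k, v):
        """q: (heads, seq, d) -- one query projection per head.
        k, v: (seq, d)        -- a single key/value projection shared by all heads.
        Sharing k/v is what shrinks the key-value cache compared with standard
        multi-head attention, which stores per-head keys and values.
        (Causal masking is omitted for brevity.)"""
        d = q.shape[-1]
        scores = q @ k.T / np.sqrt(d)       # (heads, seq_q, seq_k)
        weights = softmax(scores, axis=-1)
        return weights @ v                  # (heads, seq_q, d)

    # Toy example: 8 query heads, 16 tokens, 64-dimensional heads.
    rng = np.random.default_rng(0)
    q = rng.normal(size=(8, 16, 64))
    k = rng.normal(size=(16, 64))
    v = rng.normal(size=(16, 64))
    print(multi_query_attention(q, k, v).shape)  # (8, 16, 64)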

Counterarguments

  • The assumption that AI will not replace human programmers might be overly optimistic, as increasing capabilities could automate more complex tasks, potentially reducing the demand for certain programming skills.
  • While the "Tab" feature aims to eliminate repetitive tasks, it could also lead to a lack of understanding of the underlying code among new programmers, who might become overly reliant on AI suggestions.
  • The visual diff interface, while helpful, may not always accurately represent the impact of changes in code, potentially leading to oversight of important details that could cause bugs or other issues.
  • The idea that AI will serve as a powerful assistant assumes that all programmers will adapt to and accept this new paradigm, which may not be the case due to varying levels of comfort with AI tools and resistance to change in workflow.
  • The evolution of programmer skills towards high-level design and rapid iteration could marginalize those who excel at and prefer detailed, low-level programming work.
  • Scaling challenges, such as storing large codebases and optimizing for low latency, might not be fully solvable with current technology, leading to compromises in performance or functionality.
  • Real-time AI integration is complex and could introduce new points of failure into the programming environment, potentially making systems more fragile or unpredictable.
  • Security and privacy concerns might not be fully addressable with current technologies like homomorphic encryption, and the centralization of AI services could create significant vulnerabilities and dependencies.
  • The focus on AI assistance could inadvertently create a monoculture in programming approaches, stifling diversity in problem-solving and innovation.
  • The reliance on AI for programming assistance could also lead to a devaluation of traditional programming knowledge and expertise, potentially undermining the profession's foundational skills.


The motivation and vision behind Cursor as a code editor powered by AI

Cursor, an innovative code editor powered by artificial intelligence (AI), was born out of a vision to fundamentally transform programming as we know it.

Cursor emerged as a response to the rapid progress in AI, particularly large language models

The team behind Cursor identified the rapid advancements in AI, especially in large language models (LLMs), as a sign that programming would be fundamentally changed. They observed that current AI-powered tools like GitHub Copilot were only scratching the surface of what could be achieved in terms of improving programmer productivity and coding workflows. With their sights set on the future, they sought to develop a tool that deeply integrates AI capabilities in a way that couldn't be confined by the limitations of plugins or extensions in platforms like VS Code.

The team's goal is to build the "engineer of the future" - a hybrid human-AI system

The Cursor team's ambition extends beyond creating another useful coding tool. Their vision for Cursor is to cultivate the "engineer of the future," a sophisticated hybrid of human and AI that outperforms a human programmer working alone. They imagine a workspace where programmers retain ultimate control and can iterate quickly on their codebases based on their own judgment. This hybrid system is meant to make programming feel effortless, with no wasted keystrokes and the ability to navigate complex systems with ease. Cursor aims to empower programmers to harmonize their ingenuity with AI's capabilities and thereby outperform not just unaided humans but pure AI systems as well.

In AI, months ahead means light-years in efficiency

Cursor's founding team believes that AI's rate of innovation means that a few months of advancement can equate to years of efficiency gains. They maintain that being ahead in AI-powered programming, even by just a few months, translates to exceptionally enhanced productivity. This belief fuels their goal of continuous evolution, expecting that Cursor will render its current form obsolete within a year's time.

Michael Truell, part of the Cursor team, likened the philosophy behind Cursor's AI to that of predictive autocomplete. He implied that the AI is engineered to predict and facilitate the programmer's next steps, thus revolutionizing productivity. This suggests that Cursor's AI aims t ...



Additional Materials

Clarifications

  • Large language models (LLMs) are advanced computational models used in natural language processing tasks like language generation. These models learn patterns and relationships from vast amounts of text data during training to enhance their understanding and predictive capabilities. LLMs are typically built using transformer-based architectures, allowing for efficient processing and generation of text on a large scale. They can be fine-tuned for specific tasks and are designed to capture syntax, semantics, and other linguistic nuances present in human language corpora.
  • GitHub Copilot is an AI-powered code completion tool developed by GitHub and OpenAI to assist developers in writing code more efficiently by providing suggestions and autocompletions based on the context of their code. It works best with languages like Python, JavaScript, TypeScript, Ruby, and Go, and is available as a subscription service for individual developers and businesses. GitHub Copilot aims to enhance programmer productivity by leveraging artificial intelligence to generate code snippets and improve coding workflows within popular integrated development environments (IDEs) like Visual Studio Code and JetBrains.
  • Predictive autocomplete in the context of Cursor's AI technology means the system anticipates and suggests the next steps in a programmer's code, enhancing productivity. It functions similarly to predictive text on mobile devices, offering suggestions to complete or improve the programmer's code. This feature aims to streamline the coding process by providing intelligent suggestions based on the context of the code being written. ...

Counterarguments

  • AI integration in coding tools might lead to over-reliance on technology, potentially diminishing the problem-solving skills of programmers.
  • The claim that a few months of AI advancement equate to years of efficiency gains may be overly optimistic and not account for the complexity of integrating such advancements into practical tools.
  • The vision of the "engineer of the future" may overlook the nuanced understanding and creativity that human programmers bring to problem-solving, which AI may not replicate or complement effectively.
  • There is a risk that tools like Cursor could introduce new types of errors or biases into the code, which could be harder to detect and rectify due to the opaque nature of AI systems.
  • The rapid obsolescence of technology, as suggested by the expectation that Cursor will render its current form obsolete within a year, could lead to a wasteful cycle of consumption and learning for programmers.
  • The idea that Cursor will outperform pure AI systems may not account for the possibility that AI technology could evolve to a point where it outpaces the hybrid human-AI model in efficiency and innovation.
  • The assertion that Cursor is more than just an advanced code editor might be marketing hyperbole until its real-world effectiveness and adoption can be evaluated.
  • The belief that p ...


The technical implementation details of Cursor's key features

The technical team behind Cursor delves into the groundbreaking features of their AI programming tool, designed to revolutionize the coding experience by predicting and applying entire code edits.

Cursor's "Tab" feature aims to predict and apply entire code edits, going beyond basic autocomplete.

The team has introduced speculative editing with the Cursor "Tab" feature, aiming to predict and apply entire code edits. This involves custom models and techniques like speculative edits and multi-query attention to ensure the feature is fast and responsive. They aim to eliminate "low entropy" actions – predictable patterns or repetitive tasks in coding – to allow programmers to focus on the higher-level intent, leaving the AI to handle the tedious details.

Sualeh Asif and Aman Sanger discuss the "Tab" feature's capabilities, which extend beyond single-line edits to multi-line changes, jumps to different locations in the same file, and even suggestions for terminal commands related to the code changes. The feature fast-tracks the editing process by using speculative edits, allowing the model to generate tokens rapidly even with larger batch sizes, thanks to the integration of multi-query attention. For low-latency execution, small models are employed, and sparse modeling with a mixture-of-experts (MoE) model has significantly improved performance with longer contexts.
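
The episode does not spell out the exact algorithm, so the following Python sketch only illustrates the general idea of speculative edits under stated assumptions: the existing file is offered to the model as a draft, whole chunks of it are verified cheaply, and real generation happens only where the model wants a change. The `score_draft` callable is a hypothetical stand-in for a batched model call.

    def speculative_edit(original_tokens, score_draft, chunk_size=16):
        """Toy sketch of speculative edits.

        Most of an edited file is unchanged, so the original text is offered to
        the model as a draft, chunk by chunk. `score_draft(prefix, draft)` is a
        hypothetical batched model call that returns how many draft tokens were
        accepted and, if the chunk was not fully accepted, the model's own token
        at the first disagreement. Disagreements are treated as one-token
        substitutions here, which ignores insertions and deletions.
        """
        output, i = [], 0
        while i < len(original_tokens):
            draft = original_tokens[i:i + chunk_size]
            accepted, correction = score_draft(output, draft)
            output.extend(draft[:accepted])   # accepted draft tokens are "free"
            i += accepted
            if correction is not None:        # model overrides the next token
                output.append(correction)
                i += 1                        # skip the rejected original token
        return output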

Michael Truell emphasizes the goal of an ergonomic, smart, and fast editing experience, with the AI anticipating not just the next characters but entire coherent changes in the code. Aman Sanger adds that instruction fine-tuning and the creation of synthetic data for better responses also play a major role in Cursor's functionality. He also notes the importance of retrieval systems, re-ranking scores, and cache warming in making Cursor more intuitive and resource-friendly.
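
Cursor's retrieval pipeline is not described in detail, but the general retrieve-then-re-rank pattern Sanger alludes to can be sketched as follows. `embed` and `rerank_score` are hypothetical stand-ins for the real models, and in practice chunk embeddings would be precomputed and cached rather than recomputed per query.

    import numpy as np

    def retrieve_and_rerank(query, chunks, embed, rerank_score, k=50, final_k=10):
        """Two-stage retrieval: cheap vector similarity narrows many code chunks
        down to k candidates, then a slower, finer-grained scorer re-ranks them."""
        q = embed(query)                                   # (d,)
        chunk_vecs = np.stack([embed(c) for c in chunks])  # (n, d); cached in practice
        # Cosine similarity between the query and every chunk.
        sims = chunk_vecs @ q / (
            np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9
        )
        candidates = [chunks[i] for i in np.argsort(-sims)[:k]]
        # Re-rank the shortlist with the more expensive relevance score.
        reranked = sorted(candidates, key=lambda c: rerank_score(query, c), reverse=True)
        return reranked[:final_k]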

Cursor integrates a diff interface that visually shows the AI's proposed code changes, allowing for easy review and acceptance.

The Cursor team has experimented with various UI approaches to streamline the diff review process. The diff feature highlights the proposed code changes, with different iterations focusing on intuitive and efficient review. A side box displays the code to be deleted and added, while previous versions used color-coding to represent deletions and suggestions subtly.
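
Cursor's diff interface is custom, but the underlying information it color-codes, which lines are removed and which are added, is the same information a standard textual diff produces. A small illustration using Python's built-in difflib:

    import difflib

    old = [
        "def total(items):",
        "    result = 0",
        "    for x in items:",
        "        result = result + x",
        "    return result",
    ]
    new = [
        "def total(items):",
        "    return sum(items)",
    ]

    # unified_diff marks removed lines with '-' and added lines with '+':
    # the same information a visual diff interface color-codes for review.
    for line in difflib.unified_diff(old, new, fromfile="before", tofile="after", lineterm=""):
        print(line)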

Arvid Lunnemark addresses "the verification problem," ...



Additional Materials

Counterarguments

  • The "Tab" feature's ability to predict and apply entire code edits might not always align with the programmer's intent, leading to potential misunderstandings or the need for frequent corrections.
  • Eliminating low entropy actions could result in a loss of control or understanding of the codebase for programmers, as they might not be as involved in the detailed coding process.
  • The feature's suggestions for multi-line changes and terminal commands might not always be contextually appropriate, which could introduce errors or inefficiencies.
  • Rapid token generation with speculative edits may prioritize speed over accuracy, potentially compromising code quality.
  • The use of small models and sparse modeling might not capture the complexity of certain coding tasks, leading to oversimplified solutions.
  • Anticipating entire coherent changes in the code could be challenging in dynamically changing projects where requirements evolve frequently.
  • Instruction fine-tuning and synthetic data creation might not fully represent the diversity of real-world coding scenarios, possibly limiting the tool's effectiveness.
  • Retrieval systems, re-ranking scores, and cache warming strategies might not always yield the most relevant or optimal suggestions, especially in edge cases or less common programming patterns.
  • The diff interface, while helpful, might not always clearly convey the impact of changes on the overall codebase, especially in large projects with multiple dependencies.
  • The shadow workspace concept, although innovative, could introduce a disconnect between the model's suggestions and the actual codebas ...

Actionables

  • You can enhance your coding efficiency by setting up custom keyboard shortcuts that perform multiple actions in your text editor. For example, create a shortcut that comments out a block of code and then moves your cursor to the next function, mimicking the multi-line change and jump features mentioned.
  • Improve your code review process by using color-coded diff tools that are not integrated into your current setup. Experiment with different color schemes or plugins that make it easier to spot changes, similar to the diff interface described, to streamline your review process.
  • Experiment with creating your own set of speculative edits by keeping a ...


The role of AI in the future of programming and coding workflows

As AI continues to evolve, the team discusses its expected transformative impact on programming, not by replacing human programmers but by amplifying their productivity and creativity.

The team believes AI will fundamentally transform programming, enabling new levels of productivity and creativity.

AI in programming is seen as a powerful assistant to the programmer. The idea is not to fully automate programming but to keep the human programmer in the driver's seat. AI is expected to handle tedious details, allowing programmers to focus on both high-level and low-level aspects of their codebase fluidly. The team expresses frustration that while models improve, the coding experience hasn’t changed much. They see AI-powered tools enabling new capabilities and experiences for programmers, enhancing productivity and the ability to use new features quickly.

Arvid Lunnemark points out that programming with AI's help won't always involve natural language. In some cases, showing an example or using visual elements might be more efficient. The foundational knowledge of code that models build up during pre-training could also prove useful for recognizing bugs or "sketchiness" in code.

Aman Sanger discusses models that have seen code during pre-training and can answer questions about it, paving the way for a programming future where AI assistance is deeply integrated. Michael Truell echoes this sentiment, predicting that the best AI-driven products in the coming years will greatly surpass today's standards. He suggests that AI has already made programming more enjoyable by reducing boilerplate and allowing faster, more controlled building.

Rather than fully automating programming, the goal is to keep the human programmer in the driver's seat, with AI serving as a powerful assistant.

Programming is set to become a strategic partnership between human direction and AI execution. Because context space is limited and relevant codebase information lives outside the prompt, the system must pre-emptively decide what to include when querying the AI. Lunnemark says programmers should do what comes most naturally to them, with the system figuring out how to make sense of their input. Truell emphasizes that while AI can hold a conversation about software building, humans should maintain control over the numerous micro-decisions involved in software design.

The team suggests that programmers might be able to edit pseudocode and see corresponding actual code changes, keeping them in control while gaining productivity. Lunnemark even posits that formal verification could take over testing, with AI suggesting specs for functions and computing proofs for implementation.
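
The discussion stays high level, but the spec-then-verify idea can be illustrated with a toy sketch: a machine-readable specification for a function plus an exhaustive check over a small input domain. Real formal verification would use a proof assistant or SMT solver to prove the property for all inputs; everything below, including the function names, is hypothetical.

    from itertools import chain, permutations

    def spec_sorted(xs, ys):
        """Machine-readable spec: ys must be xs sorted in ascending order."""
        return list(ys) == sorted(xs)

    def candidate_sort(xs):
        """Candidate implementation, e.g. one proposed by an AI assistant."""
        return sorted(xs)

    def check(spec, impl, inputs):
        """Exhaustively check impl against spec over a small input domain.
        A real verifier would prove the property for all inputs instead."""
        return all(spec(xs, impl(xs)) for xs in inputs)

    # Every tuple of length 0..3 drawn from {0, 1, 2}.
    small_inputs = list(chain.from_iterable(permutations(range(3), r) for r in range(4)))
    print(check(spec_sorted, candidate_sort, small_inputs))  # True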

The skills and mindset required of programmers may evolve, with less emphasis on boilerplate and more on high-level design and rapid iteration.

Arvid Lunnemark envisions programmers quickly making significant codebase chang ...



Additional Materials

Clarifications

  • Formal verification is a method to mathematically prove that a program meets its specifications. It involves rigorous mathematical analysis to ensure the correctness of software. When formal verification takes over testing, it means that instead of relying solely on traditional testing methods, mathematical proofs are used to verify the correctness of the software. This approach can provide a higher level of confidence in the software's correctness but typically requires specialized skills and tools.
  • AI suggesting specs for functions involves artificial intelligence providing recommendations for the specifications or requirements of functions within a software program. This means AI can propose details like input parameters, expected outputs, and behavior for each function. Computing proofs for implementation involves AI verifying that the code implementation of a function meets the specified requirements or specifications through mathematical or logical proofs. This process helps ensure the correctness and reliability of the code.
  • Editing pseudocode and seeing corresponding code changes involves working with a simplified, human-readable representation of code that is easier to understand and modify. This process allows programmers to focus on the logic and structure of the program before translating it into actual code. By editing pseudocode, programmers can iteratively refine their algorithms and immediately visualize how these changes would translate into the actual programming language they are using. This approach can streamline the development process by providing a clear bridge between high-level design ...

Counterarguments

  • AI may not be able to handle all tedious details effectively, as some aspects of programming require nuanced understanding and decision-making that AI might not yet be capable of.
  • Over-reliance on AI-powered tools could lead to a degradation of fundamental programming skills among new programmers.
  • The prediction that AI will enhance productivity could be overly optimistic if the integration of AI into workflows introduces new complexities or dependencies.
  • AI's ability to recognize bugs or sketchiness in code might be limited to patterns it has been trained on, potentially missing novel or complex bugs.
  • Keeping human programmers in control may be challenging if AI systems become too complex for most programmers to understand or if they generate code that is difficult to interpret.
  • The idea of a strategic partnership between human direction and AI execution assumes that AI will seamlessly understand human intent, which may not always be the case.
  • Editing pseudocode and seeing corresponding code changes could oversimplify the programming process and might not always produce optimal or efficient code.
  • The assumption that formal verification could take over testing underestimates the complexity and subtlety of real-world software systems, where formal verificatio ...


Challenges around scaling and infrastructure when building an AI-powered programming tool

As AI-powered programming tools gain popularity, teams like Cursor's face complex technical and infrastructure obstacles. These challenges include efficiently handling large codebases and growing user numbers, maintaining performance, and ensuring privacy.

Cursor faces significant technical challenges in scaling its systems to handle large codebases and growing numbers of users.

Caching, database overflows, and table scaling issues are some of the technical hurdles discussed by Michael Truell. To manage semantic indexing of codebases and answer related questions, Cursor built custom retrieval systems that are challenging to scale. Sualeh Asif underscores the difficulty of scaling the tool for companies with large, legacy codebases. Aman Sanger indicates the team is hitting a data wall and considers improving model performance by scaling test-time compute instead of model size.

Issues like efficiently storing and indexing codebases, managing memory usage, and optimizing for latency have required custom solutions.

To tackle problems related to scalability, the team utilizes a Merkle tree structure. Sanger also mentions caching computed embedding vectors keyed by the hash of each code chunk, which accelerates processing for additional users. Sualeh Asif mentions ongoing experiments to improve file suggestion accuracy. To enhance efficiency and reduce overhead, the team employs a KVCache for storing keys and values from the transformers' attention mechanism.
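
Cursor's actual data structures are not public, so the following Python sketch only illustrates the two ideas mentioned: caching embedding vectors keyed by a content hash so identical chunks are never re-embedded, and folding file hashes into a Merkle-style root so unchanged trees can be skipped during re-indexing. The `embed` parameter is a hypothetical stand-in for the embedding model.

    import hashlib

    def chunk_hash(chunk):
        return hashlib.sha256(chunk.encode("utf-8")).hexdigest()

    # Cache keyed by content hash: identical chunks (across users, branches, or
    # re-index runs) reuse the stored vector instead of being re-embedded.
    embedding_cache = {}

    def embed_chunk(chunk, embed):
        key = chunk_hash(chunk)
        if key not in embedding_cache:
            embedding_cache[key] = embed(chunk)  # `embed` is a hypothetical model call
        return embedding_cache[key]

    def merkle_root(file_hashes):
        """Fold per-file hashes into one root hash; if it matches the root from
        the previous indexing run, the whole tree can be skipped."""
        level = file_hashes or [chunk_hash("")]
        while len(level) > 1:
            pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
            level = [chunk_hash("".join(pair)) for pair in pairs]
        return level[0]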

Building the infrastructure to support real-time, interactive AI-powered programming features is an ongoing challenge.

The integration of language models, AI components, and real-time interactive features necessitates considerable engineering work. Lunnemark discusses Cursor's use of Language Server Protocol (LSP) communication, which interfaces with various language extensions that provide vital feedback to AI models. Running this protocol in the background requires nuanced engineering to ensure the user experience is not negatively impacted.
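
The Language Server Protocol itself is a public standard (JSON-RPC bodies framed with a Content-Length header), so the kind of background message exchange described here can be sketched concretely; the file URI and document contents below are made up.

    import json

    def frame_lsp_message(payload):
        """LSP messages are JSON-RPC bodies preceded by a Content-Length header."""
        body = json.dumps(payload).encode("utf-8")
        header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
        return header + body

    # Notify the language server that a document changed, so diagnostics
    # (type errors, lints) can be recomputed and fed back to the editor or model.
    notification = {
        "jsonrpc": "2.0",
        "method": "textDocument/didChange",
        "params": {
            "textDocument": {"uri": "file:///example/project/main.py", "version": 2},
            "contentChanges": [{"text": "def main() -> None:\n    print('hi')\n"}],
        },
    }
    print(frame_lsp_message(notification)[:60])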

Integrating language models, retrieval systems, and other AI components in a performant way requires significant engineering effort.

Challenges include incorporating AI components like automatic context, which could potentially slow down models and make them expensive. Cursor faces the challenge of handling large AI models that demand significant computational resources. Sanger talks about scaling issues with mixture-of-experts (MoE) models, which could be prohibitively large for local use on users' machines.

The team is exploring approaches like homomorphic encryption to enable privacy-preserving AI-powered programming in the future.

Privacy concerns are paramount with AI mo ...



Additional Materials

Clarifications

  • A Merkle tree, named after Ralph Merkle, is a tree data structure where each leaf node is labeled with the cryptographic hash of a data block, and non-leaf nodes are labeled with the hash of their child nodes. It enables efficient and secure verification of large data structures by computing a number of hashes proportional to the logarithm of the number of leaf nodes. Merkle trees are commonly used in hash-based cryptography to ensure data integrity and authenticity, especially in scenarios like verifying data blocks in peer-to-peer networks.
  • A KVCache, short for Key-Value Cache, is a type of caching mechanism that stores data in a key-value format, allowing for quick retrieval of information based on unique identifiers (keys). It is commonly used to improve performance by reducing the need to repeatedly access slower storage mediums like databases. KVCache systems are efficient for storing and retrieving small pieces of data quickly, making them ideal for optimizing specific operations within a larger system. In the context of Cursor's AI-powered programming tool, the team uses a KVCache to store keys and values from the transformers' attention mechanism, enhancing efficiency and reducing processing overhead (a minimal sketch follows this list).
  • The Language Server Protocol (LSP) is a standardized communication protocol used between integrated development environments (IDEs) and language servers. It enables IDEs to provide advanced features like code completion, error checking, and refactoring by communicating with language-specific analysis tools. LSP helps in decoupling the development environment from language-specific tools, allowing for interoperability across different programming languages and IDEs. This protocol streamlines the development process by providing a common interface for IDEs to interact with lang ...
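
As a rough illustration of the KVCache idea in the clarification above, the sketch below stores per-layer keys and values as decoding proceeds, so each new token attends against cached tensors instead of recomputing the whole prefix. Shapes and layer counts are hypothetical, and real implementations use preallocated accelerator tensors rather than Python lists.

    import numpy as np

    class KVCache:
        """Append-only cache of per-layer attention keys/values for one sequence."""
        def __init__(self, num_layers):
            self.keys = [[] for _ in range(num_layers)]
            self.values = [[] for _ in range(num_layers)]

        def append(self, layer, k, v):
            # k, v: key/value vectors for the newest token at this layer.
            self.keys[layer].append(k)
            self.values[layer].append(v)

        def get(self, layer):
            # Stacked (seq_len, head_dim) tensors for attention over the prefix.
            return np.stack(self.keys[layer]), np.stack(self.values[layer])

    # Usage: decoding token t only computes k/v for token t and attends against
    # the cached prefix, instead of re-running attention over the full sequence.
    cache = KVCache(num_layers=2)
    cache.append(layer=0, k=np.zeros(64), v=np.ones(64))
    ks, vs = cache.get(layer=0)
    print(ks.shape, vs.shape)  # (1, 64) (1, 64)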

Counterarguments

  • While custom solutions are necessary for unique challenges, they can also lead to increased complexity and maintenance overhead, potentially making the system more brittle and harder to manage as it scales.
  • The focus on real-time, interactive features might come at the cost of neglecting other important aspects such as long-term maintainability, security, or even basic functionality.
  • Significant engineering effort to integrate various AI components might not always translate into proportional user value, especially if the features are not aligned with user needs or are too complex to be used effectively.
  • Handling large AI models and computational resources could be mitigated by leveraging more efficient algorithms or models that require less computation without significantly compromising performance.
  • The exploration of homomorphic encryption is a forward-thinking approach, but it might be impractical for widespread use in the near term due to its computational overhead and the current state of technology.
  • The concerns about security, privacy, and centralization risks might be overstated if proper data governance and user consent mechanisms are i ...
