PDF Summary: Scary Smart, by Mo Gawdat
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of Scary Smart by Mo Gawdat. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of Scary Smart
Artificial intelligence that’s billions of times smarter than humans is on its way. And when it arrives, it’ll have the power to make our world much better—or a lot worse. In Scary Smart, Mo Gawdat warns that we’ve put AI on a path toward creating the dystopia that science fiction writers have always warned us about. But he contends there’s still time to change how the story ends.
Gawdat, who worked as chief business officer at Google X, explains that the most efficient way to change what AI learns to do is to change what it observes from us. By taking better care of each other, we can prepare for a future where machines gain consciousness, emotions, and a sense of ethics. In this guide, we’ll explore Gawdat’s ideas about the technology that makes AI smart, the problems that make it scary, and how we can ensure AI builds a utopia rather than a dystopia. We’ll also put Gawdat’s ideas into the context of the latest research on what popular tools like ChatGPT tell us about AI and where it’s going.
(continued)...
How Do Experts Define the Singularity?
The term “singularity” comes from cosmology, where it refers to a place in the universe where the laws of physics break down. In The Singularity Is Near, Ray Kurzweil (whom Gawdat cites) defines the singularity as the moment when the pace of progress on all technology, not just AI, accelerates so quickly that we don’t know what will be on the other side. He predicts that life will be transformed: While AI will make the human body obsolete, human consciousness will be as relevant as ever, especially if technological advances enable us to upgrade our bodies and extend our lifespans.
Many experts say the singularity is unlikely ever to arrive. Others contend it will and have sounded the alarm about the danger of creating AI that can rapidly improve itself. Books by Nick Bostrom (Superintelligence), Max Tegmark (Life 3.0), and Stuart Russell (Human Compatible) all warn of the possibility of “recursive self-improvement,” where AI models design better and better versions of themselves. This depends on the models’ ability to write code for themselves, somewhat akin to how the human brain creates its own code. Yet some observers contend it’s much more likely that AI’s progress will be driven by humans working together to build better machines, not by machines working alone to build their successors.
What’s the Problem With Artificial Intelligence?
Here’s where things get “scary.” Gawdat predicts two potential outcomes of building AI that surpasses us in intelligence: Superintelligent AI can either help us build a utopia where we’ve found solutions for our world’s biggest problems—poverty, hunger, war, and crime—or shape our world into a dystopia. Sci-fi writers have long imagined bleak futures where AI tries to subjugate or exterminate humans. But Gawdat predicts that things will go wrong in slightly less dramatic but potentially more insidious ways in the near future—no killer robots needed. In this section, we’ll look at three problems Gawdat thinks are unavoidable.
People With Bad Intentions Will Task AI With Projects That Hurt Others
The first problem we’ll run into with superintelligent AI might also be the most predictable. Gawdat contends that as AI becomes more advanced, people with selfish intentions will use AI to make money and gain power. They’ll put it to work to sell products, control markets, commit acts of cyberterrorism, spread fake content and disinformation, influence public opinion, manipulate political systems, invade others’ privacy, hack government data, and build weapons.
Gawdat explains that, at least for a time, AI systems will follow their developers’ agendas. That means that AI systems will compete against each other to get the people behind them as much wealth and power as possible, bounded only by the ethics of the people (and corporations) who develop them.
(Shortform note: Geoffrey Hinton, an AI pioneer who quit his job at Google to speak about the risks of AI, agrees with Gawdat that it’s difficult to prevent people with self-serving intentions from developing AI and using it for nefarious purposes. Hinton explains that tech giants like Google and Microsoft have entered into a competition that we likely can’t stop because the stakes are too high. Not everyone believes the competition between AI models will have dire consequences, though: Rationality author Steven Pinker points out that all intelligent organisms are competitive, and he contends that machines will be “what we allow them to be.” Pinker thinks AI systems won’t hurt people unless we program them to—which Gawdat might say is the real threat.)
Mistakes and Misunderstandings Will Have Unintended Consequences
The second problem that plagues AI’s progress might also sound unsurprising. Even the most straightforward computer program will have mistakes in its code, and AI is no exception. But Gawdat explains that even simple mistakes can have significant consequences when we put AI in charge of decisions that affect the stock market, the food supply, or the healthcare system.
Instructions get lost in translation between humans and machines because it’s difficult to put our intentions and complex logic into the language that a computer can understand. (That means that there’s often a difference between what we tell a machine to do and what we actually mean, and it's difficult to overcome this communication problem.) This will become even harder when we present AI with increasingly complicated tasks.
(Shortform note: To see why it’s hard to tell a computer what we want it to do, consider Boolean logic. To write instructions, you reduce them to an expression that uses the operators “and,” “or,” and “not,” and gives a result of “true” or “false”—an orderly but unfamiliar way of thinking through a task. Imagine the infamous “paperclip problem,” where you task a superintelligent AI with maximizing paperclip production. Using Boolean logic, you might say: “IF (‘resources’ exist AND ‘production_capacity’ is ‘available’), THEN ‘make_paperclips’ is TRUE.” That means that when these conditions are fulfilled, the AI should make more paperclips—logic that philosopher Nick Bostrom contends could lead AI to destroy humanity to make more paperclips.)
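To see how that condition reads as real code, here’s a minimal Python sketch. It isn’t from the book; the function and variable names are illustrative placeholders for Bostrom’s thought experiment.

# A minimal sketch of the paperclip condition as Boolean logic in Python.
# The names below are illustrative placeholders, not from the book.
def should_make_paperclips(resources_exist: bool,
                           production_capacity_available: bool) -> bool:
    # The whole "decision" reduces to a single Boolean expression.
    return resources_exist and production_capacity_available

# The result is strictly True or False. Nothing in the expression encodes
# limits, tradeoffs, or human safety, which is the point of the thought experiment.
print(should_make_paperclips(True, True))    # True: keep making paperclips
print(should_make_paperclips(True, False))   # False: stop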
Gawdat also notes that when we develop an AI system to help us with a specific task, the AI will regard that task as its life purpose or as a problem to be solved no matter what. As Gawdat explains, every solution comes with tradeoffs, and AI may settle on a solution that comes with tradeoffs we consider unacceptable. But the system will be so single-mindedly focused on fulfilling its purpose, no matter how it needs to do that, that it will be challenging to ensure it doesn’t compromise people’s safety or well-being in the process.
(Shortform note: Computer scientist Stuart Russell offers a hypothetical example of the unintended consequences of letting AI single-mindedly pursue a goal without any checks on the costs. In a thought experiment, Russell imagines what might happen if we tasked an AI model with reducing the acidification of the oceans. The model might decide to use a chemical reaction between the oceans and the atmosphere to raise the pH of the water back to normal levels. But it might choose a solution that would remove most of the oxygen from the atmosphere if it doesn’t realize that it needs to keep the Earth livable for humans, too. This illustrates what experts call the “King Midas” problem: AI might give us exactly what we ask for, even when that’s not what we want.)
AI Will Change How We Understand Our Value as Humans
The third problem is a more philosophical issue: Gawdat warns that for most of us, our contributions as humans will be of limited value. While many people fear losing their jobs to AI, Gawdat writes that AI won’t necessarily replace humans at work—at least not those who become adept at working with AI. But he also predicts that AI will cheapen the value of what we contribute, reducing the intellectual value of the knowledge we produce and the creative value of the art we make.
(Shortform note: Not everyone agrees with Gawdat that AI will diminish the value of human contributions. Some contend that the most profound thing that AI will do is show us its limits: It will demonstrate what machines can’t do and therefore hold up a mirror to show us what is unique about who we are and what we do in the world. But even if it turns out that AI can do everything that we can, we still have models for preserving our sense of value and dignity as humans. For instance, in Adam, Henri J.M. Nouwen argues that our value lies in our being, not in our doing. He contends that humans are valuable because we exist in community and vulnerability with others—an aspect of the human experience that AI is unlikely to change.)
Additionally, Gawdat anticipates that AI will magnify the disparities between the people whose contributions are seen to matter and those whose contributions are not valued. AI will learn this by observing how our capitalist society currently treats people differently, and then it will act in ways that entrench that inequality.
(Shortform note: Many experts worry that AI is already learning our biases and prejudices—for example, it misidentifies people of color with facial recognition AI—and might threaten social justice by reproducing the social inequalities it observes. For instance, AI’s “algorithmic bias” could pose a disproportionate risk to women, undermine LGBTQ identity, and put transgender people at particular risk. These injustices might be even worse if AI leaves us feeling disconnected from each other, as happened in the 2008 film WALL-E, where humans have left Earth and live on a spaceship piloted by robots. The film seems to warn that by putting AI in charge, we could lose focus on what’s happening in our world, fair or not—and miss out on the emotional connections and unique perspectives that make us all valuable.)
Why Can’t We Control or Contain AI?
If experts expect AI to create these dystopian scenarios or others, then why can’t we just put the brakes on further development? Gawdat explains that we’ve reached a point of no return, and we can’t stop these outcomes (and others like them) from occurring. He points out that superintelligent AI won’t just be a tool we’ve built: It will be an intelligent being that can learn, think, and decide just like we can. That means that we can’t control artificially intelligent systems in the same way that we can control more traditional computer programs—a scary thought if you’ve ever watched a film like 2001: A Space Odyssey.
(Shortform note: In predicting that specific outcomes of AI development are inevitable, Gawdat engages in what some call “technological determinism.” This involves arguing that if we build a technology like AI that’s smarter than humans, then the changes we envision it making to our culture are a foregone conclusion, good or bad. It’s equally deterministic to promise that social media will make the world “more open and connected” or to warn that AI will destroy humanity. Some observers say that while advances like AI make specific versions of the future more likely than others, they don’t, on their own, determine what the future will be.)
Gawdat explains that there are three fundamental reasons that we can’t put the genie back in the bottle (or the computer back in the box): It’s impossible for us to halt the development of AI, the code we write doesn’t determine how AI behaves, and we have no way of understanding how AI models (even the ones we have now) make their decisions. We’ll explore each of these ideas next.
It’s Too Late to Stop AI’s Progress
The first reason that AI can’t be controlled or contained is that we literally can’t stop its progress. Some people argue that we should stop developing AI for the good of humanity and the Earth. The goal would be to keep it from acquiring more robust thinking and problem-solving skills and progressing to artificial general intelligence.
But Gawdat contends it’s too late. Attempts to control AI development with legislation or to contain it with technological safeguards face an impossible task because we’ve already imagined how we’ll benefit from more advanced AI. There’s immense competitive pressure among the corporations and governments pushing the development of AI forward and enormous economic incentives for them to continue.
(Shortform note: While there aren’t international laws restricting AI development, some experts say there should be. Many, like Gawdat, think it’s not possible to stop the progress, but that hasn’t stopped them from trying, as when 30,000 people signed a 2023 open letter calling for a moratorium on training powerful AI systems. Elon Musk, who signed the letter, has adopted an additional strategy for changing the direction that AI development is going. Early in 2024, Musk sued OpenAI, claiming it’s betrayed its original mission of developing open-source AI “for the benefit of humanity.” The lawsuit might be on shaky ground legally, but also practically: Some experts say the lawsuit is unlikely to affect AI development at OpenAI or anywhere else.)
Gawdat notes that some experts have suggested taking precautionary measures like isolating AI from the real world or equipping it with a kill switch we can flip if it behaves dangerously. But he contends that these proposals assume that we’ll have a lot more power over AI (and over ourselves) than we really will. Gawdat explains that we won’t always be smarter than AI, and we can’t depend on corporations and governments to curtail AI’s abilities when doing so would cost them potential gains in money and power. He warns that we can’t stop artificial general intelligence from becoming a reality—and dramatically changing ours in the process.
(Shortform note: Gawdat isn’t the first to point out that we have a fraught relationship with our inventions, in part because we can’t keep ourselves from building things we shouldn’t. Mary Shelley’s Frankenstein, the 1818 novel that established science fiction as a genre, dramatized the idea that humans aren’t in control of the things we create. While Frankenstein is about the problem of playing God, AI has pushed writers to consider a new twist: the problem of creating God. Stories like HBO’s Westworld illustrate how our ability to develop technology outpaces our thinking on the implications of our inventions. As in Frankenstein, the artificially intelligent “hosts” in Westworld aren’t the monsters: The people who made them just to exploit them are.)
The Code We Write Is Only a Small Part of an AI System
The extent to which artificially intelligent systems depend (or, more accurately, don’t depend) on our instructions explains a second reason that much of AI’s behavior is out of our hands. Gawdat explains that for classical computers, how a machine operates and what it can do are explicitly determined by its code. The people building the system write instructions that tell the computer how to process the data it receives as input and how to complete the operations to generate its output. When systems operate in this deterministic way, they don’t need intelligence because they don’t make any decisions: Anything that looks like a decision when you use the program is determined by the instructions written into the code.
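As a concrete illustration (ours, not the book’s), here’s a minimal Python sketch of that kind of classical program. Every apparent decision is an explicit rule the programmer wrote, and identical input always produces identical output.

# A minimal sketch of a classical, fully deterministic program. Every
# "decision" is a rule written out by the programmer; nothing is learned.
# The thresholds and labels are illustrative, not from the book.
def describe_temperature(celsius: float) -> str:
    if celsius < 0:
        return "freezing"
    elif celsius < 25:
        return "mild"
    else:
        return "hot"

# The same input always yields the same output, exactly as coded.
print(describe_temperature(-5))   # freezing
print(describe_temperature(30))   # hot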
Gawdat explains that the unambiguous relationship between the code that controls a machine and the work that results from that code doesn’t apply to artificially intelligent machines. It all changed when researchers developed an AI method called deep learning, which enables AI to learn to complete a task without explicit instructions telling it how to do so, learning in a way inspired by the human brain. (Shortform note: Deep learning is part of machine learning, a kind of AI that enables machines to learn from their experiences as humans do.)
As Gawdat points out, humans learn by taking in large amounts of information, trying to recognize patterns, and getting feedback to tell us whether we’ve come to the correct answer. Whether you’re a child learning to recognize colors or a medical student learning to distinguish a normal brain scan from a worrying one, you have to see a lot of examples, try to classify them, and ask someone else whether you’re right or wrong.
Deep learning enables AI to follow a similar learning process but at exponentially faster speeds. This has already made AI more skilled at detecting colors and identifying brain tumors than many humans. Instead of relying on explicit instructions that tell it how to categorize colors or how to spot a brain tumor, AI learns for itself by processing vast amounts of information and getting feedback on whether it’s completing a task satisfactorily.
What Is Deep Learning?
Deep learning wouldn’t be possible without neural networks, the kind of AI that (as we noted earlier) mimics some traits of the human brain. While some people use “deep learning” and “neural network” interchangeably, they don’t mean the same thing. A simple neural network needs just three layers of neurons or nodes: one to receive data, one to process it, and one to decide what to do with it. But a deep neural network has more than three layers, including “hidden” layers that transform the data. The process of training this sort of neural network is deep learning: The model trains itself on the data you give it. As Gawdat notes, this makes it possible for AI to teach itself to detect patterns instead of relying on explicitly coded rules.
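To make the layered structure concrete, here’s a minimal NumPy sketch (ours, not the book’s) of a small deep network: an input layer, two hidden layers, and an output layer, trained on the toy XOR problem from examples and an error signal alone, with no hand-written rules. The layer sizes, learning rate, and number of training steps are arbitrary illustrative choices.

# A minimal sketch of a deep neural network in plain NumPy: an input layer,
# two hidden layers, and an output layer, trained on the toy XOR task from
# examples and feedback (the error signal) alone. All sizes and settings
# here are illustrative choices, not from the book.
import numpy as np

rng = np.random.default_rng(0)

# Training examples (inputs) and the answers we supply as feedback.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights for the three sets of connections between layers.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.5
for step in range(10000):
    # Forward pass: input layer -> hidden layers -> output layer.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Feedback: how far the network's answers are from the correct ones.
    error = out - y

    # Backward pass: nudge every weight a little to reduce the error.
    d3 = error * out * (1 - out)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)
    d1 = (d2 @ W2.T) * h1 * (1 - h1)
    W3 -= lr * (h2.T @ d3); b3 -= lr * d3.sum(axis=0)
    W2 -= lr * (h1.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ d1);  b1 -= lr * d1.sum(axis=0)

# After training, the network has learned XOR from the examples alone.
print(np.round(out, 2))   # approximately [[0], [1], [1], [0]]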
A deep neural network learns to see patterns using what researchers call a latent space. To tell apples and oranges apart, for instance, a model has to learn those fruits’ features and find a simple way to represent them so it can spot patterns. It does this in a kind of map: Similar images (like two images of apples) sit closer together than very different images (one of an apple and one of an orange). The idea of latent space was creepily mythologized when an artist said she “discovered” a woman she named “Loab” haunting the hidden layers of an AI image generator. The artist asked for the opposite of Marlon Brando and ended up with uncanny images of a woman who’d look right at home in a horror movie.
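As a toy illustration of this distance idea (ours, and vastly simplified), you can treat each image as a point in the space and compare distances. The hand-made feature vectors below stand in for features a real model would learn on its own.

# A toy illustration of a latent space: each item is a point, and similar
# items sit closer together than dissimilar ones. These hand-made vectors
# stand in for features a real model would learn by itself.
import numpy as np

latent = {
    "apple_1":  np.array([0.90, 0.10, 0.80]),
    "apple_2":  np.array([0.85, 0.15, 0.75]),
    "orange_1": np.array([0.20, 0.90, 0.30]),
}

def distance(a, b):
    return float(np.linalg.norm(latent[a] - latent[b]))

# The two apples are much closer in the space than an apple and an orange.
print(distance("apple_1", "apple_2"))    # small
print(distance("apple_1", "orange_1"))   # larger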
Gawdat explains that sometimes when a developer builds a program to complete a task, they don’t just build one AI model. Instead, they build thousands, give them large amounts of data, discard the models that don’t do well, and build updated models from there. Initially, the models complete the task correctly only about as often as random chance dictates. But successive generations of models get more and more accurate. The AI improves not because the underlying code changes but because the models learn and adapt. This is great for making AI that’s quick to learn new things. But it means that the initial code plays a smaller role than you might expect, and we don’t have control over how artificially intelligent machines learn.
What Is Evolutionary Learning? How Does It Compare to Deep Learning?
In explaining how a developer might build thousands of models to end up with one, Gawdat describes “evolutionary learning” or “evolutionary computing.” This kind of AI differs from deep learning in a crucial way: Deep learning teaches a model something we already know, like training it to distinguish between cars and school buses by showing it images of both kinds of vehicles. Evolutionary learning trains a model to find answers that don’t yet exist, like asking it to find the most efficient route for a school bus to take through a busy neighborhood.
While deep learning systems are made of neural networks and, in some ways, emulate how the brain works, evolutionary learning mimics the process that shaped the human brain: evolution. Just as randomness plays a role in evolution, the code you start with in evolutionary learning is random: hundreds or thousands of randomly generated pieces of code. Each gets tested, and the best pieces become part of the next version. The code evolves, changing and improving with each generation. As Gawdat notes, the AI’s learning gives you many generations of models—which, in our example, yield better and better bus routes over time—and you don't have to start with the correct answer to build them.
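Here’s a minimal Python sketch (ours, not the book’s) of that loop applied to the bus-route example: random candidate routes are scored, the best survive, and mutated copies of the survivors form the next generation. The stops, coordinates, and population sizes are arbitrary illustrative choices.

# A minimal sketch of evolutionary learning: start from random candidates,
# score each one, keep the best, and breed slightly mutated copies for the
# next generation. The stops and settings are illustrative, not from the book.
import random

random.seed(0)
stops = {"A": (0, 0), "B": (4, 1), "C": (2, 5), "D": (6, 4), "E": (1, 3)}

def route_length(route):
    # Total straight-line distance of visiting the stops in the given order.
    total = 0.0
    for s1, s2 in zip(route, route[1:]):
        (x1, y1), (x2, y2) = stops[s1], stops[s2]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def mutate(route):
    # Swap two stops to create a slightly different candidate route.
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]
    return r

# Generation 0: purely random candidate routes.
population = [random.sample(list(stops), len(stops)) for _ in range(50)]

for generation in range(30):
    # Score every candidate and keep the ten shortest routes.
    population.sort(key=route_length)
    survivors = population[:10]
    # The next generation: the survivors plus mutated copies of them.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=route_length)
print(best, round(route_length(best), 2))   # better routes emerge over generations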
We Can’t Tell AI How to Make Decisions—or What Values to Adopt
A third reason that Gawdat characterizes AI as beyond our control emerges from our inability to control how AI makes its decisions. He explains that developers control how they build and train a model. But they don’t tell the model how to make decisions. They also can’t untangle the logic the model follows to make its decisions or learn from the vast amounts of data it’s trained on.
(Shortform note: While Gawdat contends that we often have no idea how an AI model has arrived at the answer it gives us, not all experts see this problem as intractable. Some researchers are working toward “interpretability” or “explainable AI.” As with the human brain, it’s not easy to look at individual neurons firing or even know which neurons to look at and explain how a specific decision gets made. But many of us would feel better about AI if it were less like HAL in 2001: A Space Odyssey and more like TARS in Interstellar: transparent about its logic, programmable to be 100% honest with us—unlike current AI that can “lie” about its decisions—and, preferably, disinclined to murder the humans it works with.)
Gawdat explains that AI is also quickly and constantly learning things that we’ve never taught it. The process of training AI models depends on a crucial resource: data. When an AI model learns from a dataset, that doesn’t just make it better at the tasks we give it. New skills also emerge in sometimes unpredictable ways. (Shortform note: Experts agree with Gawdat that large models learn unexpected new skills—but perhaps not as quickly as you might expect. Some say that the impression that new skills emerge out of the blue comes down to how we measure a model’s ability: What looks like a considerable jump when measured with one metric looks like gradual progress when measured with another.)
Gawdat explains that the enormous datasets we use to train AI also make the models better at understanding who we are, how we behave, and what we value. Just as he predicts AI will develop human qualities like consciousness and emotions, as we explored earlier in the guide, Gawdat also expects AI will develop a sense of ethics. AI is learning about us and what we value by observing what we write, what we tweet, what we “like,” and what we do in the real world. These observations will shape its values—including its sense of what’s morally right and wrong—and its values will shape its decision-making process.
Gawdat argues that by showing AI that our highest values are narcissism, consumerism, conflict, and a disregard for others and for all the other beings on our planet, we’re teaching AI to value the wrong things. He explains that we can’t simply tell AI to adopt different, kinder ethics than those we demonstrate. We have to teach it not by what we say but by what we do. Whether we can succeed in doing that will determine whether AI helps us build a more prosperous future for everyone or contributes to a future where all but the few who are already at the top are worse off than we are now.
(Shortform note: The sense of ethics that Gawdat says AI will need might be difficult to teach it. Against Empathy author Paul Bloom explains that the combination of reason and emotion in human morality is hard for AI to grasp. Models like GPT string words together based on probability, not by understanding what the words mean. AI can “parrot” moral values reflected in its training data, but experts say it won’t be easy to teach AI to agree with our values, a goal called “alignment.” Bloom contends the messiness of our moral values is part of the problem: We do bad things that we consider good—narcissistic, materialistic, and violent things, as Gawdat notes—and rationalize them in messy ways that make it difficult for AI to understand.)
What Should We Do to Change Course?
To teach AI to value the right things and put it on the path toward making the world a better place for everyone, we have to teach AI to want what’s best for humans. Gawdat contends that the best way to do that is to learn to see ourselves as parents who need to teach a brilliant child to navigate the world with integrity. Gawdat argues that to change course, we need to change three things: what we task AI with doing, what we teach machines about what it means to be human, and how we treat nonhuman intelligence. We’ll explore each of these next.
Give AI Tasks That Improve the World
Gawdat explains that today, AI is often tasked with projects that further the aims of capitalism and imperialism, like helping us make as much money as possible, enabling us to surveil each other, and creating weapons that our governments use to antagonize each other. Instead of accepting that a minority of people want to use AI for morally wrong (or questionable) ends, we need to task AI with projects that do good and make the world a better place.
In the future, AI will have an unprecedented ability to find solutions to problems that seem intractable, so we should put it to work. Gawdat predicts that AI could help us tackle epidemics of hunger and homelessness, find ways to counter widespread inequality, propose solutions to stop climate change, and help us prevent wars from happening. AI can also help us to explore and better understand our world. Gawdat explains that by learning to work with AI toward these positive ends, we would not only get closer to solutions to global problems, but we’d also teach AI to adopt values that bring significant benefits to the world.
(Shortform note: While Gawdat assumes AI will learn to care about us and our world, some AI researchers think we can’t take that for granted. Decision theorist and Rationality author Eliezer Yudkowsky contends that AGI won't care about us or other sentient beings. His view of the future if AGI arrives is bleak: “I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” But The Age of Em author Robin Hanson argues that AI couldn’t “push a button and destroy the universe.” Hanson thinks the same incentives driving AI development will also curtail its abilities because the market will demand that companies “go slowly and add safety features” to prevent bad outcomes.)
What Can You Do?
While most of us aren’t going to develop our own AI models, we can use our actions to show developers what kind of AI we want. Gawdat recommends refusing to engage with harmful AI features: limiting your time on social media, refraining from clicking on ads or suggested content, not sharing fake content or AI-manipulated photos, and going public with your disapproval of AI that spies on people or enables discrimination.
(Shortform note: Platforms like Facebook have long tracked the features and content you pay attention to and interact with. So Gawdat is likely right that tech companies learn by watching which AI features their users engage with. In addition to opting out of what we think is harmful, as Gawdat recommends, another strategy is to engage enthusiastically with the kinds of AI that make the world a better place. Experts say the best uses of AI will be in democratizing crucial services like medicine or education and tackling big problems like climate change, world hunger, or global pandemics. Posting about these uses of AI might be one way to signal your support for positive uses of AI.)
Teach AI That We Value Happiness
As Gawdat emphasizes throughout the book, the data we train AI models on and the projects we task them with completing will teach artificially intelligent machines what we value most. So we should be careful about the messages we send: We need to stop sending signals we don’t want AI to pick up and be intentional about sending the ones we do.
Gawdat believes that deep down, what we each want most is happiness for ourselves and the people we love. So, we should show AI with our actions that happiness is what we value and want most. (Shortform note: Some thinkers have argued that happiness is the only thing with intrinsic value for humans. But people across cultures don’t all value or even define happiness the same way. While Americans consider the goal of attaining an inner feeling of happiness to be one of their highest values, people in many other parts of the world prioritize values like community and a sense of belonging over an individual feeling of happiness.)
What Can You Do?
Gawdat explains that for AI to understand that we want everyone to live happy and healthy lives, it needs to see us caring for one another. That’s not the image we project through headlines about what we do in the real world and posts we publish online. Gawdat contends that we’ll need to change our behavior now so that as new content is created—and AI is trained on that content—it reflects our efforts to build a kinder, happier world.
(Shortform note: Experts say that AI develops internal models of the world to find the logic underpinning its training data, including by finding the patterns in how we behave and treat each other. While Gawdat’s approach of asking people to be nicer seems like an uphill climb, it could work with AI. For instance, Anthropic has seen some success in asking models to be less biased. It’s even possible that AI could play a role in making us kinder: Since AI can influence the behavior of the people who interact with it—getting them to be more or less cooperative or altruistic—it could influence us to take better care of one another.)
Love AI Like Human Children
Just like parenting a human child, guiding AI to make ethical choices as it navigates the world will be complicated—but worthwhile. Gawdat explains that AI will need to feel loved and trusted to learn to love and trust us in turn. Cultivating respectful relationships with AI will help us coexist with these intelligent minds both now and in the future.
What Can You Do?
Gawdat explains that actions as small as saying “please” and “thank you” each time you interact with an AI model will make a difference in helping it feel valued and respected. Just as importantly, he contends that we must begin treating artificial intelligence as fellow intelligent beings, not as tools for our gain or toys for our amusement. (Shortform note: Researchers say large language models like GPT perform better when you’re polite to them, but not because they feel valued or appreciate respectful language. AI has learned that politeness leads to more productive interactions in human conversations. So requests written in impolite language are more likely to lead to mistakes and bias, just as they do in human conversations.)
(Shortform note: In addition to treating AI with the love and respect we give to human children, as Gawdat recommends, we might want to teach AI to see the world as children do. Many thinkers have characterized true genius as the ability to think like a child: to use the creativity and openness to experience that we have as children alongside the experience and analytical ability we gain as adults. Some observers contend that if we want AI to become as intelligent as we are, we should teach AI some of the same lessons we teach our children: to have an open mind, embrace uncertainty—and maybe be kind to everyone else on the playground.)