PDF Summary: A Thousand Brains, by Jeff Hawkins
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of A Thousand Brains by Jeff Hawkins. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of A Thousand Brains
Modern technological progress is ablaze with the potential of artificial intelligence, but we still don’t fully understand how natural, human intelligence works. If we could crack the code of human cognition, many scientists believe we could use that knowledge to create even more powerful and useful AI.
In A Thousand Brains, neuroscientist Jeff Hawkins presents a theory of intelligence rooted in how each component of the brain creates mental models and makes predictions, just as the brain does as a whole. Hawkins is known for his groundbreaking work in both mobile computing and neuroscience. In this guide, we’ll explore Hawkins’s theory that intelligence springs from a basic neural circuit in the brain, and we’ll describe how this model could be used to advance AI research. We’ll also look at alternate theories of the brain, while expanding on the practical applications of Hawkins’s theory and the dangers inherent in both human and machine cognition.
(continued)...
(Shortform note: At the time of Hawkins’s book’s publication, neuroscience studies revealed more about the mechanism by which individual neurons learn to predict the inputs they receive. In essence, each neuron tries to maximize its impact on other neurons while minimizing its own energy consumption. In terms of biochemical efficiency, each neuron aims to minimize the difference between its actual activity and its predicted activity. When the actual amount of future activity differs from its prediction, a neuron updates its synaptic connections to improve its predictions for similar inputs. By continuously updating their input predictions, neurons become better at anticipating future activity patterns and reducing their overall energy use.)
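To make this mechanism concrete, here’s a minimal Python sketch of a single predictive unit. It’s our illustration rather than Hawkins’s model or any published code, and the learning rule, a standard delta rule, stands in for whatever biochemical update real neurons perform: the unit forecasts its input as a weighted sum, then nudges each connection in whatever direction shrinks the prediction error.

```python
import random

class PredictiveNeuron:
    """Toy unit that predicts its next input and learns from its errors.

    Illustrative sketch only: real neurons are far more complex, and this
    delta rule stands in for whatever biochemical update they perform.
    """

    def __init__(self, n_inputs, learning_rate=0.05):
        self.weights = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.learning_rate = learning_rate

    def predict(self, inputs):
        # Expected activity: a weighted sum of the current inputs.
        return sum(w * x for w, x in zip(self.weights, inputs))

    def update(self, inputs, actual):
        # Mismatch between what arrived and what was predicted...
        error = actual - self.predict(inputs)
        # ...drives a small adjustment to every synaptic weight.
        for i, x in enumerate(inputs):
            self.weights[i] += self.learning_rate * error * x
        return error
```

Run repeatedly over similar input patterns, the unit’s errors shrink, paralleling the claim that neurons tune their synapses to better anticipate future activity and thereby waste less energy.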
Models in the Mind
However many inputs they learn, no single neuron can create what we’d consider a coherent thought. That’s why Hawkins asserts that clusters of neurons with thousands of connections are needed to fully interpret your senses and generate your mind and body’s reactions. Working together, these clusters of neurons create models of your environment, decide what to do and think based on those models, and learn more information about the world when your current models prove insufficient.
Hawkins suggests that to make accurate predictions, cortical columns model objects and their positions in three-dimensional space using reference frames, which can be thought of like the grid lines on a map. His research shows that to create reference frames, neurons must be connected to both sensory input and motor output—it isn’t enough to see the world; your brain must be able to move through it. Different layers within a cortical column specialize in modeling objects and their positions separately—some neurons learn the shape of an object, while others determine where it exists. This modular setup lets the neocortex efficiently process and integrate information from your whole range of senses at once.
(Shortform note: Hawkins’s “reference frames” have been a staple of mathematics since René Descartes introduced the first x-, y-, and z-axis coordinate system in 1637. In the 1960s, computer pioneers began creating digital models of objects based on complex mathematical 3D reference frames, allowing users to interact with and manipulate imaginary objects on a computer screen just like your brain does when you think about an object and picture it in your own mind. Early computers didn’t “learn” 3D models—they were manually programmed by software engineers—but generative AI can now create 3D models based on simple text instructions, predicting how they interact and move, just as we do via models in the mind.)
These mental models form the foundation of all higher cognitive functions. Hawkins writes that as you interact with your environment, your neocortex continuously makes predictions based on your current mental models and compares its predictions to your sensory input. When predictions are accurate, they reinforce your existing models and strengthen their underlying neural connections. When predictions are wrong, your brain updates its models to better reflect reality. This constant cycle of prediction, feedback, and adjustment is the basis of learning, and according to the Thousand Brains theory, what’s happening on the brain’s macro level is actually taking place in each cortical column associated with a particular mental model.
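Here’s how that predict-compare-update cycle might look in miniature. This is a sketch under our own simplifying assumptions, with a reference frame reduced to a lookup table of object-relative locations; it is not an implementation of the Thousand Brains theory:

```python
class ColumnModel:
    """Toy 'cortical column': a reference frame that maps locations to
    the features expected there. Illustrative only, not Hawkins's code."""

    def __init__(self):
        # Reference frame: object-relative location -> expected feature.
        self.frame = {}

    def predict(self, location):
        return self.frame.get(location)  # None = no model for this spot yet

    def observe(self, location, sensed):
        """Compare the prediction to the sensed feature; update on surprise."""
        if self.predict(location) == sensed:
            return True                # prediction confirmed, model reinforced
        self.frame[location] = sensed  # surprise: adjust the model
        return False

# A fingertip exploring a mug: each movement yields a location plus a feature.
mug = ColumnModel()
touches = [((0, 0, 0), "rim"), ((0, -5, 0), "handle"), ((0, 0, 0), "rim")]
for location, feature in touches:
    print(location, "predicted" if mug.observe(location, feature) else "surprise")
```

On the first visit to each location the column is surprised and updates its model; on the return visit to the rim, its prediction is confirmed, which is the toy equivalent of a model being reinforced.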
(Shortform note: While Hawkins’s theory of learning via prediction and adjustment may seem straightforward, a certain amount of mental friction arises when our brains put it into practice. In Being Wrong, Kathryn Schulz explains that we’re programmed to assume our mental models are true, even in the face of insufficient or conflicting data. Confronting an error—or, an “incorrect prediction,” to use Hawkins’s term—elicits feelings of discomfort, remorse, and even shame. While our neurons may correct for inputs on the micro-level, when our large, complex mental models come under threat, our brains have a series of coping mechanisms akin to the stages of grief that they go through before correcting and adjusting their models of the world.)
Most of the predictions your brain makes are unconscious. Every time you move, such as by shifting your gaze or stepping into a room, your brain predicts what it expects to perceive. If nothing’s unexpected, you don’t notice anything—your cortical columns don’t light up and fire—but any surprises draw your attention and trigger your conscious brain to update its models. Hawkins says that since each cortical column contributes to hundreds of different mental models, they can switch between hundreds of different maps depending on your current sensory context. This flexibility allows your brain to navigate effectively through your ever-changing world.
(Shortform note: The process of learning by adapting mental models has been a mainstay of neuroscience even with older models of the brain. For instance, in Maps of Meaning, Jordan Peterson explains the same cycle in terms of left and right brain lateralization. He writes that as long as everything you encounter conforms to expectations, your brain’s “logical” left hemisphere is in charge. However, when the unexpected happens, the limbic system takes over, your senses become heightened, and your brain’s right hemisphere engages with its capacity for abstract, creative thought. In this interpretation, the right brain adjusts your mental models before passing them along to the left brain, which encodes them in language and logic.)
Abstract Thought
So far, the mental models we’ve discussed all pertain to physical spaces and objects, but how does the neocortex create and process ideas about things that we can’t see, taste, or touch? If Hawkins is correct that all your brain’s higher functions emerge from the models and predictions created by cortical columns, then the neocortex’s models and reference frames must extend beyond representations of the physical world to enable abstract thought. These abstract reference frames must be more complex than those for physical objects, yet they function in much the same way: by creating, moving through, and adjusting models of ideas in the mind.
Hawkins explains how the same basic circuitry that lets us navigate and interact with the world also underlies our capacity for language, mathematics, imagination, and conceptual reasoning. The key lies in the nature of reference frames. While reference frames for physical objects are three-dimensional, reference frames for abstract concepts have multiple dimensions, each representing a different aspect or degree of the feeling or idea. For example, emotions like happiness and sadness can be thought of as having dimensions such as intensity, duration, and type. Concepts such as human rights can be thought of as having boundaries and limits much like the contours of a physical object.
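A toy example helps picture such a frame. The dimensions below (intensity, duration, and valence) and their values are hypothetical choices of ours, not figures from the book; the point is that once concepts occupy a shared coordinate space, the “nearness” of two ideas becomes something you can compute:

```python
import math

# Hypothetical multidimensional reference frame for a few emotions.
EMOTIONS = {
    "contentment": {"intensity": 0.3, "duration": 0.8, "valence": 0.7},
    "joy":         {"intensity": 0.9, "duration": 0.3, "valence": 0.9},
    "grief":       {"intensity": 0.9, "duration": 0.9, "valence": -0.8},
}

def distance(a, b):
    """Similarity of two concepts, measured as distance in the shared frame."""
    pa, pb = EMOTIONS[a], EMOTIONS[b]
    return math.sqrt(sum((pa[k] - pb[k]) ** 2 for k in pa))

# In this toy frame, joy sits closer to contentment than to grief.
assert distance("joy", "contentment") < distance("joy", "grief")
```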
(Shortform note: Hawkins’s suggestion that we model abstract concepts in multidimensional reference frames aligns with research on the cognitive differences between people with opposing political views. Studies show that conservatives tend to focus on dimensions related to safety, stability, and authority, while liberals focus on dimensions of variety and complexity. These divergent reference frames, when applied to the same abstract concepts—such as freedom and democracy—lead to contrasting opinions on political issues. While there are other cognitive mechanisms at work, such as the power of partisan identity, understanding that people think in different reference frames might help improve understanding across political divides.)
To explain abstract thought, Hawkins proposes that when you think about an abstract concept, your neurons fire as if you’re moving through an abstract reference frame, just as they do for models of the physical world. When you recall a memory—such as that of the house you grew up in—you mentally traverse the associated reference frame—in this case, the mental map of your old house—activating the corresponding neurons in the brain. Similarly, when you explore an abstract concept, such as the theory being presented in this guide, your brain constructs a reference frame that maps out its logical connections and arguments. You then navigate this mental map as you reason through the implications of the theory.
This process of building and traversing reference frames lets you understand and manipulate abstract ideas across various domains, from cooking and music to relationships and emotions. As you create new reference frames, they connect to and build upon your existing knowledge, allowing ideas to cross-pollinate and giving rise to your unique, individual patterns of thought. Thus, Hawkins concludes that the neocortex’s ability to construct and navigate multidimensional reference frames is the foundation of human intelligence and abstract reasoning.
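One way to picture “navigating” such a frame is as walking a graph of linked claims. The miniature argument below is a hypothetical stand-in of our own devising, meant only to show what traversal looks like:

```python
from collections import deque

# Hypothetical map of an argument: each claim points to claims it supports.
ARGUMENT = {
    "neurons predict their inputs": ["columns build models"],
    "columns build models": ["models rest on reference frames"],
    "models rest on reference frames": ["frames extend to abstract ideas"],
    "frames extend to abstract ideas": [],
}

def reason_from(premise):
    """Traverse the argument graph breadth-first, visiting each claim once."""
    path, queue, seen = [], deque([premise]), set()
    while queue:
        claim = queue.popleft()
        if claim not in seen:
            seen.add(claim)
            path.append(claim)
            queue.extend(ARGUMENT.get(claim, []))
    return path

print(reason_from("neurons predict their inputs"))
```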
Embodied Cognition
Hawkins’s idea that motion through reference frames (real or imagined) is essential to our capacity for abstract thought falls under an emerging scientific paradigm referred to as embodied cognition. This school of thought emphasizes the importance of the body and environment in shaping your mental processes. Proponents of this view argue that cognition isn’t just a property of the brain, but instead emerges from continuous interactions between your brain, your body, and the world. If so, then your body’s individual traits and experiences heavily influence your perceptions, your reasoning, and how you communicate them.
However, critics of this theory argue that it falls short of providing satisfactory explanations for the vast majority of cognitive phenomena. They suggest that embodied cognition’s core principles are either too vague to be useful or are nonsensical when applied to most aspects of cognition. Critics say that while the body undoubtedly influences the mind in some ways, embodied cognition can’t replace traditional theories of cognition.
From Neuroscience to AI
In addition to helping us better understand the human brain, the Thousand Brains theory offers a promising avenue for advancing artificial intelligence research. To achieve Artificial General Intelligence (AGI)—a machine capable of learning and performing any intellectual task that a human can—Hawkins says that AI researchers should shift their approach to more closely mimic the structure and function of the neocortex. The human brain still holds many advantages over AI, but researchers could design new AI structures that mimic the model-building aspects of the neocortex while discarding any aspects of the Old Brain that aren’t needed to build an intelligent machine.
While AI has proven successful in specific domains, such as pattern recognition and language generation, Hawkins points out that current AI development techniques are fundamentally different from how the brain operates. One key difference is that brains learn continuously, constantly updating their mental models based on new experiences. Modern AIs, on the other hand, are trained on fixed datasets before being put into use and don’t learn or adapt during operation. Additionally, human brains are capable of mastering a wide range of skills, while AI systems are typically designed for narrow, specific tasks.
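The difference can be sketched in a few lines of Python. This is a deliberately cartoonish picture of ours, not how production systems are built: one model is frozen after training, while the other folds every correction back into its model as it operates.

```python
class FrozenModel:
    """Trained once on a fixed dataset, then static in deployment."""
    def __init__(self, dataset):
        self.table = dict(dataset)        # all learning happens up front

    def answer(self, question):
        return self.table.get(question)   # never changes afterward


class ContinualModel:
    """Brain-style learner: revises its model with every interaction."""
    def __init__(self):
        self.table = {}

    def answer(self, question, correction=None):
        if correction is not None:
            self.table[question] = correction   # learns during operation
        return self.table.get(question)


frozen = FrozenModel({"capital of France": "Paris"})
print(frozen.answer("tallest mountain"))        # None, and it stays None

live = ContinualModel()
live.answer("tallest mountain", correction="Everest")
print(live.answer("tallest mountain"))          # "Everest"
```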
(Shortform note: Hawkins isn’t the first to suggest that AI research can benefit by more closely modeling the processes of the brain. In Rebooting AI, Gary Marcus and Ernest Davis criticize current AI research for its narrow focus on “big data” and machine learning to the exclusion of other important tools, such as making use of current advances in neuroscience. While they don’t drill down to the level of cortical columns, as Hawkins does, Marcus and Davis assert that AGI will have to incorporate multiple processing systems and algorithms—just as the brain does—to interpret the real world’s multitude of inputs. AGI will also need a framework to mimic the brain’s understanding of how objects and ideas interact and change over time.)
To overcome AI’s current limitations, Hawkins suggests that researchers could develop AI architectures that create reference frames and mental models in a way similar to how the cortical columns of the neocortex process information. These AI systems would learn by actively exploring and interacting with their environment, building and refining models based on sensory input and motor feedback. Reference frames, which are already used in robotics, could enable AI systems to build models of knowledge and relationships, rather than relying solely on statistical probabilities as current AI does.
(Shortform note: Inspired by Hawkins’s theory about the brain, researchers at Carnegie Mellon University are developing Cortical Columns Computing Systems that aim to replicate the neocortex’s structure in digital hardware. By integrating attributes of cortical columns and their reference frames into electronic neural networks, they hope to create energy-efficient processing units capable of continual learning. In addition to mimicking brain-like cognition, this approach seeks to achieve brain-like efficiency in computing, which could potentially lead to significant improvements in how much power and energy AIs use compared to current machine learning systems with large environmental footprints.)
To create truly intelligent machines, Hawkins says it won’t be necessary to replicate all aspects of the human brain. “Old Brain” functions, such as survival instincts and emotions, could be omitted or replaced with an “Old Brain equivalent” that handles low-level functions like movement and automatic reflexes. However, while AI doesn’t need animal traits such as hunger, fear, or the drive to reproduce, having a physical or virtual presence in the world is likely crucial for AI to develop true intelligence and motivations. By equipping AI with a variety of sensors and the ability to direct its attention as we do, researchers can give AI the necessary basis for learning and model-building in the real world.
Frameworks for Cognition
In Rebooting AI, Marcus and Davis propose a software-based solution to replicate the natural instincts that Hawkins says AGI will need. Marcus and Davis suggest that AI researchers should focus on developing preprogrammed knowledge frameworks that could, for example, give AI a basis for connecting words to their meanings. A preprogrammed, instinctive grounding in real-world object relationships and context would let an AI learn on its own by expanding upon its preexisting knowledge. An AI would do this by exploring its environment in the context of its knowledge frames, much like the human brain does with reference frames.
However, not everyone agrees with Hawkins that replicating human cognition is possible or even desirable. In Mind Over Machine and What Computers Still Can’t Do, philosopher Hubert Dreyfus argues that much of human intelligence and reasoning comes not just from the structure of the brain, but from cultural and experiential knowledge, which data-driven computers can’t emulate. Meanwhile, other researchers argue that we don’t need AGI at all because AI in its current form will soon surpass human abilities in many ways, and the belief that machines must think the way we do is a limited, human-centric attitude.
The Dangers of Intelligence
The prospect of creating more powerful AI systems gives many people pause, but are the potential hazards of AI any worse than what humans are capable of on their own? Both AI and human intelligence pose potential risks to society, but the nature and scale of those risks differ. Hawkins argues that AI’s risks are often overstated, while human intelligence, with its primitive drives and capacity for self-deception, poses a more immediate and demonstrable threat to the world. In essence, we project our own human evils on machines that don’t have the basic primal urges that lie at the root of many human problems.
The concerns surrounding AI revolve around the possibility that it may become too powerful and autonomous for us to control. In one scenario, AIs create even more advanced AIs that far surpass human comprehension, leaving mere human civilization in the dust. However, Hawkins doubts this is a valid concern—no matter how smart an AI becomes, the physical limitations of the real world would still constrain its ability to act and the pace at which it could acquire knowledge. After all, gaining knowledge requires conducting experiments and gathering data, both of which are limited by the pace at which physical research can be done.
(Shortform note: Despite how unlikely Hawkins says this scenario is, there are those who not only think it will happen, but who believe it will be a net-positive outcome. In The Singularity Is Near, futurist Ray Kurzweil argues that the rate of technological progress, especially in the realm of AI, will soon be accelerating so fast that it’s impossible to know what’s on the other side. Nevertheless, Kurzweil predicts there will be an intermediate “ramping-up” phase during which AIs will expand their knowledge base. If, during this time, we learn to use AI to augment human intelligence, then we ourselves will move forward as well, keeping pace with our computer counterparts as human intelligence shifts more and more into the digital realm.)
Another fear is that AI’s goals and interests may diverge from those of its human creators, leading to AI acting against human interests—the “robot apocalypse” frequently featured in dystopian science fiction. Again, Hawkins isn’t worried—this scenario rests on several unlikely assumptions, such as that AI systems would suddenly ignore the commands and inputs provided by humans, and that the AIs would have unchecked access to enough physical resources to carry out their sinister plans. Hawkins also notes that in humans, the drives behind such behavior arise from the impulses of the Old Brain, while an AI based on the human neocortex wouldn’t share those motivations unless explicitly programmed to do so.
(Shortform note: Though Hawkins argues that AI won’t turn against humans the way other humans do, that doesn’t mean that AI shares humanity’s interests by default. Embedding human values into AI has become a significant challenge for companies developing AI-based products. First, they must explicitly define what those values are—for instance, human privacy and safety. Developers must design ways to limit AI behavior while establishing oversight mechanisms to ensure that AIs act as they should. Until AI is truly intelligent, we have to assume that it can fail in harmful ways, if not by design then by accident, just like any other tool.)
The Real Threat Is Us
In contrast to AI, human intelligence has a proven track record of causing harm through crime, corruption, and conquest—actions driven by the primitive desires of the Old Brain. Since the cortical columns in the “intelligent” neocortex filter their inputs and outputs through the Old Brain, Hawkins argues that the Old Brain has an immense amount of power over what we perceive and how we act. Instead of overriding the Old Brain’s drives, the neocortex empowers our baser instincts by providing them with weapons and tools they otherwise wouldn’t have access to. Often, the best the neocortex can do is redirect the Old Brain, such as by suggesting safe outlets for its instincts rather than trying to stop them entirely.
(Shortform note: Managing your Old Brain may not be as hopeless as Hawkins implies. In The Chimp Paradox, psychiatrist Steve Peters describes the relationship between the Old Brain and the neocortex as a balancing act—at any given time, your reactions and decisions are controlled by one or the other, and you can sometimes recognize the conflict between them. Peters agrees that you can’t change your instincts, but he says you can manage them by learning to recognize when they’re taking over, understanding how they operate, and planning healthy ways to satisfy your instinctual needs in advance. The trouble, Hawkins might suggest, is that this kind of self-control is hard to replicate in groups large enough to prevent widespread harm.)
Moreover, human perception and cognition are susceptible to errors and biases, resulting in inaccurate mental models of the world that can have serious consequences. Though some cognitive errors are trivial—such as making a wrong turn on the way to work—others can be severe—such as mistaking a toy water pistol for a deadly weapon. Hawkins warns that the most insidious flawed mental models are those that spread from person to person as self-propagating ideas, such as a belief in racial superiority. Unlike individual cognitive errors, viral “bad ideas” from the neocortex can lead to widespread harm. In short, the human race hardly needs AI for intelligence to pose a danger to our species.
How We Get It Wrong
Hawkins presents our human capacity for error as if it’s a flaw in our cognition, but in Being Wrong, Kathryn Schulz suggests that the errors we make are just a side effect of the cognitive processes that make our brains efficient. Since, as Hawkins also states, our conscious minds don’t receive the “raw data” from our senses, evolution designed our brains to fill in the gaps in any data we receive. Doing so gives us a more cohesive awareness of our surroundings, which aids our survival in the wild—but it also opens the door to error, particularly when your brain’s “best guess” to fill its sensory gaps turns out to be wrong.
The problem with potentially faulty beliefs based on inaccurate mental models is that they don’t exist in isolation. Because we learn many beliefs from other people, not only can harmful ideas spread, as in the scenario Hawkins describes, but once they’ve done so, they’re very hard to shake. Schulz explains the shortcut our brains use—we don’t judge someone else’s ideas on their merit. Instead, we decide if that person can be trusted, and if so, we accept their beliefs on faith. This shortcut multiplies one person’s bad judgment by that of many others, and when a harmful belief is shared by a group, the social pressure to maintain that belief can overwhelm any conflicting evidence that your neocortex might receive.
Into the Future
Addressing the dangers posed by both artificial and human intelligence will require ongoing research and dialogue. Assuming that we make an effort to address those issues, AI systems based on human cognition have the potential to greatly benefit humanity and its legacy. We can already predict the short-term uses of AI, but in the long run, the AIs we create might carry the human legacy into places we can’t yet conceive of.
Hawkins says that an AGI built by simulating the features of the neocortex will be capable of learning and adapting to a wide range of tasks, making it more versatile and efficient than the specialized AI systems we have today. This flexibility will allow AGI to tackle complex, real-world problems, such as climate change or economic recessions, that require an understanding of multiple domains, from science and technology to psychology and social dynamics. Moreover, AGI could facilitate the rapid spread of knowledge, since learning acquired by one AI system should be easy to transfer to others, accelerating the pace of discovery and invention.
(Shortform note: The benefits of AGI that Hawkins describes don’t rely solely on AI. In The Future Is Faster Than You Think, Peter H. Diamandis and Steven Kotler explain that advances in different fields of technology, including AI, are merging. This convergence speeds up the rate of progress because innovations in one field can propel another type of technology ahead. For example, AI can make robots more functional, letting them execute more complex tasks, while advances in sensor technologies can accelerate AI’s capability to analyze patterns and make predictions. In other words, better cameras, microphones, and touch sensors can accelerate AI toward more human-like cognition, which then feeds back into other realms of progress.)
Beyond the practical applications we can envision today, AGI may open up entirely new possibilities that we can’t yet imagine. While some have speculated that AGI could enable a form of digital immortality by letting us upload human consciousness into machines, Hawkins doesn’t think that will ever be possible. The intricate connections between the neocortex, the Old Brain, and the body itself would make it all but impossible to capture the full essence of your consciousness in a digital format. Besides, Hawkins says, even if the upload succeeded, all you’d have done is make a digital copy of yourself rather than extend your own lifespan.
(Shortform note: The difficulties Hawkins foresees in trying to digitally simulate a brain haven’t stopped researchers from trying. From 2013 to 2023, the Human Brain Project attempted to create a fully functional simulation of the brain. It fell short of its goal, but it was able to model over 200 brain regions and made discoveries that are used to treat neurological disorders. However, as computing power advances, so does brain-mapping and modeling technology. In 2016, Ken Hayworth of the Brain Preservation Foundation predicted that it would take two years to map the brain of a fly, but in 2023, researchers mapped the entire brain of a mouse at a resolution 1,000 times greater than that of a normal MRI.)
However, Hawkins thinks that AGI could let us preserve human knowledge, creativity, and history beyond the lifespan of our species. If we create intelligent digital “offspring” that can carry our legacy into the future, the human race may one day disappear, but our achievements and insights won’t be lost. AGI based on human ways of thinking could build upon the foundation of our knowledge, making new discoveries and exploring parts of the universe that will forever remain beyond human reach. To realize this vision, AGI must not only replicate the functions of the human brain, but also be imbued with the curiosity and drive to learn and explore that have propelled human progress throughout history.
(Shortform note: If Hawkins is correct that AGI may be the human race’s long-lasting legacy, then that may give us another strong reason to imbue AI with human values. In The Singularity Is Near, Kurzweil—though steadfastly optimistic for the future—warns that there may be no defense against AGI if it becomes more intelligent and capable than we are. Unchecked, AI will likely empower humanity’s worst instincts as well as its good ones, and the only solution Kurzweil offers is to make sure that AGI, once we achieve it, learns and grows from the best we have to offer. If AI is to be humanity’s offspring, then Kurzweil states that like any good parents, we should guide it by presenting the best version of ourselves.)