What’s the future of artificial intelligence? How can our understanding of the human brain revolutionize AI?
According to Jeff Hawkins, AI development based on his Thousand Brains Theory could lead to exciting advancements. He proposes that AI systems should mimic the structure and function of the brain’s neocortex.
Read on to explore how Hawkins’s ideas might influence the future of AI.
From Neuroscience to AI
In addition to helping us better understand the human brain, the Thousand Brains Theory offers a promising avenue for advancing artificial intelligence research. According to Jeff Hawkins, AI researchers should shift their approach to more closely mimic the structure and function of the neocortex if we are to achieve Artificial General Intelligence (AGI)—a machine capable of learning and performing any intellectual task that a human can. The human brain still holds many advantages over AI, but researchers could design new AI architectures that mimic the model-building aspects of the neocortex while discarding any aspects of the Old Brain that aren’t needed to build an intelligent machine.
While AI has proven successful in specific domains, such as pattern recognition and language generation, Hawkins points out that current AI development techniques are fundamentally different from how the brain operates. One key difference is that brains learn continuously, constantly updating their mental models based on new experiences. Modern AIs, on the other hand, are trained on fixed datasets before being put into use and don’t learn or adapt during operation. Additionally, human brains can master a wide range of skills, while AI systems are typically designed for narrow, specific tasks.
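To make that contrast concrete, here’s a minimal, purely illustrative Python sketch (ours, not Hawkins’s); the class names and toy data are hypothetical stand-ins for real training pipelines:

```python
class FixedModel:
    """Trained once on a frozen dataset; never updated after deployment."""
    def __init__(self):
        self.memory = {}

    def train(self, dataset):
        # One-time training pass over a fixed dataset.
        for example, label in dataset:
            self.memory[example] = label

    def predict(self, example):
        return self.memory.get(example, "unknown")


class ContinualModel(FixedModel):
    """Also keeps updating its internal model from every new observation."""
    def observe(self, example, label):
        # Learning continues during operation, much as brains keep learning.
        self.memory[example] = label


fixed = FixedModel()
continual = ContinualModel()
for model in (fixed, continual):
    model.train([("cat", "animal"), ("oak", "plant")])

continual.observe("quartz", "mineral")   # adapts to a new experience
print(fixed.predict("quartz"))           # "unknown": it can never learn this
print(continual.predict("quartz"))       # "mineral"
```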
(Shortform note: Hawkins isn’t the first to suggest that AI research can benefit by more closely modeling the processes of the brain. In Rebooting AI, Gary Marcus and Ernest Davis criticize current AI research for its narrow focus on “big data” and machine learning to the exclusion of other important tools, such as making use of current advances in neuroscience. While they don’t drill down to the level of cortical columns, as Hawkins does, Marcus and Davis assert that AGI will have to incorporate multiple processing systems and algorithms—just as the brain does—to interpret the real world’s multitude of inputs. AGI will also need a framework to mimic the brain’s understanding of how objects and ideas interact and change over time.)
To overcome AI’s current limitations, Hawkins suggests that researchers could develop AI architectures that create reference frames and mental models in a way similar to how the cortical columns of the neocortex process information. These AI systems would learn by actively exploring and interacting with their environment, building and refining models based on sensory input and motor feedback. Reference frames, which are already used in robotics, could enable AI systems to build structured models of knowledge and relationships, rather than relying solely on statistical probabilities as current AI approaches do.
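As a rough illustration of what model-building with reference frames might look like, here’s a toy Python sketch (our own hypothetical example, not an actual Thousand Brains implementation): each object model stores which feature is expected at which location, and recognition works by voting across observations rather than by statistical pattern-matching alone.

```python
# Toy "reference frames": object -> {location (x, y): expected feature}
models = {
    "coffee_mug": {(0, 0): "flat bottom", (0, 5): "rim", (3, 2): "handle"},
    "bowl":       {(0, 0): "flat bottom", (0, 3): "rim"},
}

def recognize(observations):
    """Vote for the object whose reference frame best matches the sensed features."""
    votes = {name: 0 for name in models}
    for location, feature in observations:
        for name, frame in models.items():
            if frame.get(location) == feature:
                votes[name] += 1
    return max(votes, key=votes.get)

# Sensing a handle at (3, 2) and a rim at (0, 5) points to the mug.
print(recognize([((3, 2), "handle"), ((0, 5), "rim")]))  # coffee_mug
```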
(Shortform note: Inspired by Hawkins’s theory about the brain, researchers at Carnegie Mellon University are developing Cortical Columns Computing Systems that aim to replicate the neocortex’s structure in digital hardware. By integrating attributes of cortical columns and their reference frames into electronic neural networks, they hope to create energy-efficient processing units capable of continual learning. In addition to mimicking brain-like cognition, this approach seeks to achieve brain-like efficiency in computing, potentially using far less power and energy than current machine learning systems with their large environmental footprints.)
To create truly intelligent machines, Hawkins says it won’t be necessary to replicate all aspects of the human brain. “Old Brain” functions, such as survival instincts and emotions, could be omitted or replaced with an “Old Brain equivalent” that handles low-level functions like movement and automatic reflexes. However, while AI doesn’t need animal traits such as hunger, fear, or the drive to reproduce, having a physical or virtual presence in the world is likely crucial for AI to develop true intelligence and motivations. By equipping AI with a variety of sensors and the ability to direct its attention as we do, researchers can give AI the necessary basis for learning and model-building in the real world.
Frameworks for Cognition
In Rebooting AI, Marcus and Davis propose a software-based solution to replicate the natural instincts that Hawkins says AGI will need. Marcus and Davis suggest that AI researchers should focus on developing preprogrammed knowledge frameworks that could, for example, give AI a basis for connecting words to their meanings. A preprogrammed, instinctive grounding in real-world object relationships and context would let an AI learn on its own by expanding upon its preexisting knowledge. An AI would do this by exploring its environment in the context of its knowledge frames, much like the human brain does with reference frames.
However, not everyone agrees with Hawkins that replicating human cognition is possible or even desirable. In Mind Over Machine and What Computers Still Can’t Do, philosopher Hubert Dreyfus argues that much of human intelligence and reasoning comes not just from the structure of the brain, but from cultural and experiential knowledge, which data-driven computers can’t emulate. Meanwhile, other researchers argue that we don’t need AGI at all because AI in its current form will soon surpass human abilities in many ways, and the belief that machines must think the way we do is a limited, human-centric attitude.
The Dangers of Intelligence
The prospect of creating more powerful AI systems gives many people pause, but are the potential hazards of AI any worse than what humans are capable of on their own? Both AI and human intelligence pose potential risks to society, but the nature and scale of those risks differ. Hawkins argues that AI’s risks are often overstated, while human intelligence, with its primitive drives and capacity for self-deception, poses a more immediate and demonstrable threat to the world. In essence, we project our own human evils onto machines that don’t have the basic primal urges that lie at the root of many human problems.
The concerns surrounding AI revolve around the possibility that it may become too powerful and autonomous for us to control. In one scenario, AIs create even more advanced AIs that far surpass human comprehension, leaving mere human civilization in the dust. However, Hawkins doubts this is a valid concern—no matter how smart an AI becomes, the physical limitations of the real world would still constrain its ability to act and the pace at which it could acquire knowledge. After all, gaining knowledge requires conducting experiments and gathering data, both of which are limited by the pace at which physical research can be done.
(Shortform note: Despite how unlikely Hawkins says this scenario is, there are those who not only think it will happen, but who believe it will be a net-positive outcome. In The Singularity Is Near, futurist Ray Kurzweil argues that the rate of technological progress, especially in the realm of AI, will soon be accelerating so fast that it’s impossible to know what’s on the other side. Nevertheless, Kurzweil predicts there will be an intermediate “ramping-up” phase during which AIs will expand their knowledge base. If, during this time, we learn to use AI to augment human intelligence, then we ourselves will move forward as well, keeping pace with our computer counterparts as human intelligence shifts more and more into the digital realm.)
Another fear is that AI’s goals and interests may diverge from those of its human creators, leading to AI acting against human interests—the “robot apocalypse” frequently featured in dystopian science fiction. Again, Hawkins isn’t worried—this scenario rests on several unlikely assumptions, such as that AI systems would suddenly ignore the commands and inputs provided by humans, and that the AIs would have unchecked access to enough physical resources to carry out their sinister plans. Hawkins also notes that in humans, the drives behind such behavior arise from the impulses of the Old Brain, while an AI based on the human neocortex wouldn’t share those motivations unless explicitly programmed to do so.
(Shortform note: Though Hawkins argues that AI won’t turn against humans the way other humans do, that doesn’t mean that AI shares humanity’s interests by default. Embedding human values into AI has become a significant challenge for companies developing AI-based products. First, they must explicitly define what those values are—for instance, human privacy and safety. Then they must design ways to limit AI behavior while establishing oversight mechanisms to ensure that AIs act as they should. Until AI is truly intelligent, we have to assume that it can fail in harmful ways, if not by design then by accident, just like any other tool.)
The Real Threat Is Us
In contrast to AI, human intelligence has a proven track record of causing harm through crime, corruption, and conquest—actions driven by the primitive desires of the Old Brain. Since the cortical columns in the “intelligent” neocortex filter their inputs and outputs through the Old Brain, Hawkins argues that the Old Brain has an immense amount of power over what we perceive and how we act. Instead of overriding the Old Brain’s drives, the neocortex empowers our baser instincts by providing them with weapons and tools they otherwise wouldn’t have access to. Often, the best the neocortex can do is redirect the Old Brain, such as by suggesting safe outlets for its instincts rather than trying to stop them entirely.
(Shortform note: Managing your Old Brain may not be as hopeless as Hawkins implies. In The Chimp Paradox, psychiatrist Steve Peters describes the relationship between the Old Brain and the neocortex as a balancing act—at any given time, your reactions and decisions are controlled by one or the other, and you can sometimes recognize the conflict between them. Peters agrees that you can’t change your instincts, but he says you can manage them by learning to recognize when they’re taking over, understanding how they operate, and planning healthy ways to satisfy your instinctual needs in advance. The trouble with humans, Hawkins might suggest, is that this kind of self-control is hard to replicate in large enough groups to prevent widespread harm.)
Moreover, human perception and cognition are susceptible to errors and biases, resulting in inaccurate mental models of the world that can lead to serious consequences. Some cognitive errors are trivial, such as making a wrong turn on the way to work; others can be severe, such as mistaking a toy water pistol for a deadly weapon. Hawkins warns that the most insidious flawed mental models are those that spread from person to person as self-propagating ideas, such as a belief in racial superiority. Unlike individual cognitive errors, viral “bad ideas” from the neocortex can lead to widespread harm. In short, the human race hardly needs AI for intelligence to pose a danger to our species.
How We Get It Wrong
Hawkins presents our human capacity for error as if it’s a flaw in our cognition, but in Being Wrong, Kathryn Schulz suggests that the errors we make are just a side effect of the cognitive processes that make our brains efficient. Since, as Hawkins also states, our conscious minds don’t receive the “raw data” from our senses, evolution designed our brains to fill in the gaps in any data we receive. Doing so gives us a more cohesive awareness of our surroundings, which aids our survival in the wild—but it also opens the door to error, particularly when the brain’s “best guess” to fill its sensory gaps turns out to be wrong.
The problem with faulty beliefs based on inaccurate mental models is that they don’t exist in isolation. Because we learn many beliefs from other people, not only can harmful ideas spread, as in the scenario Hawkins describes, but once they’ve done so, they’re very hard to shake. Schulz explains the shortcut our brains use—we don’t judge someone else’s ideas on their merit. Instead, we decide whether that person can be trusted, and if so, we accept their beliefs on faith. This shortcut multiplies one person’s bad judgment by that of many others, and when a harmful belief is shared by a group, the social pressure to maintain that belief can overwhelm any conflicting evidence our neocortex might receive.
Into the Future
Addressing the dangers posed by both artificial and human intelligence will require ongoing research and dialogue. Assuming that we make an effort to address those issues, AI systems based on human cognition have the potential to greatly benefit humanity and its legacy. We can already predict the short-term uses of AI, but in the long run, the AIs we create might carry the human legacy into places we can’t yet conceive of.
Hawkins says that an AGI built by simulating the features of the neocortex will be capable of learning and adapting to a wide range of tasks, making it more versatile and efficient than the specialized AI systems we have today. This flexibility will allow AGI to tackle complex, real-world problems, such as climate change or economic recessions, that require an understanding of multiple domains, from science and technology to psychology and social dynamics. Moreover, AGI could facilitate the rapid spread of knowledge, since learning acquired by one AI system should be easy to transfer to others, accelerating the pace of discovery and invention.
(Shortform note: The benefits of AGI that Hawkins describes don’t rely solely on AI. In The Future Is Faster Than You Think, Peter H. Diamandis and Steven Kotler explain that advances in different fields of technology, including AI, are merging. This convergence speeds up the rate of progress because innovations in one field can propel another type of technology ahead. For example, AI can make robots more functional, letting them execute more complex tasks, while advances in sensor technologies can accelerate AI’s capability to analyze patterns and make predictions. In other words, better cameras, microphones, and touch sensors can accelerate AI toward more human-like cognition, which then feeds back into other realms of progress.)
Beyond the practical applications we can envision today, AGI may open up entirely new possibilities that we can’t yet imagine. While some have speculated that AGI could enable a form of digital immortality by letting us upload human consciousness into machines, Hawkins doesn’t think that will ever be possible. The intricate connections between the neocortex, the Old Brain, and the body itself will make it challenging to capture the full essence of your consciousness in a digital format. Besides, Hawkins says, even if successful, all you’ll have done is made a digital copy of yourself instead of extending your own lifespan.
(Shortform note: The difficulties Hawkins foresees in trying to digitally simulate a brain haven’t stopped researchers from trying. From 2013 to 2023, the Human Brain Project attempted to create a fully functional simulation of the brain. It fell short of its goal, but it was able to model over 200 brain regions and made discoveries that are used to treat neurological disorders. However, as computing power advances, so does brain-mapping and modeling technology. In 2016, Ken Hayworth of the Brain Preservation Foundation predicted that it would take two years to map the brain of a fly, but in 2023, researchers mapped the entire brain of a mouse at a resolution 1,000 times greater than that of a normal MRI.)
However, Hawkins thinks that AGI could let us preserve human knowledge, creativity, and history beyond the lifespan of our species. If we create intelligent digital “offspring” that can carry our legacy into the future, then even if the human race one day disappears, our achievements and insights won’t be lost. AGI based on human ways of thinking could build upon the foundation of our knowledge, making new discoveries and exploring parts of the universe that will forever remain beyond human reach. To realize this vision, AGI must not only replicate the functions of the human brain, but also be imbued with the curiosity and drive to learn and explore that have propelled human progress throughout history.
(Shortform note: If Hawkins is correct that AGI may be the human race’s long-lasting legacy, then that may give us another strong reason to imbue AI with human values. In The Singularity Is Near, Kurzweil—though steadfastly optimistic for the future—warns that there may be no defense against AGI if it becomes more intelligent and capable than we are. Unchecked, AI will likely empower humanity’s worst instincts as well as its good ones, and the only solution Kurzweil offers is to make sure that AGI, once we achieve it, learns and grows from the best we have to offer. If AI is to be humanity’s offspring, Kurzweil states, then like any good parents, we should guide it by presenting the best version of ourselves.)