Are we on the brink of achieving human-level AI? What’s holding us back from creating truly intelligent machines?
In their book Rebooting AI, Gary Marcus and Ernest Davis argue that current AI approaches fall short of human-level intelligence. They propose that AI research should draw inspiration from the human brain’s efficiency and structure.
Continue reading for Marcus and Davis’s insights on building machines that think more like us.
Human-Level AI
Narrow AIs are computational black boxes where information goes in, passes through a single (if convoluted) algorithm, and then comes out reprocessed as a new result. That’s not how the human brain works, and Marcus and Davis argue that strong, human-level AI shouldn’t work like that either. Instead, AI research should draw on the efficiencies of the human brain, such as how it uses different processes for different types of information, how it creates abstract models of the world with which it interacts, and how it goes beyond correlating data to think in terms of causality—using mental models to understand how the external world changes over time.
Unlike narrow, single-system AI, the brain is a collection of systems that specialize in different types of information—sight, sound, and touch, for example—while regulating different forms of output, such as conscious and unconscious bodily functions. Likewise, Marcus and Davis write that strong AI should incorporate multiple processing systems and algorithms to handle the panoply of inputs and problems it will encounter in the real world. Also like the human brain, strong AI must be flexible enough to combine its different systems in whatever arrangement is needed at the moment, as humans do when we associate a memory with a smell, or when an artist engages both her visual and manual skills to produce a painting or a sculpture.
(Shortform note: The idea of modeling machine intelligence on the brain can be traced back to computer scientist Norbert Wiener’s book Cybernetics, published in 1948. However, most brain-based AI research has focused on simulating the behavior of neurons, with little attention given to replicating the brain’s larger structures. That trend may be changing with the rise of a newer type of neural network called a transformer, whose internal computations researchers have found to resemble those of the brain’s hippocampus, a region closely tied to learning and memory. Transformers have proven useful in solving translation problems that plagued older AI frameworks, hinting that Marcus and Davis are correct that closer brain emulation is key to building more powerful AIs in the future.)
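For readers curious what a transformer actually computes, below is a minimal Python sketch of scaled dot-product self-attention, the operation at the heart of transformer networks. The dimensions, weight matrices, and random inputs are illustrative only; real transformers stack many such layers with weights learned from data.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Minimal scaled dot-product self-attention over one sequence."""
    q = x @ w_q                                      # queries: what each position is looking for
    k = x @ w_k                                      # keys: what each position offers
    v = x @ w_v                                      # values: the content to be mixed
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise relevance between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights sum to 1
    return weights @ v                               # each position becomes a weighted blend of all positions

# Toy usage: 4 tokens with 8-dimensional embeddings projected to a 4-dimensional head
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 4)
```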
Mental Models and Causality
A more challenging but necessary step to strong AI will be designing knowledge frameworks that will let computers understand the relationships among entities, objects, and abstractions. These relations form the bedrock of human thought in the form of the mental models we use to interpret the world around us. Davis and Marcus state that modern AI developers largely disregard knowledge frameworks as an AI component. But without knowing how information interrelates, AI is unable to synthesize data from different sources and fields of knowledge. This ability is as crucial to advancing scientific progress as it is to driving a car in the snow while adjusting your schedule because of school closings—all of which AI could help with.
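As a rough illustration of what a machine-readable knowledge framework might look like, the Python sketch below stores facts as (subject, relation, object) triples and follows chains of relationships outward from an entity, using toy facts loosely drawn from the snow-day example above. The specific facts, relations, and function names are invented for illustration, not taken from the book.

```python
from collections import defaultdict

# Toy knowledge framework: facts stored as (subject, relation, object) triples,
# so a program can follow chains of relationships rather than raw correlations.
facts = [
    ("snow", "causes", "slippery_roads"),
    ("slippery_roads", "increases_risk_of", "car_accidents"),
    ("snow", "causes", "school_closings"),
    ("school_closings", "requires", "schedule_change"),
]

index = defaultdict(list)
for subj, rel, obj in facts:
    index[subj].append((rel, obj))

def consequences(entity, depth=2):
    """Walk relations outward from an entity to see what it affects."""
    results, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for e in frontier:
            for rel, obj in index[e]:
                results.append((e, rel, obj))
                next_frontier.append(obj)
        frontier = next_frontier
    return results

for triple in consequences("snow"):
    print(triple)
```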
(Shortform note: Before AI can understand relationships between the real world and abstract concepts, it will first have to be able to perform abstract reasoning—solving problems conceptually through logic and imagination without complete data or exact prior experience. Swiss researchers have made progress in this field using a combination of machine learning and a “vector symbolic” architecture that’s conceptually similar to the knowledge frameworks Davis and Marcus recommend. Meanwhile, software engineer François Chollet has created the Abstraction and Reasoning Corpus, a tool for measuring how effectively AI systems perform basic abstract reasoning tasks as compared to humans presented with the same challenge.)
In addition to having a knowledge framework, strong AI must understand causality—how and why objects, people, and concepts change over time. Marcus and Davis say this will be hard to achieve because, although causation produces correlation, not every correlation reflects causation. For example, though most children like cookies (a correlation), enjoying cookies doesn’t cause childhood. Furthermore, AI will have to juggle causality in multiple fields of knowledge at once. For instance, an AI working on a legal case will have to understand how human motivations interact with physical evidence. At present, the only way for a computer to model causality is to run multiple simulations, but that’s far less efficient than how the human brain works. Therefore, we must design our strong AIs to learn these concepts the way humans do.
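The cookie example can be made concrete with a small simulation. In the toy Python sketch below (with made-up numbers), age drives both childhood and a fondness for cookies, so the two variables end up positively correlated even though neither causes the other.

```python
import numpy as np

# Toy illustration of correlation without causation (invented numbers, not from the book):
# age drives both "is_child" and "likes_cookies", so the two correlate even though
# liking cookies plainly doesn't cause childhood.
rng = np.random.default_rng(1)
age = rng.uniform(2, 80, size=5000)
is_child = (age < 13).astype(float)
# In this toy model, children are somewhat more likely to like cookies than adults.
likes_cookies = (rng.uniform(size=age.size) < np.where(age < 13, 0.9, 0.6)).astype(float)

r = np.corrcoef(is_child, likes_cookies)[0, 1]
print(f"correlation(is_child, likes_cookies) = {r:.2f}")  # positive, yet not causal
```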
(Shortform note: Arguably, our human understanding of causality is itself based on simulations—we just call them stories. In Wired for Story, Lisa Cron explains that the human brain uses story structure to store information and make causal predictions, such as what will happen if you poke a sleeping bear. The difference between stories and computer simulations is that stories are largely symbolic, revolving around a few key elements rather than a plethora of details, which makes them more versatile and efficient than full simulations. AI developers are now starting to use narrative structures to organize AI-generated content and to shape the data used for AI training in much the same way that the human brain parses data.)
Brain-Like Learning
When it comes to teaching computers how to think, we can draw from millions of years of human evolution and incorporate facets of the human brain’s tried-and-true approach to learning into machine cognition. While current machine learning operates from a starting point of pure data, Davis and Marcus argue that preprogrammed knowledge about the world similar to what humans are born with can facilitate stronger AI development. The authors describe how the brain learns by combining first-hand experience with preexisting knowledge, how AIs could use this tactic to construct their own knowledge frameworks, and why a hybrid approach to machine learning would be more powerful than the current data-only techniques.
When learning, people draw from two sources—high-level conceptualizations that are either instinctive or taught to us by others, and low-level details that we absorb through day-to-day experiences. Our preexisting, high-level knowledge provides a vital framework through which we interpret whatever we discover on our own. For example, we’re born knowing that food goes into our mouths and tastes good—as children, we then use this framework to determine what does and doesn’t qualify as food. However, Marcus and Davis report that AI developers shun the idea of preprogramming knowledge into neural networks, preferring their systems to learn from data alone, without any context that would help them make sense of it.
(Shortform note: Psychologists classify humans’ instinctive knowledge into two broad categories—individual instincts, such as eating and self-protection, and social instincts like reproduction and play. Instincts are different from autonomic functions such as digestion and blood circulation in that they trigger complex behaviors and are tied to emotional responses. In Instinctive Computing, computer science professor Yang Cai discusses how and what types of instincts should be programmed into AI to bring it closer in line with biological intelligence while laying the groundwork for AI self-awareness, a topic that Marcus and Davis barely broach.)
Preprogrammed knowledge frameworks—like a set of “instincts” an AI would be born with—could greatly advance AI language comprehension. When humans read or listen to language, we construct a mental model of what’s being described based on our prior understanding of the world. Davis and Marcus argue that giving an AI a preprogrammed basis for connecting language to meaning would let it construct its own knowledge frameworks, just as humans learn over time. By insisting that AIs learn from raw data alone, developers tie their systems’ hands behind their backs and set them an impossible task, like giving a calculator the complete works of Shakespeare and expecting it to deduce the English language from scratch.
The Basis for Language in the Brain
Some experts argue that a preprogrammed basis for comprehending language already exists in the natural world. In The Language Instinct, Steven Pinker explains that our human propensity for language is an evolutionary trait that’s hardwired into our brains. Even though no one’s born speaking English, Arabic, Latin, or Hindi—the specifics must be learned from outside data—we’re born with the ability to recognize patterns, apply meanings, and combine them in infinite variations: skills that form the building blocks of all languages. Our wiring for language is so strong that children who grow up without a native language instinctively create one of their own. As the history of technology suggests, capabilities found in nature can often be replicated artificially, such as the brain-like innate language framework that Davis and Marcus recommend for AI.
Marcus and Davis conclude that neither preprogrammed knowledge nor enormous data dumps are sufficient in themselves to teach computers how to think. The road to strong AI will require a combination of engineered cognitive models and large amounts of input data so that artificial intelligence has a fighting chance to train itself. Knowledge frameworks can give AI the capacity for logic and reasoning beyond its current parlor tricks of generating output from statistical correlations. Meanwhile, absorbing information from big data can give AI the experience to build its own cognitive models, growing beyond its programmers’ designs.
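As a toy sketch of what this hybrid approach could look like, the Python example below reuses the authors’ food example: the program starts with a hand-coded “instinct” and then lets accumulated experience override it where the two conflict. The rule, the observations, and the function names are all invented for illustration and are not the authors’ proposed architecture.

```python
# Minimal sketch of hybrid learning: an innate rule refined by experience.
# All rules, observations, and names are illustrative assumptions, not from the book.

# 1. Preprogrammed knowledge: an innate rule, loosely analogous to the authors'
#    example that things that go in your mouth and taste good count as food.
def innate_is_food(item):
    return item["goes_in_mouth"] and item["tastes_good"]

# 2. Experience: labeled observations accumulated over time.
observations = [
    ({"name": "apple", "goes_in_mouth": True, "tastes_good": True}, True),
    ({"name": "crayon", "goes_in_mouth": True, "tastes_good": False}, False),
    ({"name": "toothpaste", "goes_in_mouth": True, "tastes_good": True}, False),  # contradicts the instinct
]

# 3. Learning: record specific cases where experience contradicts the innate rule.
exceptions = {}
for item, actually_food in observations:
    if innate_is_food(item) != actually_food:
        exceptions[item["name"]] = actually_food

def learned_is_food(item):
    """Fall back on the innate rule unless experience has taught otherwise."""
    return exceptions.get(item["name"], innate_is_food(item))

print(learned_is_food({"name": "toothpaste", "goes_in_mouth": True, "tastes_good": True}))  # False (learned)
print(learned_is_food({"name": "banana", "goes_in_mouth": True, "tastes_good": True}))      # True (from the instinct)
```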
(Shortform note: Aligned with Marcus and Davis’s thoughts about modeling AI functions on the regions of the brain, software engineer Jarosław Wasowski writes that LLM cognitive models for processing data should be built on our understanding of how the human brain processes memory. These would include separate modules for encoding sensory and short-term memory, forgetting what’s irrelevant, and indexing what’s important into long-term storage. Once there, the AI could synthesize what it knows with a “knowledge module” comprising organizational tools, procedures, relationship structures, and past experience. In essence, Wasowski is researching exactly the combination of approaches Marcus and Davis recommend.)
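How such memory-inspired modules might fit together can be sketched roughly as follows; the class structure, method names, and “importance” rule are assumptions made for illustration, not Wasowski’s actual design.

```python
from collections import deque

class MemorySystem:
    """Toy layout of memory-inspired modules (illustrative only)."""
    def __init__(self, short_term_capacity=5):
        self.sensory_buffer = []                               # raw incoming data, held briefly
        self.short_term = deque(maxlen=short_term_capacity)    # limited capacity: older items are "forgotten"
        self.long_term = {}                                    # indexed store of what was judged important

    def perceive(self, item):
        self.sensory_buffer.append(item)

    def consolidate(self, is_important):
        """Move sensory input into short-term memory, then index important items long-term."""
        for item in self.sensory_buffer:
            self.short_term.append(item)
            if is_important(item):
                self.long_term[item["topic"]] = item
        self.sensory_buffer.clear()

# Toy usage: only the item judged important survives into long-term storage
memory = MemorySystem()
memory.perceive({"topic": "appointment", "detail": "dentist at 3pm"})
memory.perceive({"topic": "noise", "detail": "passing car"})
memory.consolidate(is_important=lambda item: item["topic"] != "noise")
print(list(memory.long_term.keys()))  # ['appointment']
```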