What are the challenges of AI awareness in robotics? How do we ensure AI can safely navigate the real world?
In Rebooting AI, Gary Marcus and Ernest Davis explore the limitations of AI systems in understanding and interacting with physical environments. They discuss why traditional programming and machine learning fall short of equipping robots with the skills needed to function in complex, real-world scenarios.
Read more to understand why strong AI is crucial for safe and effective real-world interactions.
AI Awareness
While language-processing AI ignores the wider world, mapping and modeling objects in space is of primary concern in robotics. Since robots and the AIs that run them function in the physical realm beyond the safety of mere circuit pathways, they must observe and adapt to their physical surroundings. Davis and Marcus describe how both traditional programming and machine learning are inadequate for the task of AI awareness, and they explore why future AIs will need a broader set of capabilities than any currently in development if they’re to safely navigate the real world.
Robots are hardly anything new—we’ve had robotic machinery in factories for decades, not to mention the robotic space probes we’ve sent to other planets. However, all our robots are limited in function and the range of environments in which they operate. Before neural networks, these machines were either directly controlled or programmed by humans to behave in certain ways under specific conditions to carry out their tasks and avoid a limited number of foreseeable obstacles.
(Shortform note: Despite Davis and Marcus’s statements to the contrary, we’ve developed robots that can overcome obstacles in environments outside of human control—most notably the autonomous rovers sent to explore the surface of Mars. Because of the light-speed communication delay between Earth and Mars, the rovers can’t be driven remotely in real time. Instead, the rovers are empowered to pick objects for study based on their own direct observations, as well as to make their own course corrections to avoid potential hazards and other obstacles. The Perseverance rover autonomously explored nearly 18 kilometers of Mars’s surface in 2021 alone, surviving in much more open terrain than the ideal laboratory conditions Davis and Marcus write about.)
Real-World Complexity
With machine learning’s growing aptitude for pattern recognition, neural network AIs operating in the real world are getting better at classifying the objects they see. However, statistics-based machine learning systems still can’t grasp how real-world objects relate to each other, nor can AIs fathom how their environment can change from moment to moment. Researchers have made progress on the latter issue using video game simulations, but Marcus and Davis argue that an AI-driven robot in the real world could never have time to simulate every possible course of action. If someone steps in front of your self-driving car, you don’t want the car to exhaustively simulate every evasive maneuver before it acts.
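To see why exhaustive simulation is a dead end for real-time decisions, consider a rough back-of-the-envelope sketch. The numbers below are invented for scale and aren’t from the book; the point is only that even a modest menu of maneuvers, re-evaluated a few times per second, produces more candidate action sequences than any onboard computer could check in time.

```python
# Toy illustration of the combinatorial explosion in exhaustive planning.
# The numbers are invented for scale; they are not from Rebooting AI.

actions_per_step = 10   # candidate maneuvers at each decision point (assumed)
steps = 30              # decision points in a 3-second horizon, one per 100 ms (assumed)

# Every distinct sequence of choices is a separate trajectory to simulate.
total_trajectories = actions_per_step ** steps
print(f"{total_trajectories:.2e} possible action sequences")  # prints 1.00e+30
```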
Building Better Models and Simulations

Researchers at MIT have made progress on the object relationship problem by devising a method for AI to interpret individual object relationships—such as a coffee cup sitting next to a keyboard—and to combine those individual relationships into a complete internal picture of a more complicated setting—such as the collection of interrelated objects cluttering your desktop at work. This “one relationship at a time” approach has been more successful than prior attempts to teach AI to process an entire array of object relationships at once, a task that human brains do instinctively.

What the human brain can’t instinctively do is generate a multitude of simulations to explore the outcomes of many variables at once. While, as Marcus and Davis suggest, this may not be a helpful approach to real-time decision-making, generative AI simulations have proven useful in designing solutions to long-term problems in engineering, medicine, and finance. The lessons learned from each round of simulations are fed back to the AI in the next, and while this has led to advances in fields such as automotive design, it again raises the specter of model collapse if AI simulations reinforce their own mistakes.
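To make the “one relationship at a time” idea concrete, here’s a minimal sketch of how independently detected pairwise relations can be merged into a single model of a scene. This is an illustrative toy, not the MIT team’s actual system; the object names and relation labels are invented.

```python
# Minimal sketch: composing pairwise object relations into one scene model.
# Illustrative only -- not the MIT system described above; all names are invented.

from collections import defaultdict

# Each relation is detected independently, "one relationship at a time".
pairwise_relations = [
    ("coffee_cup", "next_to", "keyboard"),
    ("keyboard", "in_front_of", "monitor"),
    ("notebook", "under", "coffee_cup"),
    ("lamp", "behind", "monitor"),
]

def build_scene_graph(relations):
    """Merge independently detected (subject, relation, object) triples
    into a single graph representing the whole scene."""
    graph = defaultdict(list)
    for subject, relation, obj in relations:
        graph[subject].append((relation, obj))
    return graph

def describe(graph):
    """Print every stored relation as a plain-English sentence."""
    for subject, edges in graph.items():
        for relation, obj in edges:
            print(f"{subject} is {relation.replace('_', ' ')} {obj}")

scene = build_scene_graph(pairwise_relations)
describe(scene)  # e.g. "coffee_cup is next to keyboard"
```

The design point is that each triple can be recognized in isolation by a narrow classifier, while the merged graph is what lets a system reason about the cluttered desk as a whole.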
Davis and Marcus believe that an AI capable of safely interacting with the physical world must have a human-level understanding of how the world actually works. It must have a solid awareness of its surroundings, the ability to plan a course of action on its own, and the mental flexibility to adjust its behavior on the fly as things change. In short, any AI whose actions and decisions are going to have real-world consequences beyond the limited scope of one task needs to be strong AI, not the narrow AIs we’re developing now.
(Shortform note: Not everyone agrees that more humanlike AIs are even possible or desirable. In Mind Over Machine and What Computers Still Can’t Do, philosopher Hubert Dreyfus argues that too much of human intelligence and reasoning comes from cultural and experiential knowledge, which data-driven computers can’t emulate. This isn’t far from Davis and Marcus’s thesis that machine learning through big data isn’t enough to create strong AI. However, others argue that we don’t need strong AI at all—that narrow AI will soon surpass human abilities in many ways, and the belief that machines must think like us is a limited, human-centric attitude.)