Artificial intelligence (AI) researchers are finding that the brain is less responsible for optimizing motion than previously thought.
“The traditional robotics model has the body following the brain, but in nature the brain follows the body,” Fumiya Iida, of MIT’s Computer Science and Artificial Intelligence Laboratory, explains. Decisions flow from the properties of the materials our bodies are made of and their interactions with the environment. When we pick up an object, we are able to hold it not primarily because of what our brain says but because our soft hands mold themselves around the object automatically, increasing surface contact and therefore frictional adhesion. When a cockroach encounters an irregular surface, it does not appeal to its brain to tell it what to do next; instead, its musculoskeletal system is designed so that local impacts drive its legs to the right position to take the next step.
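The cockroach's trick can be caricatured in a few lines of code. The sketch below is my own toy model, not the researchers' simulation: a leg is treated as a passive spring-damper, and after a terrain impact knocks it off position, the material properties alone return it to its rest pose, with no sensing and no controller in the loop. All the constants (stiffness, damping, mass, impact size) are arbitrary illustrative choices.

```python
# Toy illustration of "morphological computation": a passive
# spring-damper leg recovers from an impact without any brain.
# Constants are arbitrary; this is a sketch, not a real leg model.

def simulate_passive_leg(k=100.0, c=8.0, m=1.0, impact=0.3,
                         dt=0.001, steps=5000):
    """Integrate m*x'' = -k*x - c*x' after an impact displaces
    the leg a distance `impact` from its rest position."""
    x, v = impact, 0.0            # knocked off position, at rest
    for _ in range(steps):        # no sensor readings, no decisions:
        a = (-k * x - c * v) / m  # the spring and damper alone
        v += a * dt               # drive the leg back
        x += v * dt
    return x

final = simulate_passive_leg()
print(abs(final) < 0.01)  # the leg has settled back near rest
```

The point of the sketch is what is absent: there is no line of code that reads a sensor or computes a corrective command. The "decision" to return to position is baked into the mechanical parameters.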
Proponents of embodiment theory hold that much more of the control of movement resides in the body than in the brain. Regardless of the extent to which that is true for humans, taking this approach to the design of artificial devices turns out to be useful for producing machines that can move as well as biological creatures do.
The biologist who discovered that the cockroach's body, not its brain, handles rough ground, Joseph Spagna, currently at the University of Illinois, teamed up with engineers at the University of California, Berkeley, to build a robot inspired by nature. The result, named RHex (a nod to its six legs), is a robot that can traverse varied terrain without any central processing at all. At first it struggled to move across wire mesh with large gaps. Spagna's team made some simple, biologically inspired changes to the robot's legs: without altering the control algorithms, they added spines and changed the orientation of the feet, both of which increased physical contact between the robot and the mesh. That was all it took to generate the intelligence required for the device to move ahead. In a related project, Iida and his MIT group are now building legs that operate with as few controlled joints and motors as possible, an engineering technique they call underactuation.
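Underactuation means driving more degrees of freedom than you have motors. Here is a minimal sketch of the idea (an assumed example, not the MIT group's actual design): a two-joint leg in which only the hip is motorized, while the knee has no motor at all and simply trails the hip through a passive spring-damper coupling, so one actuator moves two joints. The gains and frequencies below are made-up illustrative values.

```python
import math

# Sketch of underactuation (illustrative, not a real leg design):
# one motor commands the hip; the knee is passive, coupled to the
# hip only through a spring (k) and damper (c).

def underactuated_leg(steps=2000, dt=0.005, k=20.0, c=2.0):
    knee, knee_vel = 0.0, 0.0
    trajectory = []
    for i in range(steps):
        t = i * dt
        hip = 0.5 * math.sin(2.0 * t)          # the only commanded joint
        # passive knee: spring-damper pulled toward the hip angle
        acc = k * (hip - knee) - c * knee_vel
        knee_vel += acc * dt
        knee += knee_vel * dt
        trajectory.append((hip, knee))
    return trajectory

traj = underactuated_leg()
hip, knee = traj[-1]
print(abs(hip - knee) < 0.25)  # the unmotorized knee tracks the hip
```

The design choice mirrors the article's point: the knee's behavior is specified by mechanical parameters, not by an extra control channel, which is exactly what lets the motor count shrink.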
The theory that much of what we call intelligence is generated from the bottom up—that is, by the body—is now winning converts everywhere. (The unofficial motto of Iida's group is "From Locomotion to Cognition.") Some extreme adherents of embodiment theory speculate that even the highest cognitive functions, including thought, do no more than regulate streams of intelligence rising from the body, much as the sound coming from a radio is modulated by turning the knobs. Embodiment theory suggests that much wisdom is indeed "wisdom of the body," just as those irritating New Age gurus say.
This distributed approach makes problems easier to solve by making them smaller. I think we will achieve AI by solving many such smaller problems in separate subsystems: image-processing algorithms, voice-to-text algorithms, and others will each cover a piece of what our brains do for us, and together they will add up to much of it.
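The subsystem idea can be sketched schematically. The modules below are stand-ins I invented for illustration, not real vision or speech systems: each one solves a tiny, narrow problem, and a thin layer on top just wires their outputs together.

```python
# Schematic sketch of "many small subsystems" (all functions here
# are invented stand-ins, not real perception systems).

def detect_color(pixel):           # stand-in for an image subsystem
    r, g, b = pixel
    return "red" if r > max(g, b) else "other"

def transcribe(audio_tokens):      # stand-in for a voice-to-text subsystem
    return " ".join(audio_tokens)

def respond(color, text):          # thin coordination layer on top
    return f"I see something {color} and heard: {text}"

print(respond(detect_color((200, 40, 30)),
              transcribe(["pick", "it", "up"])))
# → I see something red and heard: pick it up
```

No single module is intelligent; whatever competence the whole shows comes from the sum of the narrow parts, which is the article's claim in miniature.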