
Inside the Intuitive System: What Robots Can Teach Us


MIT roboticist Rodney Brooks described the development of autonomous robots in his paper "Fast, Cheap and Out of Control." According to Brooks, when engineers first set out to design robots that could move independently through a built environment, they assumed they would need to program every feature of that environment into a pre-set map stored in the robot's software. The problem was that they could never get this approach to work: The map took up too much of the robot's limited memory, and consulting it on the fly took too much time. The processor would still be sifting through map data while the robot's wheels carried it over the edge of a stairwell or into a wall.

The robotics team was ultimately successful with a much simpler approach: They gave the robot sensors and a few simple rules for reacting to its environment. This is essentially how the well-known Roomba vacuum cleaner works: It sets off in a particular direction and continues until it comes within a certain distance of an obstacle like a wall or a drop-off, at which point it turns a certain number of degrees and continues on its way. If it encounters another obstacle, it keeps turning until it finds a clear path. If it runs low on power, it heads back to its charger, which it locates by homing in on an infrared beacon. That's it. A Brooks-style robot has no internal representation of its environment. It behaves, but its behavior is based on simple procedural rules rather than on stored information. It is reactive, not proactive. The same principles of robotics are used in much higher-stakes scenarios than vacuuming: Both the Sojourner and Opportunity Mars rovers used Brooks's principles of autonomy to successfully navigate and make observations on the Martian surface, completing missions that lasted for years while Mission Control updated only basic parameters over time. Autonomy was key for the Mars rovers because a one-way signal between Earth and Mars takes anywhere from about four to more than twenty minutes, depending on the planets' positions, far too slow for NASA engineers to drive the robots manually.
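As a rough sketch of this idea (in Python, with made-up sensor flags like `obstacle_ahead` and `battery_low` standing in for real hardware readings), one decision cycle of a Roomba-style reactive robot might look like this:

```python
def reactive_step(obstacle_ahead, battery_low):
    """One decision cycle for a Brooks-style reactive robot.

    There is no map and no memory of past states: the next action
    depends only on the current sensor readings.
    """
    if battery_low:
        return "seek_charger"  # home in on the charger's beacon
    if obstacle_ahead:
        return "turn"          # rotate until a clear path appears
    return "forward"           # otherwise keep going

# A few sensor snapshots and the responses the rules produce:
print(reactive_step(obstacle_ahead=False, battery_low=False))  # forward
print(reactive_step(obstacle_ahead=True, battery_low=False))   # turn
print(reactive_step(obstacle_ahead=True, battery_low=True))    # seek_charger
```

Notice that the whole "program" is a handful of if-then rules: There is nothing resembling a map, which is exactly Brooks's point.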

The best current neurocognitive models of human behavior suggest that we work in substantially the same way as these autonomous robots. Sensory input generates patterns in which clusters of neurons fire at the same time, and simultaneous activation makes them more likely to fire together again in the future. A particular pattern of neural activation can then become associated with a specific behavioral response. Behaviors are maintained, improved, or discontinued over time according to the principles of operant conditioning: Those that lead to a positive response from the environment are more likely to be repeated in the future, and those that result in negative consequences are less so. At the level of the Intuitive System, that's it. In our case the responses are learned from experience instead of programmed in advance, but the basic concept is no different: Sensors generate input, and particular patterns of input trigger particular, usually adaptive, behavioral responses. One can certainly envision situations where the basic rules lead us astray, and more complex rules may be needed to deal with these exceptions. For instance, the Roomba may need additional instructions to cope with different types of floor surfaces, and we may need to learn different cultural rules when traveling to a foreign country. But these are the exceptions that prove the rule: Most of the time, we get by very well when we operate on autopilot.
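The operant-conditioning loop described above can be sketched in a few lines of code. This is a toy illustration, not a neural model: The two behaviors ("approach" and "avoid"), the payoffs, and the learning rate of 0.2 are all arbitrary assumptions chosen to show the mechanism.

```python
import random

random.seed(0)

# Two candidate responses to the same sensory cue; strengths start equal.
strength = {"approach": 1.0, "avoid": 1.0}

def choose():
    # Responses are emitted in proportion to their learned strength.
    total = sum(strength.values())
    r = random.uniform(0, total)
    for behavior, s in strength.items():
        r -= s
        if r <= 0:
            break
    return behavior

def reinforce(behavior, outcome):
    # Operant conditioning: positive outcomes strengthen a behavior,
    # negative outcomes weaken it (floored so it can still recover).
    strength[behavior] = max(0.1, strength[behavior] + 0.2 * outcome)

# Assumed environment: "approach" pays off (+1), "avoid" does not (-1).
for _ in range(200):
    b = choose()
    reinforce(b, +1 if b == "approach" else -1)

print(strength["approach"] > strength["avoid"])  # True
```

After a couple hundred trials the rewarded behavior dominates, with no map of the environment and no reasoning about why it works.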

An example even closer to human neurobiology comes from the rapidly advancing field of artificial intelligence (AI), in which software is set loose on an unstructured dataset and "learns" over time what patterns of association exist. AI tools like IBM's Watson have been used to support clinical decision-making and to monitor stock-trading activity, and AI methods underlie Google's and Tesla's efforts to develop self-driving cars. The fundamental approach of modern AI is to use simple algorithms to discover more complex rules, rather than having all of the rules programmed in advance, very much like Brooks's autonomous robots. AI can also use feedback about the success or failure of a particular behavior (e.g., a chess move) to refine its responses over time: Successful results lead to repetition, while failure leads to a search for a better strategy. And because AI can iterate very quickly, it develops expertise with the benefit of thousands of times more experiences than a human could have in the same timeframe. This iterative approach is how Google DeepMind's AlphaGo eventually conquered Go, a game far harder than the chess at which IBM's Deep Blue beat the human world champion (though Deep Blue, notably, relied on brute-force search rather than learning). Even though these machines seem to mimic the results that humans achieve through Narrative-level processes like logical analysis and symbolic manipulation, they actually use a much less complex trial-and-error method to achieve the same results.
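The trial-and-error loop can be made concrete with a minimal sketch. Here the two "moves" and their win rates are invented for illustration; the point is that the learner discovers the better move purely by trying both and tracking average results, with no lookahead or symbolic analysis of the game.

```python
import random

random.seed(1)

# Two moves with (hidden) assumed payoffs; the learner never sees these.
true_win_rate = {"move_a": 0.6, "move_b": 0.4}
estimate = {"move_a": 0.0, "move_b": 0.0}
count = {"move_a": 0, "move_b": 0}

for trial in range(5000):
    # Mostly repeat what has worked so far; occasionally explore.
    if random.random() < 0.1:
        move = random.choice(list(estimate))
    else:
        move = max(estimate, key=estimate.get)
    won = random.random() < true_win_rate[move]
    count[move] += 1
    # Incremental average: nudge the estimate toward the observed result.
    estimate[move] += (won - estimate[move]) / count[move]

# After many trials the estimates track the hidden win rates,
# so the learner settles on the genuinely better move.
print(max(estimate, key=estimate.get))
```

This "mostly exploit, occasionally explore" strategy is a standard pattern in reinforcement learning, and it mirrors the essay's point: Thousands of cheap experiences substitute for explicit analysis.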

What robots can teach us, then, is that good results can come from bottom-up processing and simple steps used in combination. The keys to success with the Intuitive System are to pay attention to available data and to keep trying solutions until you get it right. The iterative process used in modern AI is a particularly instructive model because it learns the same way that humans do, using a combination of sensor data and obtained results to come up with better and better responses over time. A friend who was an elementary and middle school principal is fond of this quote: "Good judgment comes from experience. And experience? That comes from bad judgment." It's equally true for human children and for self-improving AI algorithms.


