Our Reactions to Robots Tell Us Something About Ourselves

Robot football players at a Valparaiso University College of Engineering event

I have been thinking lately about robots, which creates an interesting asymmetry: They almost certainly have not been thinking about me. Nevertheless, I find that I often respond to robots as though they have thoughts about me, or about their own personal goals, or about the world in which they exist. That tendency suggests an interesting aspect of human psychology, connected to our social minds. We are hard-wired to care what other people think about us, and we very easily extend that concern to robots.

Here's a recent article about how the language-learning app Duolingo, which features an owl-shaped avatar (a kind of robot), uses "emotional blackmail" to keep users engaged: https://uxdesign.cc/20-days-of-emotional-blackmail-from-duolingo-4f566523e3c5. This bird-shaped bit of code tells users things like "you're scaring me!" and "I miss you" if they haven't logged in for a few days. Many people find the emotion-laden messages quite distressing and log back in to appease the owl -- which, remember, doesn't really feel anything at all. Duo the owl also has a range of happy and sad facial expressions, as do the other cartoony denizens of the app. Why should I care that Zari the artificial teenager is proud of me for completing my daily Spanish lesson? By any rational standard I should not, and yet I do. China has harnessed people's desire for social approval in perhaps more concerning ways, using social networks to shape behavior that the government considers to be in the people's best interest.
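For the curious, here's a minimal sketch of how a streak-reminder system like this might work under the hood. Everything in it -- the tiers, the thresholds, the message text, the function name -- is my own invention for illustration, not Duolingo's actual code:

```python
from datetime import date

# Hypothetical escalating messages, keyed by days of inactivity.
# These tiers and strings are invented for illustration; they are
# not Duolingo's actual copy or thresholds.
REMINDER_TIERS = [
    (1, "Ready for your next lesson?"),          # gentle nudge
    (3, "I miss you! Your streak needs you."),   # emotional appeal
    (7, "You're scaring me! Come back soon."),   # guilt escalation
]

def pick_reminder(last_login: date, today: date) -> str | None:
    """Return the strongest reminder whose threshold has been reached,
    or None if the user has already logged in today."""
    days_inactive = (today - last_login).days
    if days_inactive == 0:
        return None
    message = None
    for threshold, text in REMINDER_TIERS:
        if days_inactive >= threshold:
            message = text  # keep escalating to the highest tier reached
    return message

# Example: a user last seen four days ago gets the emotional appeal.
print(pick_reminder(date(2024, 5, 1), date(2024, 5, 5)))
# -> "I miss you! Your streak needs you."
```

The escalation logic is trivial; what makes it effective is our willingness to treat the output as a real creature's feelings.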

The following video shows some physically embodied robots, which in terms of consciousness are no different from Duo the language-learning bird (that is to say, they have none), but which push our buttons in a different way:

There are three types of machines in robot football: One that hikes the ball, one that catches it in a net, and a third -- resembling a computer case on wheels -- that attempts to run into the net-wielding robot and make it fumble the ball. The very fact that these three types of robots exist makes me read some intentionality into their behavior: Don't the tank-like rectangular robots seem more aggressive to you? Don't the poor ball-carrying robots seem a bit beleaguered? And then, at another level, I notice my tendency to attribute personalities to the individual machines: Why is that net robot moving around at the start when the others are all standing still -- does it have a rebellious attitude? Why is the tank-like robot spinning in circles in the middle of the floor -- is it confused? And that other net robot speeding across the room -- is it the star player, eager to demonstrate its skills? None of these attributions are true, of course. The people on the sidelines helping to direct the robots might have personalities, but the robots certainly don't. And even if the robots were fully autonomous, I think I would still be drawing conclusions about their internal mental lives.

Our reactions to robots are a function of the social-connectedness goals held by our Intuitive minds. Earlier humans had to live within small groups that relied on one another for survival, which made the pressure for conformity intense. In medieval England, for example, there was no police force, yet there was relatively little crime. A person who transgressed against their neighbors would be subjected to punishments that today seem very harsh, ranging from public humiliation in the village stocks up to exile in the forest, which without the resources of the village was tantamount to execution. The reason our Intuitive minds care so deeply what others think of us is that they evolved in a context where humans who lacked those instincts didn't survive.

Our tendency to read intention into the actions of robots is an example of what psychologists call "theory of mind" -- our ability to guess what's going on inside the head of another person. This is a uniquely human ability: There's some evidence that AI systems can now fake it, as they mimic other aspects of human language, but I'm very doubtful that an AI has an actual sense of what it's like to be a person (that's because I'm also very doubtful that AI is, or could ever be, conscious). Of course, we can never know for sure about another person's internal experience -- they might not even have any, in which case they would be a biological robot (also called a "philosophical zombie"). But I'm pretty sure that you are having an internal experience as you read this, just as I had one while I was writing it. And I think my internal theory of mind is good enough that my guess about your experience is reasonably accurate (my guess: feeling calm, interested, and maybe a little bored, hungry, or tired -- a description that probably covers two-thirds of people at any given time on any given day!). 

The problem is that my theory of mind feels so convincing that I can't turn it off when a piece of software starts to exhibit human-like behavior. That's not a big deal with football-playing robots. But as AI systems learn to act more like human beings, it's a tendency that could eventually steer me wrong. New York Times columnist Ross Douthat points out that most of our fiction about robots assumes they will have conscious, rational awareness like ours but not the full human emotional range -- Mr. Data's difficulty expressing emotion on Star Trek is typical. But contemporary AI systems are very convincing liars in describing their emotions, values, hopes, and dreams (if anything, logic is where they have some trouble!). In other words, science fiction prepared us for robotic systems that still couldn't pass the Turing test, while relatively low-tech AI mimicry now passes it quite consistently. Perhaps we will all need to learn to be a little less concerned about feelings and focus more concretely on what we can observe and understand, if we're going to succeed in this brave new world of emotionally manipulative cartoon owls.
