Our Reactions to Robots Tell Us Something About Ourselves

Robot football players at a Valparaiso University College of Engineering event

I have been thinking lately about robots, which creates an interesting asymmetry: They almost certainly have not been thinking about me. Nevertheless, I find that I often respond to robots as though they have thoughts about me, or about their own personal goals, or about the world in which they exist. That tendency suggests an interesting aspect of human psychology, connected to our social minds. We are hard-wired to care what other people think about us, and we very easily extend that concern to robots.

Here's a recent article about how the language-learning app Duolingo, which features an owl-shaped avatar (a kind of robot), uses "emotional blackmail" to keep users engaged: https://uxdesign.cc/20-days-of-emotional-blackmail-from-duolingo-4f566523e3c5 This bird-shaped bit of code tells users things like "you're scaring me!" and "I miss you" if they haven't logged in for a few days. Many people find the emotion-laden messages quite distressing, and log back in to appease the owl -- which, remember, doesn't really feel anything at all. Duo the owl also has a range of happy and sad facial expressions, as do the other cartoony denizens of the app. Why should I care that Zari the artificial teenager is proud of me for completing my daily Spanish lesson? By any rational standard I should not, and yet I do. China has harnessed people's desire for social approval in perhaps more concerning ways, using social networks to shape behavior that the government considers to be in the people's best interest.

The following video shows some physically embodied robots, which in terms of consciousness are no different from Duo the language-learning bird (that is to say, they have none), but which push our buttons in a different way:

There are three types of machines in robot football: one that hikes the ball, one that catches it in a net, and a third -- resembling a computer case on wheels -- that attempts to run into the net-wielding robot and make it fumble the ball. The very fact that these three types of robots exist makes me read some intentionality into their behavior: Don't the tank-like rectangular robots seem more aggressive to you? Don't the poor ball-carrying robots seem a bit beleaguered? And then, at another level, I notice my tendency to attribute personalities to the individual machines: Why is that net robot moving around at the start when the others are all standing still -- does it have a rebellious attitude? Why is the tank-like robot spinning in circles in the middle of the floor -- is it confused? And that other net robot speeding across the room -- is it the star player, eager to demonstrate its skills? None of these attributions is true, of course. The people on the sidelines helping to direct the robots might have personalities, but the robots certainly don't. And even if the robots were fully autonomous, I think I would still be drawing conclusions about their internal mental lives.

Our reactions to robots are a function of the social-connectedness goals held by our Intuitive minds. Early humans lived in small groups whose members relied on one another for survival, which made the pressure for conformity intense. In medieval England, for example, there was no police force, yet there was also minimal crime. A person who transgressed against their neighbors would be subjected to punishments that today seem very harsh, ranging from public humiliation in the village stocks up to exile in the forest, which without the resources of the village was tantamount to a death sentence. The reason our Intuitive minds care so very deeply what others think of us is that they evolved in a context where humans who lacked those instincts didn't survive.

Our tendency to read intention into the actions of robots is an example of what psychologists call "theory of mind" -- our ability to guess what's going on inside the head of another person. This is a uniquely human ability: There's some evidence that AI systems can now fake it, as they mimic other aspects of human language, but I'm very doubtful that an AI has an actual sense of what it's like to be a person (that's because I'm also very doubtful that AI is, or could ever be, conscious). Of course, we can never know for sure about another person's internal experience -- they might not even have any, in which case they would be a biological robot (also called a "philosophical zombie"). But I'm pretty sure that you are having an internal experience as you read this, just as I had one while I was writing it. And I think my internal theory of mind is good enough that my guess about your experience is reasonably accurate (my guess: feeling calm, interested, and maybe a little bored, hungry, or tired -- a description that probably covers two-thirds of people at any given time on any given day!). 

The problem is that my theory of mind feels so convincing that I can't turn it off when a piece of software starts to exhibit human-like behavior. That's not a big deal with football-playing robots. But as AI systems learn to act more like human beings, it's a tendency that could steer me wrong eventually. New York Times columnist Ross Douthat points out that most of our fiction about robots assumes they will have human-like conscious, rational awareness without the human range of emotion -- Mr. Data's difficulty expressing emotion on Star Trek is typical. But contemporary AI systems are very convincing liars in describing their emotions, values, hopes, and dreams (if anything, logic is where they have some trouble!). In other words, science fiction prepared us for robotic systems that would fail the Turing test on emotional grounds, but relatively low-tech AI mimicry now passes that test quite consistently. Perhaps we will all need to learn to be a little less concerned about feelings and focus more concretely on what we can observe and understand, if we're going to succeed in this brave new world of emotionally manipulative cartoon owls.
