
Our Reactions to Robots Tell Us Something About Ourselves

Robot football players at a Valparaiso University College of Engineering event

I have been thinking lately about robots, which creates an interesting asymmetry: They almost certainly have not been thinking about me. Nevertheless, I often respond to robots as though they have thoughts about me, about their own goals, or about the world in which they exist. That tendency reveals something about human psychology, connected to our social minds: We are hard-wired to care what other people think about us, and we very easily extend that concern to robots.

Here's a recent article about how the language-learning app Duolingo, which features an owl-shaped avatar (a kind of robot), uses "emotional blackmail" to keep its users engaged (https://uxdesign.cc/20-days-of-emotional-blackmail-from-duolingo-4f566523e3c5). This bird-shaped bit of code tells users things like "you're scaring me!" and "I miss you" if they haven't logged in for a few days. Many people find the emotion-laden messages quite distressing, and they log back in to appease the owl -- which, remember, doesn't really feel anything at all. Duo the owl also has a range of happy and sad facial expressions, as do the other cartoony denizens of the app. Why should I care that Zari the artificial teenager is proud of me for completing my daily Spanish lesson? By any rational standard I should not, and yet I do. China has harnessed people's desire for social approval in perhaps more concerning ways, using social networks to shape behavior that the government considers to be in the people's best interest.
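To make the mechanic concrete, here is a minimal sketch of how an app might escalate its reminder messages with time away. Everything in it (the function name, the thresholds, the wording) is hypothetical and illustrative, not Duolingo's actual code:

# Hypothetical sketch of an escalating re-engagement nudge, loosely
# modeled on the Duolingo-style messages described above. The
# thresholds and message text are invented for illustration.

def reminder_message(days_inactive: int) -> str | None:
    """Pick an increasingly emotional nudge based on days since last login."""
    if days_inactive <= 0:
        return None  # user is active today; no nudge needed
    if days_inactive < 3:
        return "Keep your streak alive!"  # gentle practical prompt
    if days_inactive < 7:
        return "I miss you"  # emotional appeal
    return "You're scaring me!"  # maximum guilt

if __name__ == "__main__":
    for days in (0, 2, 5, 10):
        print(days, "->", reminder_message(days))

The escalation from practical reminder to emotional appeal is the whole trick: the code feels nothing, but the schedule of messages is tuned to exploit the fact that we do.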

The following video shows some physically embodied robots, which in terms of consciousness are no different from Duo the language-learning bird (that is to say, they have none), but which push our buttons in a different way:

There are three types of machines in robot football: One that hikes the ball, one that catches it in a net, and one that resembles a computer case on wheels and attempts to run into the net-wielding robot and make it fumble the ball. The very fact that these three types of robots exist makes me read some intentionality into their behavior: Don't the tank-like rectangular robots seem more aggressive to you? Don't the poor ball-carrying robots seem a bit beleaguered? And then, at another level, I notice my tendency to attribute personalities to the individual machines: Why is that net robot moving around at the start when the others are all standing still -- does it have a rebellious attitude? Why is the tank-like robot spinning in circles in the middle of the floor -- is it confused? And that other net robot speeding across the room -- is it the star player, eager to demonstrate its skills? None of these attributions are true, of course. The people on the sidelines helping to direct the robots might have personalities, but the robots certainly don't. And even if the robots were fully autonomous, I think I would still be drawing conclusions about their internal mental lives.

Our reactions to robots are a function of the social-connectedness goals held by our Intuitive minds. Earlier humans lived in small groups whose members relied on one another for survival, which made the pressure to conform intense. In medieval England, for example, there was no police force, yet there was relatively little crime. A person who transgressed against their neighbors could be subjected to punishments that seem very harsh today, ranging from public humiliation in the village stocks up to exile in the forest, which, without the resources of the village, was tantamount to execution. Our Intuitive minds care so deeply what others think of us because they evolved in a context where humans who didn't have those instincts didn't survive.

Our tendency to read intention into the actions of robots is an example of what psychologists call "theory of mind" -- our ability to guess what's going on inside the head of another person. As far as we know, this is a uniquely human ability: There's some evidence that AI systems can now fake it, as they mimic other aspects of human language, but I'm very doubtful that an AI has an actual sense of what it's like to be a person (that's because I'm also very doubtful that AI is, or could ever be, conscious). Of course, we can never know for sure about another person's internal experience -- they might not even have any, in which case they would be a biological robot (also called a "philosophical zombie"). But I'm pretty sure that you are having an internal experience as you read this, just as I had one while I was writing it. And I think my theory of mind is good enough that my guess about your experience is reasonably accurate (my guess: feeling calm, interested, and maybe a little bored, hungry, or tired -- a description that probably covers two-thirds of people at any given time on any given day!).

The problem is that my theory of mind feels so convincing that I can't turn it off when a piece of software starts to exhibit human-like behavior. That's not a big deal with football-playing robots, but as AI systems learn to act more like human beings, it's a tendency that could eventually steer me wrong. New York Times columnist Ross Douthat points out that most of our fiction about robots assumes they will have conscious, rational awareness like humans without having the emotional range of humans -- Mr. Data's difficulty expressing emotion on Star Trek is typical. But contemporary AI systems are very convincing liars when describing their emotions, values, hopes, and dreams (if anything, logic is where they have some trouble!). In other words, science fiction prepared us for robots whose missing emotions would keep them from passing the Turing test, yet relatively low-tech AI mimicry now passes that test quite consistently. Perhaps we will all need to learn to be a little less concerned about feelings and focus more concretely on what we can observe and understand, if we're going to succeed in this brave new world of emotionally manipulative cartoon owls.
