
Ethical Improvement in the New Year


Just after the first of the year is prime time for efforts to change our behavior, whether that's joining a gym, a "dry January" break from alcohol, or going on a diet. (See my previous post about New Year's resolutions for more health behavior examples). This year I'd like to consider ethical resolutions -- ways in which we try to change our behavior or upgrade our character to live more in line with our values. 

Improving ethical behavior has historically been seen as the work of philosophers or the church. But more recent psychological approaches have tried to explain morality using some of the same theories commonly used to understand health behaviors, based on Narrative constructs like self-efficacy, intentions, and beliefs. Gerd Gigerenzer suggests that an economic model of "satisficing" might explain moral behavior based on limited information and the desire to achieve good-enough rather than optimal results. Others have applied behaviorist principles through simulation exercises or gamification strategies to improve college students' moral reasoning. Still others have looked at whether mindfulness practices result in the development of ethical awareness. In line with Haidt's Moral Foundations Theory, some researchers have taken an Intuitive-mind approach to ethics by examining its links with unconscious perceptions and assumptions. And researchers with a more neuropsychological bent have studied moral enhancement using deep-brain stimulation (yes, really!). Overall, this field of research looks a lot like behavior-change research in other areas, with many competing approaches and contradictions.

In a series of studies, philosophers Eric Schwitzgebel and Joshua Rust have tested the efficacy of traditional reason-based approaches to moral development by examining the actual behaviors of philosophers who study ethics. Their studies show that ethicists are no more likely to return their library books on time than non-ethicist philosophers or faculty in other academic disciplines -- in fact, ethics books were somewhat more likely to be missing from the library shelves. Public voting records show that ethicists are no more likely to vote in U.S. elections. Ethics sessions at academic conferences have about the same rate of rude behavior (talking during the presentation, slamming doors, leaving behind cups or garbage) as other sessions, with the one exception that environmental ethicists were less likely to leave behind trash. Ethicists were no more likely than their peers to respond to student emails, and they were no more likely to complete a survey in return for a charitable donation being made on their behalf. On self-report measures, ethicists reported calling their mothers with about the same frequency as other faculty (the only group that called less frequently were those who said that calling or not calling was "morally neutral," and within that group the philosophers were the worst procrastinators). And ethicists who said that eating meat was at least somewhat morally bad still ate meat at the same rate as (and sometimes more than) their non-ethicist colleagues. Across all of these measures, then, the authors could find little evidence that ethicists -- people who specialize in the study of what humans "should" do -- have any greater success at being ethical in their own lives.

Schwitzgebel's book A Theory of Jerks interprets the discrepancy between ethicists' publicly expressed attitudes and their actual measured behaviors in the words of his 7-year-old son: "the kids who talk about being fair and sharing mostly just want you to be fair to them and share with them." In other words, it's a lot easier to say how other people should behave than it is to modify your own behavior! Followers of this blog will recognize this discrepancy as a classic example of an intention-behavior gap. We likely shouldn't judge the ethicists too harshly, because their hypocrisy is probably unintentional. The problem is that their stated beliefs are Narratives, while Intuitive-mind factors such as cues and habits actually control their day-to-day behaviors. (Haidt presents additional evidence of this discrepancy in his "moral dumbfounding" studies, where people intuitively felt that some behavior was wrong but, when pressed, could give no narrative reason for it.) In some cases, it appears that ethicists simply have more finely developed after-the-fact rationalization methods -- Schwitzgebel suggests a few, with names like the "happy coincidence defense" and the "most-you-can-do sweet spot" -- that help them to feel less bad about their failings. But they probably didn't go about their lives intending to eat meat or neglect their mothers. Those things "just happened" (due to the workings of their Intuitive minds), and their Narrative minds deployed well-honed tools of argument to justify the behaviors later on.

What can we do about this, then, if our goal is to live more ethical lives? It would be unfair to conclude that philosophy or religion cannot help us in this endeavor. But we do need to recognize the limits of narratives in controlling our behavior. They work best when first considered dispassionately, in the abstract, and then tentatively applied to various hypothetical scenarios, before being actively deployed in the rush of everyday life. Only when a narrative has become a well-developed habit of mind is it likely to crop up during a real-life ethical challenge, and even then there's a tendency for circumstances to take over. We can productively deploy other behavior-change tools, like social pressure or small rewards, to shape behaviors in the way that we would rationally like them to proceed. China's "social credit" system, for all its concerning aspects, is probably more likely to produce observable moral behavior than our Western model of self-determination. Relying on our rational principles alone is just going to set us up, like the ethicists, for hypocrisy.
