
How Should We Now Talk About Vaccines?


Vaccines have emerged as a hot topic for federal policy-making this year -- in particular, the positions and beliefs that last year would have been called "vaccine hesitancy" -- and that's the topic I am revisiting this week. Accordingly, this seems like a good time to state that I write this blog as an individual scholar and a concerned American citizen, not directly as part of my faculty assignment at the University of Colorado. (Nobody in academia will be surprised that what actually counts for my annual review is peer-reviewed journal articles, and that public-facing communication is viewed as a potential distraction!) I greatly appreciate the university as an environment for my research, and the Laws of the Regents of the University of Colorado that support free speech and academic freedom. But it's also important to note that my writing here is separate from my teaching and my funded research programs, and in no way reflects any official position of the university.

I wrote a blog post in 2022 about parents' hesitancy to give their children the COVID-19 vaccine. Much of that post still seems correct to me: People are more hesitant to vaccinate when (a) they feel pressured by authority figures; (b) the risks of a vaccine seem more salient than its benefits; (c) there is a high level of uncertainty around a newly developed treatment, and in the case of COVID-19 also around a continually evolving disease; and (d) they have weak personal relationships with their health care providers. But another piece of the vaccine decision-making equation, patients' health-related knowledge, has become more contentious. Here's the sticky part from my previous post:

Parents with more medical knowledge are generally more accepting of vaccines, but improving parental knowledge is not as simple as just providing information. Parents’ attitudes can be affected not only by valid scientific information, but also by misinformation or disinformation. Unfortunately, up to 80% of Americans say that social media is an important source of knowledge about vaccines, and social media platforms may be uniquely capable of disseminating false stories. Some misinformation comes from vendors seeking to profit from unvalidated treatments, while active disinformation might come from speakers seeking political action. The World Health Organization is fighting this 'infodemic' with projects such as the Information Network for Epidemics (EPI-WIN), which responds to questions posted on social media platforms with science-based answers.

One of the most common rebuttals by people who don't want to vaccinate is some variation of "quit saying that I'm stupid." And indeed, the medical establishment's response to people who didn't want to take COVID-19 vaccines was often some variation of that sentiment. The court of public opinion could be even harsher, particularly via the comments on social media. And many organizations required vaccination if people wanted to keep their jobs or keep sending their children to school, which interacted with people's natural hesitation about being forced to do things by authorities. Finally, the significant political thread in the above text might have felt blaming to people who for one reason or another had given their allegiance to a particular candidate, perhaps for reasons that had to do with the economy rather than with vaccines. I think it's important to revisit our language around vaccines, so that we don't alienate people who have different views. Science benefits most when we keep open lines of communication about all possibilities, rather than refusing to hear some viewpoints.

Although communication experts differentiate between misinformation (unintentional falsehoods) and disinformation (deliberately misleading information), both of these labels convey a value judgment. People who believe what they are hearing simply see it as information. I am trying the label “empirically unsupported” to give due weight to patients’ beliefs, while also acknowledging that they are not backed by the type of scientific evidence that most professionals have been trained to demand. Empirically unsupported beliefs are not necessarily false, and in fact may feel very true to an individual patient's experience. But scientists who expect a certain standard of proof may say that these beliefs are not "true" because they don't meet that standard. In some cases the problem is that the belief is actively contradicted by peer-reviewed scientific studies, in which case we need to ask whether there might reasonably be exceptions, or details not captured by the available studies, or some other reason a person might hold an empirically unsupported belief. In other cases the belief just hasn't been adequately tested, which was a particular issue in an emerging pandemic like COVID-19.

I have written before about the "three-legged stool" of evidence-based practice, which depends not only on expert judgment and scientific evidence, but also on patient preferences and choices. During the COVID-19 pandemic, when experts were arguing for a range of positions and the science was moving very fast, many of us gave little thought to the patient-preference part of the stool.

Empirically unsupported concerns about potential harms of the COVID-19 vaccine were in fact the single most common reason that parents gave for not vaccinating their children. One reason people believe things that are empirically unsupported might be that they only have access to certain information – e.g., if they can’t get in to see their health care provider, if they aren't trained to read the scientific articles that are the main source of current information, or if they only get their news from certain outlets. Alternatively, people may not trust the sources of information that are available to them; one common critique of Dr. Fauci was that he promoted vaccines for financial gain by way of pharmaceutical investments (an allegation that fact-checking websites did not find credible). Or people might have different standards for what counts as “evidence” than health care professionals do, relying more on their own experiences and those of family and friends, or on what they hear from political or religious communities that they belong to. I wrote early in the COVID pandemic about how people's values and community affiliations shaped their beliefs about the disease, using Jonathan Haidt's moral taxonomy of people's Intuitive-level goals in society.

Finally, people may be legitimately frustrated by the changing state of scientific knowledge, which leads to different recommendations over time. In fact, science is never “certain”; at best, the evidence is overwhelmingly supportive. Medical professionals with scientific training were not immune from spreading empirically unsupported beliefs during the COVID-19 pandemic. Unfortunately, doubts about the newly developed COVID-19 vaccines led some people to have doubts about vaccines in general that may not be warranted. A CDC post-pandemic review found that the agency had significantly mishandled its public messaging, overstating the science in some areas and resisting it in others (e.g., downplaying the evidence that SARS-CoV-2 was airborne, which was clear to scientists well before CDC's public health guidelines were changed to account for that fact). Ironically, CDC has been a leading funder of research on how to deliver public health messages effectively; the problem during the early pandemic seems to have been that events were moving very fast and many people with different roles in the administration were giving conflicting messages, so CDC did not follow the best communication practices that the agency itself had established. Similarly, many in the medical establishment denied the existence of Long COVID as a medical syndrome until a "citizen scientist" initiative began to document consistent symptoms and other long-term health problems after SARS-CoV-2 infection. These scientific messaging failures further fueled a general distrust of experts that had already been growing in American society for some time, and that was found to predict people's uptake of the COVID-19 vaccine.

Empirically unsupported concerns are often surprisingly “sticky,” hanging around long after studies have been conducted to address the underlying questions. New data are often not enough, because people have cognitive biases like availability bias (information that comes to mind easily, for example because it is repeated often, seems more believable) and selective attention (people pay more attention to information that agrees with their presuppositions, and tend to discount information that conflicts with what they already believe). The endowment effect from Prospect Theory may again play a role – once we have decided, there is a perceived cost to changing our position.

Much of the scientific controversy around COVID-19 can be attributed to a "fog of war" condition that prevailed during the pandemic, as people struggled to adapt and death tolls mounted. Let's talk about the elephant in the room, though, which is the fact that certain sources appear to have spread empirically unsupported ideas for personal or political gain. During the early pandemic, a large number of empirically unsupported statements about SARS-CoV-2 appeared in posts that also emphasized political themes connected to then-President Trump. Current NIH Director Dr. Jay Bhattacharya co-wrote the "Great Barrington Declaration" in 2020, calling for an end to social distancing measures in order to build herd immunity by letting more people be exposed to SARS-CoV-2. And by far the most common source of empirically unsupported information during that time period was current Health & Human Services Secretary Robert F. Kennedy Jr. Those same folks are now in charge of the agencies that make official government recommendations, which is scrambling the usual process by which parents make decisions about their children's vaccination schedules.

One potential strategy for dealing with misinformation is simply to ignore it, which is often recommended because repeating false information may have the inadvertent effect of reinforcing it. But if a theory has already gained traction, then strategies to debunk it involve consistency, vividness of the new information, the authority of the people providing countervailing information, the similarity of those people to oneself, and other factors that are typically involved in establishing the credibility of ideas. Or it might be possible to reframe the issue so that it appears to be a new question based on new data – e.g., the recommendation to use high-filtration masks rather than cloth masks – which might persuade people to give it a fresh look. And one recent study showed that telling people how others benefit from their adoption of empirically unsupported beliefs can increase their willingness to consider alternative information. (The finding mirrors one from adolescent smoking-cessation campaigns, where the key message "cigarette companies are lying to you" was much more persuasive than information about smoking's health risks.) A focus on values (consistent with Haidt's theory) is probably important. And at a meta level, it may help to teach people the "four moves and a habit" to think critically about any online information they come across.

But step one is probably just to stop using loaded terms like "misinformation" and "conspiracy theories," to listen to the underlying concerns behind empirically unsupported beliefs, and to pay more attention to the patient-preference component of evidence-based practice. As scientists, we don't know everything, and a more realistic acknowledgment of that fact might help to bridge the gap in current conversations about vaccines.
