One of the most common rebuttals by people who don't want to vaccinate is some variation of "quit saying that I'm stupid." And indeed, the medical establishment's response to people who didn't want to take COVID-19 vaccines was often some variation of that sentiment. The court of public opinion could be even harsher, particularly in the comments on social media. And many organizations required vaccination if people wanted to keep their jobs or keep sending their children to school, which interacted with people's natural hesitation about being forced to do things by authorities. Finally, the significant political thread in the above text might have felt blaming to people who for one reason or another had given their allegiance to a particular candidate, perhaps for reasons that had to do with the economy rather than with vaccines. I think it's important to revisit our language around vaccines, so that we don't alienate people who have different views. Science benefits most when we keep open lines of communication about all possibilities, rather than refusing to hear some viewpoints.
Although communication experts differentiate between misinformation (unintentional falsehoods) and disinformation (deliberately misleading information), both of these labels convey a value judgment. People who believe what they are hearing simply see it as information. I am trying the label “empirically unsupported” to give due weight to patients’ beliefs, while also acknowledging that they are not backed by the type of scientific evidence that most professionals have been trained to demand. Empirically unsupported beliefs are not necessarily false, and in fact may feel very true to an individual patient's experience. But scientists who expect a certain standard of proof may say that these beliefs are not "true" because they don't meet that standard. In some cases the problem is that the belief is actively contradicted by peer-reviewed scientific studies, in which case we need to ask whether there might reasonably be exceptions, or details not captured by the available studies, or some other reason a person might hold an empirically unsupported belief. In other cases the belief just hasn't been adequately tested, which was a particular issue in an emerging pandemic like COVID-19. I have written before about the "three-legged stool" of evidence-based practice, which depends not only on expert judgment and scientific evidence, but also on patient preferences and choices. During the COVID-19 pandemic, when experts were arguing for a range of positions and the science was moving very fast, many of us gave little thought to the patient-preference part of the stool.
Empirically unsupported concerns about potential harms of the COVID-19 vaccine were in fact the single most common reason that parents gave for not vaccinating their children. One reason people believe things that are empirically unsupported might be that they only have access to certain information – e.g., if they can't get in to see their health care provider, if they aren't trained to read the scientific articles that are the main source of current information, or if they only get their news from certain outlets. Alternately, people may not trust the sources of information that are available to them; one common critique of Dr. Fauci was that he promoted vaccines for financial gain by way of pharmaceutical investments (an allegation that fact-checking websites did not find credible). Or people might have different standards for what counts as "evidence" than health care professionals do, relying more on their own experiences and those of family and friends, or on what they hear from political or religious communities that they belong to. I wrote early in the COVID pandemic about how people's values and community affiliations shaped their beliefs about the disease, using Jonathan Haidt's taxonomy of people's intuitive-level moral goals in society.
Finally, people may be legitimately frustrated by the changing state of scientific knowledge, which leads to different recommendations over time. In fact, science is never "certain"; at best the evidence is overwhelmingly supportive. Medical professionals with scientific training were not immune to spreading empirically unsupported beliefs during the COVID-19 pandemic. Unfortunately, doubts about the newly developed COVID-19 vaccines led some people to doubt vaccines in general, which may not be warranted. A CDC post-pandemic review found that the agency had significantly mishandled its public messaging, overstating the science in some areas and resisting it in others (e.g., downplaying the evidence that SARS-CoV-2 was airborne, which was clear to scientists well before CDC's public health guidelines were changed to account for that fact). Ironically, CDC has been a leading funder of research on how to deliver public health messages effectively; the problem during the early pandemic seems to have been that events were moving very fast and many people with different roles in the administration were giving conflicting messages, so CDC did not follow the best communication practices that the agency itself had established. Similarly, many in the medical establishment denied the existence of Long COVID as a medical syndrome until a "citizen scientist" initiative began to document consistent symptoms and other long-term health problems after SARS-CoV-2 infection. These scientific messaging failures further fueled a general distrust of experts that had already been growing in American society for some time, and that was found to predict people's uptake of the COVID-19 vaccine.
Empirically unsupported concerns are often surprisingly “sticky,” hanging around long after studies have been conducted to address the underlying questions. New data are often not enough because people have cognitive biases like availability bias (information that’s repeated more often seems more believable) and selective attention (people pay more attention to information that agrees with their presuppositions, and tend to discount information that is inconsistent or surprising based on what they already believe). The endowment effect from Prospect Theory may again play a role – once we have decided, there is a perceived cost to changing our position.
Much of the scientific controversy around COVID-19 can be attributed to the "fog of war" that prevailed during the pandemic, as people struggled to adapt and death tolls mounted. Let's talk about the elephant in the room, though, which is that certain sources appear to have spread empirically unsupported ideas for personal or political gain. During the early pandemic, a large number of empirically unsupported statements about SARS-CoV-2 appeared in posts that also emphasized political themes connected to then-President Trump. Current NIH Director Dr. Jay Bhattacharya co-authored the "Great Barrington Declaration" in 2020, calling for an end to social distancing measures in order to build herd immunity by letting more people be exposed to SARS-CoV-2. And by far the most common source of empirically unsupported information during that period was current Health & Human Services Secretary Robert F. Kennedy Jr. Those same folks are now in charge of the agencies that make official government recommendations, which is scrambling the usual process by which parents make decisions about their children's vaccination schedules.
One potential strategy for dealing with misinformation is simply to ignore it, which is often recommended because repeating false information can have the inadvertent effect of reinforcing it. But if a theory has already gained traction, then strategies to debunk it involve consistency, the vividness of the new information, the authority of the people providing countervailing information, the similarity of those people to oneself, and other factors typically involved in establishing the credibility of ideas. Or it might be possible to reframe the issue so that it appears to be a new question based on new data – e.g., the recommendation to use high-filtration masks rather than cloth masks – which might persuade people to give it a fresh look. One recent study showed that explaining how certain sources benefit when others adopt empirically unsupported beliefs can increase people's willingness to consider alternative information. (The finding mirrors one from adolescent smoking-cessation campaigns, where the key message "cigarette companies are lying to you" was much more persuasive than information about smoking's health risks.) A focus on values (consistent with Haidt's theory) is probably important. And at a meta level, it may help to teach people the "four moves and a habit" for thinking critically about any online information they come across.
But step one is probably just to stop using loaded terms like "misinformation" and "conspiracy theories," to listen to the underlying concerns behind empirically unsupported beliefs, and to pay more attention to the patient-preference component of evidence-based practice. As scientists, we don't know everything, and a more realistic acknowledgment of that fact might help to bridge the gap in current conversations about vaccines.
