
Two-Minds Strategies to Discover Truth in Science


A recent New York Times opinion piece titled "You Can Vote, But You Can't Choose What is True" highlighted the difference between opinion and empirical fact. This is a valuable reminder in the era of "fake news," when people choose what they want to believe and sometimes seem to pay more attention to the source of the news than to its content. But Professor Harari's column makes a mistake in arguing that we should leave chemistry to the chemists, physics to the physicists, and so on. In fact, we should be careful about over-idealizing science. When I wrote a blog post last year about the role of intuitive thinking in legal decision-making, several people wrote back to say, in effect, "that's all very well for lawyers, but fortunately scientific decisions are more rational." Unfortunately, this isn't true. Scientists' beliefs about the world are not necessarily any more valid than anyone else's, and scientists are just as prone to logical errors as anyone else. The great benefit of science is not its expertise, but its safeguards against believing a falsehood for too long. These safeguards, like those in the legal system, rely on both the Narrative and the Intuitive mind. Fortunately for all of us, the ultimate arbiter of truth in science is not experts, but the world around us.

The scientific method is all about testing ideas against reality. To learn something, we need to put forth the most audacious idea we can think of, and then use the most stringent methods available in an attempt to disprove it. This is a quintessentially Narrative process, following a careful algorithm for study design that allows us to rule out competing explanations such as the investigator's professional interest or the patient's desire to please the investigator. The best studies, for example, involve direct manipulation of one variable to observe its effect on another, a feature that allows us to rule out coincidence. They also randomly assign patients to groups, which helps to ensure that the groups are as similar as possible before the experimental manipulation. And large studies are the most convincing, because random errors tend to come out in the wash over the long run. Knowing all of these things in advance helps us to design a study that allows for strong conclusions. Formal logic is also used in drawing conclusions at the end of the study (e.g., "if A or B implies that C is false, and A is true, then C is false -- no need to test for B"). This too helps to guard against error, and it relies on the mathematical skills that are a hallmark of the Narrative mind.
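The points about randomization and sample size can be made concrete with a toy simulation (a sketch I've added for illustration; the trait, numbers, and function names are invented, not from any real study): randomly splitting patients into two groups always leaves some chance imbalance in a pre-existing trait like age, but that imbalance shrinks as the study grows, which is why random errors "come out in the wash" in large studies.

```python
import random
import statistics

def avg_imbalance(n_patients, reps=500, seed=0):
    """Simulate many randomized studies of a given size and return the
    average absolute difference in mean age between the two groups.

    Ages are drawn from a hypothetical population (mean 50, SD 15);
    random assignment is modeled by simply splitting the sample in half.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        ages = [rng.gauss(50, 15) for _ in range(n_patients)]
        half = n_patients // 2
        treatment, control = ages[:half], ages[half:]
        total += abs(statistics.mean(treatment) - statistics.mean(control))
    return total / reps

# Chance imbalance between randomized groups shrinks as studies grow:
for n in (20, 200, 2000):
    print(f"n={n}: average group imbalance = {avg_imbalance(n):.2f} years")
```

Run it and the average imbalance drops steadily with sample size: randomization never guarantees identical groups, but in large studies the leftover differences become too small to masquerade as a treatment effect.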

Science has a different set of problems, though, that arise after a study is complete. Once a scientist has poured time and resources into an investigation, there is a great temptation (I speak from experience here) to draw broad, sweeping conclusions about what the findings mean for life, the universe, and everything. "Conclusions that go beyond the data" are one of the most obvious mistakes a scientist can make. Fortunately we have a safeguard for this as well, which is called peer review. When scientists finish a study, they don't just speak their conclusions to the public from on high. Instead, they send the conclusions to a group of well-informed peer scientists who do their very best to tear the conclusions to shreds. Peers are often in the best position to see where standard methods weren't followed or logical errors might have been committed (again, these are Narrative minds at work). In recent years, peer review has started even earlier, with the requirement to register many studies on sites like clinicaltrials.gov before the first data are collected. Like pre-publication peer review, trial registration keeps scientists honest, because we must run a study in the way that we publicly declared we would run it. If we deviate from our pre-established protocol, someone is likely to notice. And ultimately, science proceeds by replication of results. We know a scientific finding is meaningful when it produces consistent results in the real world -- when, in B.F. Skinner's words, it allows us to predict and control the things happening around us. This is what I mean by saying that in science, reality is the arbiter of truth.

Now, one could say that scientific thinking so far is profoundly Narrative. But there is an important Intuitive-mind aspect to the peer review system. I would argue that, even more than the actual errors reviewers catch, the mere knowledge that one's work will be peer-reviewed is an important check that keeps scientists honest. Because I know my own study better than the peer reviewers do, I might be tempted to sweep a few unexpected findings under the rug, cherry-pick my measures, or (worst of all) falsify my results. In fact, some of these things have happened, as in the Wakefield autism studies. But the safeguards work: cherry-picking of measures by scientists, for instance, demonstrably declined after trial registration systems were implemented. Scientists may avoid research misconduct in part out of personal ethics, but we also know that if such malfeasance were discovered we would never work as scientists again! The expectation of peer review makes a scientist his or her own worst critic, because of the need to safeguard one's reputation in the field. This is very much an Intuitive-level process, one that works by keeping us up at night worrying about the integrity of our studies, not through pure logic and reason.

One more way in which science relies on the Intuitive mind is the generation of interesting hypotheses. Professional scientists are unfortunately good at designing safe studies that test relatively low-risk questions, where the answer is likely to be what we expect and there's little chance of grant funders saying that we wasted their money on a crazy idea. But that's not necessarily the best way for science to advance. For the past five years I have been the parent chair for a science fair at my kids' elementary school, and I can tell you that children are absolutely the best scientists in this regard: They pose interesting questions, they are unfettered by received knowledge, and sometimes they come up with fascinating results. I remember one third-grader's graph, for instance, that showed how much longer it took to get to school on snowy days, because people drove slower on the roads even though they left earlier, and there was a larger backup at the front door just before the bell. It makes sense, when you think of it; but I wouldn't have thought of it! Psychology researcher Paul Meehl said that the best scientists "think Yiddish, [but] write British." In other words, their thinking is nonlinear, interesting, and wise, the way someone's Jewish grandmother might approach a problem. Then they take those interesting thoughts and test their ideas in a formal, systematic, and observable way. This is the scientific process in a nutshell: the marriage of offbeat ideas (Intuitive) with careful processes and formal structures for verification (Narrative). Science can be slow, boring work (think about titrating solutions in your high school chemistry class). It therefore tends to attract people who love the process and have patience for the details. And all of that careful testing is an important safeguard against error. But without the intuitive flash at the start that sets scientists looking in the right places and investigating the right things, all of those details don't get us very far.

Thomas Kuhn's Structure of Scientific Revolutions is often cited as an example of why we shouldn't trust experts, because scientific paradigms tend to become entrenched and resistant to change. Once a scientific theory becomes "received knowledge," it becomes harder to question, because the same Intuitive processes that prevent fraud also make it harder to stick our necks out with a truly risky idea. But Kuhn's book also describes a process by which scientists ultimately do arrive at important new understandings. This happens not through gradual and incremental change, but through revolutionary "paradigm shifts" (the book is the source of this term) after which scientists come to understand the observable world in completely different ways. As shown in the diagram at the top of the page, scientific revolutions produce a stair-step pattern of growth: a major leap forward, followed by years of elaboration and maintenance of the new understanding. This goes on until anomalies in the accepted wisdom become so prominent that they can no longer be ignored, a crisis occurs, and a new paradigm emerges. Of seven major paradigm shifts in science, the last was the development of quantum physics around the year 1900. The 1800s, by contrast, saw three major revolutions, focused on thermodynamics, evolutionary biology, and electromagnetism. Far from being the arbiter of truth, today's science is probably wrong in important respects, and we are due for a major shift in understanding. Current levels of doubt in science may be part of the crisis that ushers in a change.

Despite its subjectivity and its tendency to defend outdated paradigms beyond their natural lifespan, science is still among the very best methods that we have for coming to the truth. Professor Harari's New York Times column is right that scientific knowledge is not opinion, even if it isn't completely rational either. The interplay of Narrative and Intuitive thinking helps to guard against errors: The slow and methodical Narrative approach helps us to root out common sources of falsehood that the Intuitive mind might overlook, while the fast and insightful Intuitive process keeps us honest and also draws attention to the things that we still don't understand. Science is valuable only insofar as it works. The ultimate value of science, therefore, is not expertise but truth, and the world is the arbiter of truth. As C.S. Lewis wrote, "there are a dozen views about everything until you know the answer. Then there's never more than one." Science uses the Intuitive and Narrative minds in combination to get us progressively closer to that one verifiable truth.
