Daniel Kahneman and Amos Tversky's behavioral economics work led to a 2002 Nobel Prize based on the fundamental idea that people's decision-making is not entirely rational. Two Minds Theory builds on and expands this notion by suggesting (a) that our actual behaviors are produced by the intuitive mind rather than the narrative mind, and (b) that our narratives themselves can be flawed, shaped by factors like moral judgments or a strong desire for a particular outcome. In one sense this doesn't seem surprising, but it leads to a fundamental question about human nature: If we can't trust reason to give us the right answers, how can we actually know anything?
The classical distinction between reason and emotion suggested that people should be guided by thoughts and not by feelings; indeed, much of our legal system is based on this distinction. Economics dating back at least as far as the utilitarianism of Jeremy Bentham and John Stuart Mill assumed that people were rational actors who would make decisions in their own self-interest; Kahneman's Nobel Prize was awarded for upending this assumption. In philosophy, premises based on reason have been given special privilege; this practice dates back to the skeptical approach of René Descartes and the argument that one cannot be sure of anything other than the existence of one's own thinking (cogito ergo sum). Going back even further, Aristotle defined human nature in terms of the "rational principle" that seems to be unique to our way of thinking. His teacher Plato went a step further, arguing that reason gives us access to a "world of forms" that is even more true, universal, and enduring than the world we can directly observe with our senses. Medieval scholastic thought used the idea of humans as "rational animals" to argue that humans have special status among living things, marrying this with the idea from the Book of Genesis (1:26) that humans are divinely appointed to have dominion over the earth and all its creatures. Reason is still taken by many people as prima facie evidence that humans are special and distinct from other animals, and even among people who might reject that sense of specialness there remains a sense that one can arrive at right living through the diligent exercise of logic and reason.
If decision-making that appears to be "rational" is affected by factors that should not theoretically affect one's decisions, what does this mean for our ability to know anything at all? C. S. Lewis makes the following argument for the primacy of reason in our attempts to know the world: When someone questions your beliefs, he says, "you must ask them whether any reasoning is valid or not. If they say no, then their own doctrines, being reached by reasoning, fall to the ground. If they say yes, then they will have to examine your arguments and refute them on the merits" (Pilgrim's Regress, p. 62).
Lewis, writing in 1933, was arguing mainly with Freud's conception of the unconscious, which was among the first modern theories to point out the failings of human reason. Lewis's fellow Oxford professors in the ordinary language school of philosophy went so far as to argue that common sense should be the arbiter of all disputes, based on the idea that different people using reason independently would always come to basically the same conclusions. In the legal profession, this comes to us as the "reasonable man standard" that asks what a typical, cautious, prudent person might think in a particular situation. If we don't believe that reason is capable of producing truth, these traditional ways of resolving arguments no longer seem open to us. Even the logical-positivist assumptions underlying most empirical science rely on the idea that any well-informed scientist will come to the same interpretation when presented with the same empirical data; in this view the arbiter of truth is supposed to be one's direct observation of the world, but that view ignores the possibility that people will draw different conclusions about what those observations mean.
In our own time we are seeing the ultimate failure of logical positivism, as groups of people insist not only on their own opinions but also on their own carefully curated sets of facts. This would seem to be the inevitable result of past findings that undercut human reason as the fundamental basis for scientific thinking and knowledge about the world. Without reason we seem to be left in a zone of relativism, where anyone can label anyone else's views as "fake news" or else (in one of the most popular rhetorical moves of our current historical moment) say that people only think what they do because of who they are -- e.g., a liberal, a Republican, a minority group member, or a military veteran. Reason provides a defense against ad hominem attacks of this kind: Arguing that "you say that Black Lives Matter only because you are a Democrat" is analogous to arguing that "you say that two plus two equals four only because you are a mathematician." (The mathematics example is again from Lewis.) I hope we can all still agree that the second of these statements is ludicrous, but the first may no longer seem that way to many people, even though the two are fundamentally the same type of argument. Traditionally, such arguments could be handled by an appeal to logic and reason. Once we have dethroned reason -- once we do not think that our own minds can be trusted to produce accurate and consistent results -- how can we counter such arguments?
Discovering that reason is fallible doesn't necessarily leave us adrift in a world of relativism. First, Kahneman provides evidence that Narrative-system thinking is still more often accurate than Intuitive-system thinking when we take the time to think things through carefully. And Narrative-system thinking can be further augmented by adopting formal rules like "ad hominem arguments aren't admissible" that rule out the most common strategies for undercutting Narrative logic. Of course, we all have to agree to play by the rules for this to work, which is an argument in favor of following certain structures and protocols in our public discourse. I have argued elsewhere that the great success of science stems from the fact that all members of the scientific community have agreed to play by the same rules in determining what evidence is and is not admissible.
Second, we can become more aware of the places where Narrative-system logic tends to break down, as in Haidt's examples where people conclude that something is "just wrong" on moral grounds even though they can't articulate plausible logical grounds for saying so. We can commit to letting other people review and challenge our arguments, because they may be better able to see the flaws in our thinking than we are. (Peer review is another of the great strengths of contemporary science.) We can commit not to draw conclusions until we are ourselves convinced -- Lewis argues that premature conclusions are a major source of errors in thought, and that we need to be more comfortable with uncertainty: "The people who live [in the city of Lewis's allegorical tale] have to give an opinion once a week or once a day, or else Mr. Mammon would soon cut off all their food. But out here in the country you can walk all day and all the next day with an unanswered question in your head; you need never speak until you have made up your mind" (Pilgrim's Regress, p. 61). Not drawing conclusions until we have some evidence (and perhaps taking a walk in nature while we ruminate) is an excellent practice for avoiding errors in thinking. And we can adopt disciplines like routinely questioning our assumptions and seeking out contradictory information, which are not intuitively appealing but which we know are likely to help us catch some of the failures of Narrative thought.
Finally, we can take advantage of Intuitive-level processes that help us arrive at the truth. Sometimes we might come to a logical-seeming conclusion that nevertheless "feels wrong"; rather than trying to force the issue, we might take this as a prompt to continue our search for more information or for flaws in our reasoning. Formal settings like the courtroom or the academic lecture hall can seem stifling and exclusionary, but they also serve a purpose in making us take things seriously; former U.S. Attorney Preet Bharara suggests that the pursuit of legal justice is not purely an exercise in reason, and that some of these rituals and trappings of authority are beneficial in avoiding false conclusions.
Acknowledging that reason isn't infallible is not the same thing as abandoning all hope of finding out the truth. It does mean that we need to be more humble than past generations that thought reason conferred on them special status among all the animals, or that rational people had better answers than those who paid attention to things like intuition and emotion. (In fact, pure reason that ignores all emotion has its own drawbacks as a strategy for making important decisions in life). A combination of strategies drawn from both the Narrative and Intuitive systems is most likely to generate helpful solutions to complex problems, and to provide us with a view of the world closest to objective truth.