Why facts don’t change minds
(Mostly not my writing: an article excerpt and an interview, full of interesting evidence about our structural intransigence)
Hugo Mercier and Dan Sperber are the authors of “The Enigma of Reason,” a new book from Harvard University Press. Their arguments about human reasoning have potentially profound implications for how we understand the ways human beings think and argue, and for the social sciences.
“Even after the evidence for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from…
If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hyper-sociability.”
Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own…”
“…Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.
“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.
In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.”
Elizabeth Kolbert – Why Facts Don’t Change Our Minds – The New Yorker February 27, 2017, issue.
Henry Farrell: So, many people think of reasoning as a faculty for achieving better knowledge and making better decisions. You disagree. Why is the standard account of reasoning implausible?
HM: By and large, reasoning doesn’t fulfill this function very well. In many experiments — and countless real-life examples — reasoning does not drive people towards better knowledge or decisions. If people start out with the wrong intuitive idea, and then start reasoning, it rarely does them any good. They’re stuck on their initial wrong idea.
What makes reasoning fail is even more damning. Reasoning fails because it has a so-called ‘myside bias.’ This is what psychologists often call confirmation bias — people mostly reason to find arguments showing that whatever they were already thinking is a good idea. Given this bias, it’s not surprising that people typically get stuck on their initial idea.
More or less everybody takes the existence of the myside bias for granted. Few readers will be surprised that it exists. And yet it should be deeply puzzling. Objectively, a reasoning mechanism that aims at sounder knowledge and better decisions should focus on reasons why we might be wrong and reasons why other options than our initial hunch might be correct. Such a mechanism should also critically evaluate whether the reasons supporting our initial hunch are strong. But reasoning does the opposite. It mostly looks for reasons that support our initial hunches and deems even weak, superficial reasons to be sufficient.
HF: So why did the capacity to reason evolve among human beings?
HM: We suggest that the capacity to reason evolved because it serves two main functions:
The first is to help people solve disagreements. Compared to other primates, humans cooperate a lot, and they evolved abilities to communicate in order to make cooperation more efficient. However, communication is a risky business: There’s always a risk that one might be lied to, manipulated or cheated. Hence, we carefully evaluate what people tell us. Indeed, we even tend to be overly cautious, rejecting messages that don’t fit well with our preconceptions.
Reasoning would have evolved in part to help us overcome these limitations and to make communication more powerful. Thanks to reasoning, we can try to convince others of things they would never have accepted purely on trust. And those who receive the arguments benefit by being given a much better way of deciding whether they should change their mind or not.
The second function is related but still distinct: It is to exchange justifications. Another consequence of human cooperativeness is that we care a lot about whether other people are competent and moral: We constantly evaluate others to see who would make the best cooperators. Unfortunately, evaluating others is tricky, since it can be very difficult to understand why people do the things they do. If you see your colleague George being rude to a waiter, do you infer that he’s generally rude, or that the waiter somehow deserved his treatment? In this situation, you have an interest in assessing George accurately and George has an interest in being seen positively. If George can’t explain his behavior, it will be very difficult for you to know how to interpret it, and you might be inclined to be uncharitable. But if George can give you a good reason to explain his rudeness, then you’re both better off: You judge him more accurately, and he maintains his reputation.
If we couldn’t attempt to justify our behavior to others and convince them when they disagree with us, our social lives would be immensely poorer and more complicated.
HF: So, if reasoning is mostly about finding arguments for whatever we were thinking in the first place, how can it be useful?
HM: Because this is only one aspect of reasoning: the production of reasons and arguments. Reasoning has another aspect, which comes into play when we evaluate other people’s arguments. When we do this, we are, on the whole, both objective and demanding. We are demanding in that we require the arguments to be strong before changing our minds — this makes obvious sense. But we are also objective: If we encounter a good argument that challenges our beliefs, we will take it into account. In most cases, we will change our mind — even if only by a little.
This might come as a surprise to those who have heard of phenomena like the “backfire effect,” under which people react to contrary arguments by becoming even more entrenched in their views. In fact, backfire effects seem to be extremely rare. In most cases, people change their minds — sometimes a little bit, sometimes completely — when exposed to challenging but strong arguments.
When we consider these two aspects of reasoning together, it is obvious why it is useful. Reasoning allows people who disagree to exchange arguments with each other, so they are in a better position to figure out who’s right. Thanks to reasoning, both those who offer arguments (and, hence, are more likely to get their message across) — and those who receive arguments (and, hence, are more likely to change their mind for the better) — stand to win. Without reasoning, disagreements would be immensely harder to resolve.
HF: Despite reason’s flaws, your book argues that it works “in the right interactive context.” How can group interaction harness reason for beneficial ends?
HM: Reasoning should work best when a small number of people (fewer than six, say) who disagree about a particular point but share some overarching goal engage in discussion.
Group size matters for two reasons. Larger groups are less conducive to efficient argumentation because the normal back and forth of discussion breaks down when you have more than about five people talking together. You’ll see that at dinner parties: Four or five people can have a conversation, but larger groups either split into smaller ones, or end up in a succession of short “speeches.” On the other hand, smaller groups will necessarily encompass fewer ideas and points of view, lowering both the odds of disagreement and the richness of the discussion.
Disagreement is crucial because if people all agree and yet exchange arguments on a given topic, arguments supporting the consensus will pile up, and the group members are likely to become even more entrenched in their acceptance of the consensus view.
Finally, there has to be some commonality of interest among the group members. You’re not going to convince your fellow poker player to fold when she has a straight flush. However, it’s often relatively easy to find such a commonality of interest. For example, we all stand to gain from having more accurate beliefs.
This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content.
Henry Farrell, washingtonpost.com, The Washington Post