HOW’S this for a cynical view of science? “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
Scientific truth, according to this view, is established less by the noble use of reason than by the stubborn exertion of will. One hopes that the Nobel Prize-winning physicist Max Planck, the author of the quotation above, was writing in an unusually dark moment.
And yet a large body of psychological data supports Planck’s view: we humans quickly develop an irrational loyalty to our beliefs, and work hard to find evidence that supports those beliefs and to discredit, discount or avoid information that does not. In a classic psychology experiment, people for and against the death penalty were asked to evaluate the research designs of two studies of its deterrent effect on crime. One study showed that the death penalty was an effective deterrent; the other showed that it was not. Which of the two research designs the participants deemed the more scientifically valid depended mostly on whether the study supported their views on the death penalty.
In the laboratory, this is labeled confirmation bias; observed in the real world, it’s known as pigheadedness.
Scientists are not immune. In another experiment, psychologists were asked to review a paper submitted for journal publication in their field. They rated the paper’s methodology, data presentation and scientific contribution significantly more favorably when the paper happened to offer results consistent with their own theoretical stance. Identical research methods prompted a very different response in those whose scientific opinion was challenged.
This is a worry. Doesn’t the ideal of scientific reasoning call for pure, dispassionate curiosity? Doesn’t it positively shun the ego-driven desire to prevail over our critics and the prejudicial urge to support our social values (like opposition to the death penalty)?
Perhaps not. Some academics have recently suggested that a scientist’s pigheadedness and social prejudices can peacefully coexist with — and may even facilitate — the pursuit of scientific knowledge.
Let’s take pigheadedness first. In a much-discussed article this year in Behavioral and Brain Sciences, the cognitive scientists Hugo Mercier and Dan Sperber argue that our reasoning skills are really not as dismal as they seem. They don’t deny that irrationalities like the confirmation bias are common. Instead, they suggest that we stop thinking of the primary function of reasoning as being to improve knowledge and make better decisions. Reasoning, they claim, is for winning arguments. And an irrational tendency like pigheadedness can be quite an asset in an argumentative context. A engages with B and proposes X. B disagrees and counters with Y. Reverse roles, repeat as desired — and what in the old days we might have mistaken for an exercise in stubbornness turns out instead to be a highly efficient “division of cognitive labor,” with A specializing in the pros, B in the cons.
It’s salvation of a kind: our apparently irrational quirks start to make sense when we think of reasoning as serving the purpose of persuading others to accept our point of view. And by way of positive side effect, these heated social interactions, when they occur within a scientific community, can lead to the discovery of the truth.
And what about scientists’ prejudices? Clearly, social values should never count as evidence for or against a particular hypothesis — abhorrence of the death penalty does not count as data against its crime-deterrent effects. However, the philosopher of science Heather Douglas has argued that social values can safely play an indirect role in scientific reasoning. Consider: The greater we judge the social costs of a potential scientific error to be, the higher the standard of evidence we will demand. Professor A, for example, may be troubled by the thought of an incorrect discovery that current levels of a carcinogen in the water are safe, fearing the “discovery” will cost lives. But Professor B may be more anxious about the possibility of an erroneous conclusion that levels are unsafe, which would lead to public panic and expensive and unnecessary regulation.
Both professors may scrutinize a research paper with these different costs of error implicitly in mind. If the paper looked at cancer rates in rats, did the criteria it used to identify the presence of cancer favor over- or under-diagnosis? Did the paper assume a threshold of exposure below which there is no cause for concern, or did it assume that any level of exposure increases risk? Deciding which are the “better” criteria or the “better” background assumptions is not, Ms. Douglas argues, solely a scientific issue. It also depends on the social values you bring to bear on the research. So when Professor A concludes that a research study is excellent, while Professor B declares it seriously mistaken, it may be that neither is irrationally inflating or discounting the strength of the evidence; rather, each is tending to a different social concern.
Science often makes important contributions to debates that involve clashes of social values, like the protection of public health versus the protection of private industry from overregulation. Yet Ms. Douglas suggests that, with social values denied any legitimate role in scientific reasoning, “debates often dance around these issues, attempting to hide them behind debates about the interpretation of data.” Professors A and B are left with no other option but to conclude that the other is a stubborn, pigheaded excuse for a scientist.
For all its imperfections, science continues to be a stunning success. Yet maybe progress would be even faster and smoother if scientists would admit, and even embrace, their humanity.
A must-cite! Exactly what I have argued in my paper!