Productive Disagreement among Motivated Reasoners
By Dan Hicks
In part I of this series, I discussed motivated reasoning, reasoning in which emotions or values play a significant role. I looked at the work of Dan Kahan, which suggests that motivated reasoning is pervasive. This, in turn, suggested the gloomy thought that bad reasoning is pervasive. But, in part II, I argued that motivated reasoning is not necessarily bad reasoning. To make this case, I looked at two models of motivated reasoning (or, values in science) from STS scholar Daniel Sarewitz and philosopher of science Heather Douglas.
At this point, another gloomy thought seems to threaten. Motivated reasoning might not be bad reasoning, but it can still lead to intractable disagreement. If I – based in part on my values – accept the hypothesis that humans are causing climate change, and you – based in part on your values – reject this hypothesis, then how are we supposed to move forward? Now we recognize the role that emotions and values are playing in the disagreement; but disagreements over emotions and values seem to be as intractable as anything. In this final post in this series, I’ll address this gloomy thought.
Disagreement – even intractable disagreement – is not necessarily a bad thing. Consider a simple scientific debate: A scientist says that some body of evidence supports her hypothesis. A critic points out, say, that there’s a crucial assumption linking the evidence and the hypothesis, and that the scientist hasn’t given any reason whatsoever to believe this assumption. The observers (other scientists) nod their heads, and ask the scientist why they should accept her assumption. The scientist conducts more research, and eventually produces an argument in favor of the assumption. The critic still isn’t satisfied, but the observers generally find the scientist’s argument convincing, and so they accept her hypothesis.[1]
Now, in this simple scenario, the disagreement was intractable: the scientist never convinced the critic. But the disagreement was also quite productive. Without the critic, neither the scientist nor the observers would have realized that the hypothesis depended on the crucial assumption. Thanks to the critic, the scientist and the observers have a better understanding of the relationship between the evidence and the hypothesis, a better understanding of what they’re accepting when they accept the hypothesis, and a better understanding of the ways in which the hypothesis could be undermined in the future.
Next, suppose that the critic was critical because (and only because) of motivated reasoning – his values conflicted with the hypothesis, and so he made a special effort to identify the crucial assumption. Then motivated reasoning made for good reasoning and a productive disagreement. Indeed, without his motivated reasoning, it seems, no one would have bothered to identify the crucial assumption.
But now suppose that the critic – who, remember, never accepted the hypothesis – refuses to go quietly. Maybe, despite the fact that everyone else thinks the scientist has answered his criticism, he keeps making the same point over and over again, in the published scientific literature, then on his blog, and eventually on, say, Fox News. Suppose that the hypothesis has some policy implications that certain powerful lobbyists and politicians don’t like. They promote the critic’s criticisms, despite the fact that every other scientist in the field accepts the hypothesis. Let’s call this the celebrity skeptic scenario.
At this point, we should worry that things could have gone wrong. I don’t want to assume that they have actually gone wrong.[2] Instead, I want to ask: how could we tell whether they’ve gone wrong?
In part II, my argument that motivated reasoning can be good reasoning relied on this claim: “If motivated reasoning leads us to recognize when our best science is ambiguous and uncertain, and we respond to this ambiguity and uncertainty properly, then our reasoning can be good.” This suggests two ways in which motivated reasoning can be bad reasoning:
- The science is no longer either ambiguous or uncertain.
- The science is still ambiguous or uncertain, but we’re not responding to it properly.
How would these play out in the celebrity skeptic scenario? First, the scientist might respond to the critic by eliminating all of the ambiguities and providing overwhelming evidence. She might show that, whatever set of choices we make, we get to the same conclusion; and that there is no reason whatsoever to think that the hypothesis is wrong.
In this case, it seems that the critic is rejecting the hypothesis only because of his values; or that his values are simply overriding good evidence and reasoning, which clearly and unambiguously are pointing towards the hypothesis. To put it another way, the two places where Sarewitz and Douglas find room for values – ambiguity and uncertainty – have been filled in. So the critic’s motivated reasoning is, in this case, bad reasoning.
I recognize that this scenario is a possibility. But I think eliminating all ambiguity and uncertainty will be practically impossible in many real-world cases. Consider what it would take to completely eliminate the ambiguities and uncertainties about climate change that we saw in part II.
Second, suppose that the scientist hasn’t eliminated all the ambiguity and uncertainty. But she fails to acknowledge this, and acts as though the science were unambiguous and certain. She could even claim that the critic is rejecting the hypothesis only because of his values. Or, in the other extreme, the critic exaggerates the ambiguity and uncertainty, and even claims that the scientist has accepted the hypothesis only because of her values. Indeed, perhaps both things happen, and each side accuses the other of thoroughgoing irrationality.
In these cases, the science is ambiguous and uncertain, but the participants in the debate are handling it badly. They are misrepresenting the extent of the ambiguity and uncertainty, and using this to mischaracterize their opponent in an effort to discredit him or her.
I think that this kind of scenario is all too common in our society. Our scientific findings are frequently ambiguous and uncertain, and we fail to handle this properly. Instead, we wield the unattainable ideal of unambiguity and certainty as a rhetorical weapon (recall the discussion of “settled science” in part II).
What would it look like if we handled ambiguity and uncertainty properly? Suppose the scientist – and the other members of the near-consensus – respond to the critic by saying something like this:
> Yes, there are remaining uncertainties and ambiguities. We believe that these uncertainties and ambiguities have been sufficiently addressed, though we recognize that the critic doesn’t agree. We recognize that values and emotions play an important role in this disagreement.
>
> More research could further address these issues. However, we do not think that any amount of research could address them completely. Our understanding of complex systems like these can never be totally unambiguous and totally certain.
>
> In any case, if our hypothesis is correct, then we must take action now to prevent serious problems. We hope that policymakers can find policies that are acceptable to both believers and skeptics – that prevent the problems without creating other ones. But this is not something science can do.
There are still ambiguities and uncertainties. But the scientists acknowledge those ambiguities and uncertainties – they don’t try to pretend that they don’t exist – and they acknowledge that motivated reasoning is playing a role on both sides of the disagreement. This validates the critic’s criticisms, without completely conceding them.
Furthermore – and this part is even more important – the scientists recognize that disagreements about values are best resolved by a political process, not a scientific one. Science – at least ambiguous and uncertain science, which is nearly all science – can’t compel us to adopt a certain policy on the basis of pure logic and evidence. Indeed, wielding logic and evidence as a political weapon will probably just make things worse. Handling ambiguity and uncertainty well – making disagreement productive – requires recognizing where science leaves off and politics must take over.
[2] Specifically, I don’t want to conclude that things have gone wrong just because there’s a near-consensus and a single holdout. There have been plenty of episodes in the history of science in which the near-consensus position turned out to be wrong, and the critics (who turned out to be right) were ignored because they weren’t men, weren’t white, didn’t have positions at prestigious universities, didn’t have the support of wealthy and powerful people, and so on.