Alternatives to Antagonism: Ambiguity and Uncertainty

By Dan Hicks

This is part II of a three-part series, which is being posted simultaneously on Je Fais, Donc Je Suis, my personal blog, and on the Rotman Institute Blog.

In part I of this series, I discussed the work of social psychologist Dan Kahan on motivated reasoning. As he defines it, motivated reasoning is “the unconscious tendency of individuals to process information in a manner that suits some end or goal extrinsic to the formation of accurate beliefs.” According to what I called the antagonistic picture, motivated reasoning is bad reasoning; it leads us to have false or unjustified beliefs. And Kahan’s work shows that motivated reasoning is pervasive; specifically, I discussed findings that high science literacy and numeracy seem to exacerbate, not remove, motivated reasoning.

Altogether, this leads us to a gloomy conclusion. But, in this post, I’ll argue that things aren’t necessarily so gloomy. Specifically, I’ll argue that motivated reasoning isn’t necessarily bad reasoning. I’ll do this by first thinking a bit more about why we expected high science literacy and numeracy to lead to agreement, then introducing two models of motivated reasoning, one from STS scholar Daniel Sarewitz and one from philosopher of science Heather Douglas.[1]

In part I, we saw that science literacy and numeracy seem to increase disagreement, at least about climate change. This was exactly the opposite of what we had predicted, namely, that science literacy and numeracy would decrease disagreement, and it led to our gloomy conclusion that we are doomed to bad, motivated reasoning. But why did we expect science literacy and numeracy to have this effect? In other words, why did we expect highly science literate and numerate people to agree on what the evidence says about climate change?

Part of the answer, I think, is that we assumed that the reasoning involved (going from some evidence to accepting or rejecting a hypothesis) is unambiguous and certain. In other words, given the available evidence, it is clear whether the hypothesis should be accepted or rejected; and there is no reason to think that we could be wrong to accept or reject the hypothesis.

If the reasoning involved in, say, assessing the risks of climate change really is unambiguous and beyond reasonable doubt, then we would expect good reasoners to agree. But if one or the other of these assumptions is false, then the door is open for good reasoners to disagree.

Sarewitz and Douglas, respectively, start their analyses by rejecting these assumptions. Sarewitz points out that scientific evidence is often quite ambiguous, and Douglas starts by recognizing that inductive inferences can never be certain. In different ways, each goes on to argue that values have a role to play in recognizing these ambiguities and uncertainties.

But that means that motivated reasoning can be good reasoning. If motivated reasoning leads us to recognize when our best science is ambiguous and uncertain, and we respond to this ambiguity and uncertainty properly, then our reasoning can be good. Indeed, in this kind of case, if non-motivated reasoning would have led us to assume (incorrectly) that our findings are unambiguous and certain, then motivated reasoning would be better than non-motivated reasoning. (We’ll take a closer look at this possibility in part III.)

Let’s turn now to Sarewitz and Douglas for a little more detail. I’m going to stick with the example of climate change to illustrate things.

The computer simulations we use to study the global climate are enormously complex; arguably, some of them are among the most complex things that human beings have ever created. But even these extraordinarily complex systems involve significant simplifications and approximations in the ways they represent the global climate. Choices have to be made about which parts of the system will be modeled in which ways, and which parts will be left out entirely. When we move from modeling the climate itself to modeling the social and economic effects of climate change, the choices ramify.

Consequently, Sarewitz argues,

nature itself — the reality out there — is sufficiently rich and complex to support a science enterprise of enormous methodological, disciplinary, and institutional diversity. I will argue that science, in doing its job well, presents this richness, through a proliferation of facts assembled via a variety of disciplinary lenses, in ways that can legitimately support, and are causally indistinguishable from, a range of competing, value-based political positions.

In other words, choices are unavoidable; “when cause-and-effect relations are not simple or well-established, all uses of facts are selective.” Then, once we see where a certain set of choices is taking us, we seem to be free to endorse those choices — if they agree with our values — or call them into question — if they don’t.[2] Specifically, once we see the implications of the choices made by climate scientists, liberals are free to endorse those choices and conservatives are free to call them into question.

This doesn’t mean that all sets of choices are equally good. Rather, Sarewitz’ starting point is that no one set of choices is unambiguously the best. At this point, motivated reasoning can lead us to go with one set rather than another, without our reasoning being flawed in any way whatsoever. Indeed, motivated reasoning can help us recognize that someone else’s findings depend on choices that they have made unconsciously.

Douglas’ model is built on the idea of inductive risk. When we accept or reject a general hypothesis or a prediction about the future based on limited evidence, there’s always a possibility that we’ve gotten things wrong — that our sample wasn’t representative of the whole population, that some unanticipated factor changed the way things turned out. Douglas points out that getting things wrong in this way can have negative downstream consequences. For example, if we accept the hypothesis that climate change will cause massive population displacements (due to sea level rise and desertification), make serious economic sacrifices to try to forestall these displacements, and then it turns out that the hypothesis was wrong, then our serious economic sacrifices were unnecessary. Similarly, if we reject this hypothesis, do nothing to forestall the displacements, and it turns out that we’re wrong, then we’ll have massive population displacements on our hands.

The values that we attach to the downstream consequences of a hypothesis can and should play a role in determining how much evidence we need to accept or reject the hypothesis. If the consequences of incorrectly accepting the hypothesis are relatively minor, then we should be satisfied with relatively little and weak evidence. But if the consequences are relatively major, then we should demand much more and more stringent evidence.
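
Douglas’ point can be given a simple decision-theoretic gloss. The sketch below is my own illustration, not Douglas’ formalism; the function names, the expected-cost rule, and all the numbers are assumptions made for the example. It shows how the probability a hypothesis must reach, given the evidence, before acceptance becomes the reasonable option is fixed entirely by the costs we attach to the two kinds of error.

```python
# A minimal sketch of inductive risk in decision-theoretic terms.
# Everything here (names, the expected-cost rule) is illustrative,
# not Douglas' own formalism.

def acceptance_threshold(cost_false_accept: float,
                         cost_false_reject: float) -> float:
    """Probability the hypothesis must reach before accepting it
    has lower expected cost than rejecting it.

    Accepting a false hypothesis costs cost_false_accept; rejecting
    a true one costs cost_false_reject. The expected costs are
    (1 - p) * cost_false_accept for accepting and
    p * cost_false_reject for rejecting, so accepting wins exactly
    when p exceeds the ratio below.
    """
    return cost_false_accept / (cost_false_accept + cost_false_reject)


def decide(p_h_given_e: float,
           cost_false_accept: float,
           cost_false_reject: float) -> str:
    """Accept or reject by comparing the evidential probability to
    the value-determined threshold."""
    threshold = acceptance_threshold(cost_false_accept, cost_false_reject)
    return "accept" if p_h_given_e > threshold else "reject"
```

On this toy rule, the evidence determines whether the bar is cleared, but the values alone set where the bar sits.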

Because of this, when everyone can agree on the values of the various consequences, we can expect agreement on how much evidence is required to accept or reject the hypothesis, and so we can expect everyone to act the same way (that is, everyone accepts it or everyone rejects it). On the other hand, when people don’t agree on the values at stake, we expect disagreement about whether we have enough evidence.

This could help explain why climate change is politically polarized. Liberals generally think the economic consequences of doing something about climate change will be minor, while the social and ecological consequences of not doing something will be major. Conservatives (at least pro-capitalist conservatives) generally think exactly the opposite: the economic consequences are major and the social and ecological consequences are minor. So liberals are satisfied with the available evidence concerning climate change and conservatives want more and better evidence.
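
To make this concrete, here is the sketch from above run with two hypothetical cost profiles; the numbers are invented purely for illustration. Both profiles read the evidence in exactly the same way, yet they reach opposite verdicts.

```python
# Hypothetical numbers only. Suppose both sides read the evidence the
# same way: P(H | E) = 0.9, where H is the displacement hypothesis.
p = 0.9

# A "liberal" profile: wrongly rejecting H (unchecked climate change)
# is priced ten times worse than wrongly accepting it (needless
# economic sacrifice). Threshold: 1 / 11, about 0.09.
print(decide(p, cost_false_accept=1.0, cost_false_reject=10.0))  # accept

# A "conservative" profile: the prices are reversed.
# Threshold: 10 / 11, about 0.91.
print(decide(p, cost_false_accept=10.0, cost_false_reject=1.0))  # reject
```

Neither verdict involves a computational error; the two sides simply demand different amounts of evidence because they price the possible mistakes differently.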

In this explanation of the controversy, both sides are using motivated reasoning. Indeed, on Douglas’ model, motivated reasoning is absolutely necessary. Without motivated reasoning – without taking into account the significance of the consequences – we have no way to make a non-arbitrary decision about whether we have enough evidence to accept the hypothesis. Good reasoning requires emotions and values.[3]

If this explanation is right, then the controversy over whether “the science is settled” (about climate change) is disingenuous, in two ways. First, we can never be certain about climate change, and in this sense the science can never be “settled.” It’s disingenuous for conservatives to demand this, and likewise disingenuous for liberals to claim that it has been achieved. The controversy is really over whether the evidence is sufficient to accept the key claims about climate change (humans are responsible, it will have specific bad consequences, and so on). But even this is disingenuous, because liberals and conservatives are working with different standards of sufficient evidence. Due to motivated reasoning, the evidence can be both sufficient for liberals and at the same time insufficient for conservatives.

So motivated reasoning is not necessarily bad reasoning. Because of ambiguity and uncertainty, emotions and values have a role to play in our reasoning. But does this mean that “reasoning” degenerates into an anything-goes free-for-all? No. That will be the topic of part III.


[1] To be precise, Douglas and Sarewitz write more about “value-freedom” and “objectivity” than “motivated reasoning.” But they’re closely connected. In my research, I define value-freedom as the normative ideal or principle that ethical and political values should not play a role in accepting (or rejecting) a claim. Value-freedom is one way (but only one way) of understanding objectivity. Motivated reasoning — and cultural cognition specifically — often seems to violate value-freedom. Now, Douglas and Sarewitz are both arguing that non-value-free science can still be good science. If they’re right, then in the same way motivated reasoning can still be good reasoning.
[2] “We are free to do X” is shorthand here for something like “good reasoning does not require that we do not-X.”
[3] We might worry that, say, conservatives are putting too much weight on the economic consequences, and not enough on the social and ecological consequences. That is, we might worry that conservatives are working with wrong or unreasonable values. I think a weakness of both of these models is that they treat values as exogenous – values just sort of come in from beyond the scope of rational debate and disagreement.