The Pervasiveness of Motivated Reasoning

By Dan Hicks

This is part I of a three-part series. This series will be posted simultaneously on Je Fais, Donc Je Suis, my personal blog, as well as the Rotman Institute Blog.

Social and political values predict your views on climate change: if you’re an egalitarian-communitarian (think: liberal, on the political left), chances are you think humans are responsible for climate change; if you’re a hierarchical-individualist (think: conservative, on the political right), chances are you think climate change is a natural phenomenon, or isn’t happening at all.


Social psychologist Dan Kahan argues that this is due to motivated reasoning, “the unconscious tendency of individuals to process information in a manner that suits some end or goal extrinsic to the formation of accurate beliefs.” Specifically, in the case of climate change (though not in the case of vaccines or genetically modified foods), Kahan argues that cultural cognition is at work: you accept or reject the belief that humans are responsible for climate change because you identify yourself as a member of a group (“liberals,” “conservatives”) that is committed to accepting or rejecting this belief. In other words, you believe humans are responsible for climate change because you’re a liberal and liberals believe humans are responsible for climate change.

Values and good reasoning are often assumed to be antagonistic. There’s a metaphor that goes back to Plato: we’re in a chariot pulled by two horses, reason and emotion. Reason tries to pull us towards truth; emotion pulls us away from truth. If emotion isn’t restrained, it will ride roughshod over reason and truth. (That last bit muddles the metaphor, but you get the idea.) Kahan’s definition of motivated reasoning seems to suggest this antagonism. The end or goal is, as he puts it, “extrinsic to accurate belief”; it’s external to, irrelevant to, perhaps even opposed to the truth.

On this antagonistic picture, motivated reasoning seems to be bad reasoning. Consider two cases: motivated reasoning leads us to accept a false claim, or it leads us to accept a true claim. In the first case, things have clearly gone wrong: we believe something that’s false. In the second case, we’ve gotten to the right conclusion (we accept a true claim), but in the wrong way (following emotion and values rather than evidence and logic). In philosophy-ese, motivated reasoning seems to lead to beliefs that are false, unjustified, or both.

Working within this antagonistic picture, you might think that we can avoid motivated reasoning by improving science literacy – how much people know about science – and numeracy – “not just mathematical ability but also [the] disposition to engage quantitative information in a reflective and systematic way and use it to support valid inferences” (6). To go with Plato’s metaphor: by making the reason horse strong and powerful, we will move towards truth, whichever direction the emotion horse happens to want to go. We will tend to get justified true beliefs by overwhelming the influence of emotion or values.

Kahan’s work suggests that this isn’t the case. In one line of research, he divides people into two groups: high science literacy/numeracy [high SLN] and low science literacy/numeracy [low SLN]. The antagonistic picture suggests that (a) people in high SLN group will tend to agree with each other — they’re all being moved towards truth by a relatively strong reason horse — while (b) people in the low SLN group will tend to disagree with each other — they’re being moved in all different directions by a relatively strong emotion horse.

This prediction gets things exactly backwards: polarization increases with science comprehension. Consider this image, from Kahan’s blog:

[Figure: polarization increases with science comprehension]

On the left is the prediction: as SLN increases (as we move from left to right in the graph), the two groups converge. On the right is actual survey data: as SLN increases, egalitarian-communitarians (liberals) become more worried about climate change while hierarchical-individualists (conservatives) become less worried. The two groups move further apart, not together!

These results suggest that motivated reasoning is pervasive. High science literacy and numeracy don’t help; indeed, they just seem to make things worse. In terms of Plato’s metaphor, it seems that we don’t have two horses, reason and emotion. It’s more like reasoning is the horse pulling the chariot, but emotion is the charioteer, the one who ultimately decides which direction reason is going to go. Kahan puts it less metaphorically:

When the data, properly construed, supported an ideologically noncongenial result, highly numerate subjects latched onto the incorrect but ideologically satisfying heuristic alternative to the logical analysis required to solve the problem correctly.

So it seems that we’re doomed to bad reasoning. Motivated reasoning leads us to false or unjustified beliefs, and motivated reasoning is pervasive.

I don’t think this is necessarily the case. Specifically, I don’t think that motivated reasoning necessarily leads us to false or unjustified beliefs. Certainly it does sometimes, but not in all cases. In other words, the antagonistic picture is wrong. And that’s what I’m going to argue in part II of this post.