Cognitive dissonance is one of the best-established notions in psychology. Simply put (perhaps too simply), the idea is that people will go to almost any length to hold onto a cherished belief, no matter how strong the evidence against it and no matter how irrational the attempt to do so may seem (or actually be). In a recent post on his blog, Ben Goldacre discusses an article in the Journal of Applied Social Psychology that focuses on this effect in cases where subjects dismiss well-founded scientific data that contradicts their beliefs.
While reading this discussion I kept returning in memory to a session on deep disagreement that I attended at ISSA a couple of weeks ago. Two of the papers presented focused extensively on strategies for resolving deep disagreements. David Zarefsky presented a battery of strategies, none of which, interestingly, involved a direct attack on the belief(s) at the heart of the disagreement. Manfred Kraus’s proposal was that deep disagreement be dealt with by “anti-logical” reasoning after the fashion of the Sophist Protagoras. I’m no expert on the Sophists, but as I understood the paper, Kraus seemed to be suggesting that in anti-logical reasoning it’s not so much the partisans of the contradictory views who work out their disagreement as the audience to the dispute, who act in the role of judge.
I’m maybe less optimistic about Kraus’s strategy than Zarefsky’s. For one thing, I think that deep disagreement is “contagious” in a way that can compromise the audience’s ability to play the role of judge. Studies in political science on the phenomenon of group polarization (for example, this important and highly argumentation-relevant paper by Cass Sunstein) have shown that polarization does tend to spread in groups and compromise decision-making abilities. If one takes this research as one’s lead, then it seems likely that all an anti-logical approach would do is drive audience members in the direction of their pre-deliberation tendencies (and let’s be honest, they will have such tendencies; the audience isn’t made up of individuals who are doxastic tabulae rasae, after all, but of standard-issue human beings, “pre-packaged” with their own biases, prejudices, allegiances, beliefs, and leanings). It’s easy to say that “it’s all just doxa” and that the disagreement is resolved when one or the other side wins the audience’s judgment, but this doesn’t seem to me to be very helpful.
Even if it is all doxa (and I’m not at all sure that it is, but that’s a different discussion), what Kraus needs is for the audience to be, in some sense, cognitively or affectively better off than the speakers. He needs them either to be enlightened by the debate, and thus in a better position to decide the question than the speakers, or to be affectively neutral with respect to the contrary or contradictory positions on offer. But if Sunstein (and the growing literature on cognitive biases) is right, then audiences aren’t going to be better off than the speakers in either of these ways. Affectively, many are likely to be as committed to one or the other point of view as the speakers themselves, or to have pre-existing doxastic inclinations that a speaker will be able to tap into to secure their adherence. Cognitively, the audience is subject to all the same limitations as the speakers, including the potential to have their judgments in the matter compromised by factors like limited information and cognitive biases. (I often think of the availability bias in particular as a likely factor in many cases of deep disagreement.) The result, I think, would be that the disagreement would only deepen and spread as audience members are themselves moved to more extreme views by witnessing the debate.
Zarefsky’s proposal seems better to me precisely because it doesn’t, in the main, focus on the beliefs at issue. His strategies largely work around the beliefs by, for example, shifting the frame of the discussion, or targeting the ethos of a speaker involved in the disagreement. I think this proposal is better largely because it tacitly acknowledges the reality of cognitive dissonance. For example, suppose you really believe in the Loch Ness monster and I really do not. The research on cognitive dissonance suggests that there’s very little I can do in terms of the presentation of arguments and evidence to move you off of your view. Unless you’re that perfectly reasonable Rawlsian or Habermasian agent I always hear so much about, it’s very likely that no matter what I do you’ll cling doggedly to your belief in Nessie, supporting that view by finding and magnifying even the most microscopic infelicities in my arguments that Nessie is a fiction. This is what experimental subjects nearly always do in cases of cognitive dissonance. This is why going around the doxastic impasse, as Zarefsky’s proposed strategies do, seems to me to have more promise. I may not move you off of your belief in Nessie with direct arguments that she doesn’t exist, but if I can show that one of your sources of evidence is tainted in a way you recognize as disqualifying it, or if I can pull off Zarefsky’s “inter-frame borrowing” suggestion and shift the discussion from individual reports of monsters in lakes to a more abstract discussion about the biological and ecological conditions necessary for sustaining a breeding population of large aquatic animals, say, then it seems like I might have a shot. It may not be much of one, but it’s better than nothing. It certainly seems to have more potential than appealing to the audience, which more than likely will gravitate toward the “sexier” hypothesis (in this case, the pro-lake-monster one).
In the main, I have to say that I’m happy to see so much focus on deep disagreement in the argumentation community. To my mind it’s one of the most important and urgent problems in the field, and one with deep and interesting interconnections with many other disciplines. I hope we move further in the direction of exploring those connections as work on the problem continues.