Archive for the ‘Rationality’ Category

"Homme assis" by Roger-Noël-François de La Fresnaye, c.1914

In a recent blog post provocatively titled “Kurt Vonnegut turns Cinderella into an Equation” Robert Krulwich (co-host of the excellent WNYC series Radiolab) uses a wonderful pair of cartoons to suggest that if humans are creatures who thrive on pattern, then scientists and mathematicians are compulsive pattern finders, “pattern addicts” as it were. Logicians and students of argument, I think, fairly belong in this category as well. Some of us talk about logical form and explain it in terms of complicated relationships between abstract symbols and letters. Or we classify arguments by scheme and develop equally schematic lists of questions with which to test their merits. The dialectically inclined among us give us patterns of argumentation between two or more arguers. We create argument diagrams, relevance cubes, maps of controversies, and many more things besides. We’re pattern people. There’s no doubt about it.

Interestingly, Krulwich closes his post by suggesting that artists and storytellers may be even more pattern-aware than scientists and mathematicians (and perhaps logicians and argumentation scholars too?). As exhibit the first he offers this short (and altogether too good not to reproduce) video of the legendary Kurt Vonnegut:

Let us begin with the obvious: we don’t need Vonnegut to tell us that stories have patterns too (though of course his way of telling us is very entertaining and we’re very lucky to have it). Clearly they’re there. The deeper issue has to do with the nature and significance of such patterns. How do we interpret them? How do we reason about them? How do we reason with them?


Read Full Post »

Scientific American: Winning Argument: As a ‘New’ Critique of Reason, Argumentative Theory Is Trite but Useful.

In recent posts here on RAIL I’ve been upfront about my tendency to like Mercier and Sperber’s work. Critical discussion of it, however, is still valuable, and this short article in Scientific American by John Horgan is an accessible, if somewhat ambivalent, gesture in that direction.

Read Full Post »

Apollo and the Muses by Hans Holbein the Younger, 1533

The world of those who study argument and who study reason and rationality is abuzz with talk of the provocative research of Hugo Mercier and Dan Sperber. Anyone who was at last week’s OSSA conference heard their names in practically every other conversation or presentation. For my own part I’m not sure quite what to make of their work. On the one hand it’s exciting to see argument and reason brought together in empirical research, and I’m well on record as being very friendly to the notion that argument has a very deeply rooted functionality for human beings at both the collective and individual levels. On the other hand, I’m not sure that there aren’t grave problems lurking within. For one, Mercier and Sperber seem at times to work from the assumption that ‘argument’ means ‘deductive argument’, and if so, I’m not at all sure that this assumption is wise. The body of work on analogy alone would give me pause regarding the prospects of such a view, to say nothing of the work of the informal logic movement over the last 30 years. There are other things that trouble me, but as I’m still doing research in this general area I’ll try to save myself what might turn out to be a super-sized helping of crow and leave the reader to their own devices where Messrs. Mercier and Sperber are concerned.

At any rate there’s no denying its relevance to the world of argumentation theory. In that vein, this video interview with Hugo Mercier is one that I expect will be of interest to many. The interview is located at the web journal* Edge, itself worth a look for those with an interest in interdisciplinary intellectual discourse.

*(All apologies to those of you who thought that by ‘Edge’ I was referring to an Irish fellow–though I confess I probably would have watched that interview with interest too.)

Read Full Post »

Cognitive dissonance is one of the best established notions in psychology. Simply put (perhaps too simply), the idea is that people in general will go to almost any length to hold onto a cherished belief, no matter how strong the evidence against it is, and no matter how irrational the attempt to do so may seem (or actually be). In a recent post on his blog, Ben Goldacre discusses an article in the Journal of Applied Social Psychology that focuses on this effect in cases where subjects dismiss well-founded scientific data that contradicts their beliefs.

While reading this discussion I kept returning in memory to a session I attended at ISSA a couple of weeks ago on deep disagreement. Two of the papers presented focused extensively on strategies for resolving deep disagreements. David Zarefsky presented a battery of strategies, none of which, interestingly, involved a direct attack on the belief(s) at the heart of the disagreement. Manfred Kraus proposed that deep disagreement be dealt with by “anti-logical” reasoning after the fashion of the Sophist Protagoras. I’m no expert on the Sophists, but as I understood the paper, Kraus seemed to be suggesting that in anti-logical reasoning it’s not so much the partisans of the contradictory views who work out their disagreement as it is the audience to the dispute, who act in the role of judge.


Read Full Post »

Upon opening my e-mail this morning I found a forward of this article from the New York Times on the popular fact-checking website snopes.com. I found the article interesting for more than a few reasons.

What has always fascinated me about Snopes is how it evolved organically online out of a felt need for objectivity. From the beginning the web has been a fertile breeding ground for rumors, urban legends and half-truths, and people (who I think are more sophisticated than we often believe) know this. They are well aware of the multiple, conflicting biases that color the information they find online. They know that these biases can lead to slanting and distortion, and to some degree they expect it. For those who are not simply looking for confirmation of their own viewpoints, this is a problem. Simply knowing that bias abounds on the web, however, is not a sufficient defense. People with this kind of interest don’t want just any story, they want the story. They want to know what really happened. The multiple, conflicting accounts available online don’t tell them that. The result is that people who want to use the web for information gathering purposes have to have some way of sifting the facts out of the voluminous chaff of rumor, exaggeration, and partisan cheerleading in which they lie hidden.

Enter Snopes, which as the article explains, evolved into its role as a “fact-checking” site.  (It did not start out that way.)  Nevertheless, it is now regarded by many as an authority on which stories are and are not credible on the web.

To my mind two things stand out from the article. The first is this quote:

For the Mikkelsons, the site affirms what cultural critics have bemoaned for years: the rejection of nuance and facts that run contrary to one’s point of view.

“Especially in politics, most everything has infinite shades of gray to it, but people just want things to be true or false,” Mr. Mikkelson said. “In the larger sense, it’s people wanting confirmation of their world view.”


Read Full Post »

A while ago I posted a short entry here entitled Nice Argument. I’ll Believe You When You Have a Story. That post linked to a discussion of the endowment effect on Dan Ariely’s behavioral economics blog. In that post I wondered whether something like the endowment effect (the increased sense of value that comes from personal association, in this case association through a personal narrative) might not do some explanatory work in argumentation theory, perhaps in terms of explaining why people hold and argue for the positions that they do, or why people can be resistant to changing their minds even when presented with evidence that should change them.

Here now is another entry along those lines, this time by the redoubtable popularizer of all things brain science, Jonah Lehrer. In a recent entry on his blog, Lehrer goes so far as to say that in order to be effective, argumentation–especially moral argumentation–ought to be aimed at exciting the emotional systems in the brain; argumentation that appeals to rational considerations simply won’t get the job done when it comes to morality. Let’s see now: if he’s right, then moral argument is effective when it appeals to our sentiments, but idle when it appeals to reason. Seems like I’ve heard that one somewhere before…I wonder if Lehrer can do a Scottish accent.

What is interesting here for argumentation theorists in these developments coming out of the social and now the hard sciences is (1) that emotions apparently play a much larger role in reasoning, and by extension in effective argumentation, than has traditionally been thought, and (2) that, arguments or not, narratives have what increasingly looks like a proven power to convince that in some cases can exceed that of rational appeals. (Of course, to some in rhetoric that won’t seem like news; however, considering that this observation is coming from the hard sciences, I’d wager that even the toughest rhetorician may find something to smile about here.) Though obviously related, these two points each have a significance of their own. The first is in some ways a vindication of the more nuanced view taken by most argumentation theorists of what were traditionally seen as the “emotion-based” fallacies (e.g. the ad misericordiam). The second certainly seems like wind in the sails of those who favor the notion that narratives can be arguments.

Read Full Post »

Apparently the gang over at Less Wrong think so, and they’ve got a paper that backs them up.  From the blog:

Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others’ arguments.

Interesting stuff, especially given that by ‘argument’ here Mercier and Sperber, the paper’s authors, mean the attempt to persuade, not the attempt to rationally convince. In a nutshell, their contention is that we reason better when we are trying to persuade others to adopt our point of view; conversely, when we aim at the truth alone, we do worse at being reasonable. Hmmm. 🙂

Read Full Post »

Less Wrong is a blog sponsored by Oxford University’s Future of Humanity Institute, a research group devoted mostly to issues in AI development and the enhancement of human intelligence. While many posts center on those issues, the folks over there frequently take up ideas about rationality and reasoning. Essentially, hardcore Bayesianism rules the roost, and there is an instinctive impulse towards formalism that is perhaps not as widely shared among likely readers of RAIL. That said, at times they hit on ideas and ways of seeing things that are fascinating and useful to consider.

One of those ideas is that of a “semantic stopsign”, the mark of which is “failure to consider the obvious next question.” As the examples make clear, the upshot of this is someone’s tendency to over-rely on a particular answer to tough questions, to rely on it as something like a conversational deus ex machina. If, for instance, I am willing to question the ability of any institution to solve social problems but seem mysteriously unable to apply the same scrutiny to “god” or “liberal democracy” or “the free market”, then those things are, for me, semantic stopsigns. When a chain of discursive reasoning brings me to my stopsign, I simply stop asking critical questions, automatically satisfied that nothing further need be said.

Semantic stopsigns seem to me to be a familiar phenomenon, but one I’ve not seen discussed very much or labeled with that sort of precision before.  One wonders what a list of common semantic stopsigns would look like, and more importantly, what argumentative strategies one might use to circumvent them.

Read Full Post »

I found this interesting post on the twelve virtues of rationality on the blog of artificial intelligence researcher Eliezer Yudkowsky. The fifth virtue, you’ll be happy to know, is argument. 🙂

Read Full Post »

Thinking about the last post got me wondering whether anyone besides me regularly covers forms of irrationality studied in the social sciences in their Critical Thinking or Informal Logic classes. It seems to me important for students to know about things like the endowment effect, the bandwagon effect, confirmation bias, framing problems, and groupthink (among others). These and other irrational tendencies certainly present obstacles to critical thinking that (we hope) can be mitigated to at least some degree by the concepts and techniques we teach. And yet there’s not exactly a huge volume of literature bringing together critical thinking and the empirical study of phenomena like these.

What place, if any, does teaching about the empirical study of irrationality have in your overall pedagogy? Do you think it should have a place in the study of critical thinking, or should we be content to let the scientists work on it? Is it even reasonable to think that training in critical thinking can help prevent these kinds of irrationality? If you do include presentations on the forms of irrationality studied by psychology, economics, &c., how do you do it?

Read Full Post »

