
Archive for the ‘Connections’ Category

Many of us teach service courses called “Critical Thinking” in our colleges and universities.  Exactly what ‘critical thinking’ means, however, is and has been the source of much vexation.  Reading this blog post by neuroscience researcher and popularizer Jonah Lehrer put me in mind of a discussion I’ve sometimes heard bits and pieces of in this context: on whether and how critical thinking bears any relation to creative thinking.

Broadly speaking, I’d suppose that most people understand critical thinking as an analytical enterprise.  Whether one is extracting an argument for evaluation, analyzing a discussion according to pragma-dialectic rules, or critiquing a speech according to rhetorical canons of interpretation, the effort seems to be one in which the task is to “look underneath” the surface phenomenon of the linguistic artifact (the argument or dialogue as it is found “naturally”, in its own discursive “habitat”, say) to structural, prescriptive, and other such properties.  Creative thinking seems less to be about analyzing images or bits of text, and more about the realization of hitherto un-thought-of possibilities that arise from them, or perhaps about the ability to associate freely between different families of words and images.

It would be easy to pigeonhole critical thinking and creative thinking into wholly different mindsets by saying that critical thinking is about analysis and creative thinking is about expression, but I think this would be misleading.  Critical thinkers learn to prize clarity of expression and to be clear when the occasion requires it.  Creative thinkers also engage in analysis, for example, in the visual analysis of whether a composition or a choice of color is apt given what the artist is trying to express.  Despite the apparent differences, I’m inclined to think that creative and critical thinking aren’t wholly disparate.  Important to both, for example, is the ability to resist framing problems and other dynamics that artificially close off avenues of interpretation or understanding.  Both, I think, also require the development of character traits like intellectual independence. Certainly neither is possible without a good deal of open-mindedness. Freedom of thought and expression seems essential for developing both sets of skills too.

This is not to say that we can collapse the two.  I don’t think we can or should.  I do think, however, that it might be interesting from a pedagogical point of view to consider what critical thinking would look like if taught from a creative perspective, and vice versa.  What kind of classroom environment would best combine both?  What skills, ideally, would the student leave such a class with that he or she doesn’t leave a critical thinking class with now?

Though I am here thinking mostly of pedagogical concerns, I can’t help but wonder if thinking along these lines might not be helpful in sorting out the relationship between rhetoric and argumentation too.

Read Full Post »

Upon opening my e-mail this morning I found a forward of this article from the New York Times on the popular fact-checking website snopes.com. I found the article interesting for more than a few reasons.

What has always fascinated me about Snopes is how it evolved organically online out of a felt need for objectivity. From the beginning the web has been a fertile breeding ground for rumors, urban legends and half-truths, and people (who I think are more sophisticated than we often believe) know this.  They are well aware of the multiple, conflicting biases that color the information they find online.  They know that these biases can lead to slanting and distortion, and to some degree they expect it.  For those who are not simply looking for confirmation of their own viewpoints, this is a problem.  Simply knowing that bias abounds on the web, however, is not a sufficient defense.  People with this kind of interest don’t want just any story, they want the story.  They want to know what really happened.  The multiple, conflicting accounts available online don’t tell them that.  The result is that people who want to use the web for information gathering have to have some way of sifting the facts out of the voluminous chaff of rumor, exaggeration, and partisan cheerleading in which they lie hidden.

Enter Snopes, which as the article explains, evolved into its role as a “fact-checking” site.  (It did not start out that way.)  Nevertheless, it is now regarded by many as an authority on which stories are and are not credible on the web.

To my mind two things stand out from the article. The first is this quote:

For the Mikkelsons, the site affirms what cultural critics have bemoaned for years: the rejection of nuance and facts that run contrary to one’s point of view.

“Especially in politics, most everything has infinite shades of gray to it, but people just want things to be true or false,” Mr. Mikkelson said. “In the larger sense, it’s people wanting confirmation of their world view.”

(more…)

Read Full Post »

A while ago I posted a short entry here entitled Nice Argument. I’ll Believe You When You Have a Story.  That post linked to a post about the endowment effect on Dan Ariely’s behavioral economics blog.  In that post I wondered if something like the endowment effect (the increased perception of value an object gains from ownership or from association with a personal narrative) might not do some explanatory work in argumentation theory, perhaps in terms of explaining why people hold and argue for the positions that they do, or why people can be resistant to changing their minds even when presented with evidence that should do so, etc.

Here now is another entry along those lines, this time by the redoubtable popularizer of all things brain science, Jonah Lehrer.  In a recent entry on his blog Lehrer goes so far as to say that in order to be effective, argumentation (especially moral argumentation) ought to be aimed at exciting the emotional systems in the brain; argumentation that appeals to rational considerations simply won’t get the job done when it comes to morality.  Let’s see now: if he’s right, then moral argument is effective when it appeals to our sentiments, but is idle when it appeals to reason.  Seems like I’ve heard that one somewhere before…I wonder if Lehrer can do a Scottish accent.

What is interesting for argumentation theorists in these developments coming out of the social and now the hard sciences is twofold: (1) that emotions apparently play a much larger role in reasoning, and by extension in effective argumentation, than has traditionally been thought, and (2) that, arguments or not, narratives have what increasingly looks like a proven power to convince that in some cases can exceed rational appeals.  (Of course to some in rhetoric that won’t seem like news; however, considering that this observation is coming from the hard sciences, I’d wager that even the toughest rhetorician may find something to smile about there.) Though obviously related, these two points each have a significance of their own. The first point is in some ways a vindication of the more nuanced view taken by most argumentation theorists of what were traditionally seen as the “emotion-based” fallacies (e.g. ad misericordiam, etc.). The second point certainly seems like wind in the sails of those who favor the notion that narratives can be arguments.

Read Full Post »

Apparently the gang over at Less Wrong think so, and they’ve got a paper that backs them up.  From the blog:

Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others’ arguments.

Interesting stuff, especially given that by ‘argument’ here Mercier and Sperber, the paper’s authors, intend the attempt to persuade, not to rationally convince.  In a nutshell, their contention is that we reason better when we are trying to persuade others to adopt our point of view. Conversely, when we aim at the truth we do worse at being reasonable.  Hmmm.  🙂

Read Full Post »

Clicking here will take you to an interview with Frans van Eemeren, where he covers a number of topics regarding the applicability and usefulness of the pragma-dialectic method for folks working in the social sciences.  Though the blog is in French, the interview is in English.

Read Full Post »

Less Wrong is a blog sponsored by Oxford University’s Future of Humanity Institute: a research group devoted mostly to issues in AI development and the enhancement of human intelligence.  While many posts center on those issues, the folks over there frequently consider ideas about rationality and reasoning.  Essentially, hardcore Bayesianism rules the roost, and there seems to be an instinctive impulse towards formalism that is perhaps not as widely shared among likely readers of RAIL.  That said, at times they hit on ideas and ways of seeing things that are fascinating and useful to consider.

One of those ideas is that of a “semantic stopsign”, the mark of which is “failure to consider the obvious next question.” As the examples make clear, the upshot of this is someone’s tendency to over-rely on a particular answer to tough questions, to rely on it as something like a conversational deus ex machina.  If, for instance, I am willing to question the ability of any institution to solve social problems but seem mysteriously unable to apply the same scrutiny to “god” or “liberal democracy” or “the free market”, then those things are, for me, semantic stopsigns.  When a chain of discursive reasoning brings me to my stopsign I simply stop asking critical questions, automatically satisfied that nothing further need be said.

Semantic stopsigns seem to me to be a familiar phenomenon, but one I’ve not seen discussed very much or labeled with that sort of precision before.  One wonders what a list of common semantic stopsigns would look like, and more importantly, what argumentative strategies one might use to circumvent them.

Read Full Post »

Thinking about the last post got me wondering if anyone besides myself regularly covers forms of irrationality studied in the social sciences in their Critical Thinking or Informal Logic classes.  It seems to me to be important for students to know about things like the endowment effect, the bandwagon effect, confirmation bias, framing problems, and groupthink (among others).  These and similar irrational tendencies certainly present obstacles to critical thinking that (we hope) can be mitigated to at least some degree by the concepts and techniques we teach.  And yet there’s not exactly a huge volume of literature bringing together critical thinking and the empirical study of phenomena like these.

What place, if any, does teaching about the empirical study of irrationality have in your overall pedagogy? Do you think it should have a place in the study of critical thinking, or should we be content to let the scientists work on it? Is it even reasonable to think that training in critical thinking can help prevent these kinds of irrationality? If you do include presentations about the forms of irrationality studied by psychology, economics, &c., how do you do it?

Read Full Post »

I hadn’t heard of this before, but in a very interesting article on his blog Predictably Irrational (after the excellent book of the same name), behavioral economist and theorist of rationality Dan Ariely describes what he calls the endowment effect:

[T]he endowment effect [is] the theory that once we own something, its value increases in our eyes.  […]

But ownership isn’t the only way to endow an object or service with meaning. You can also create value by investing time and effort into something (hence why we cherish those scraggly scarves we knit ourselves) or by knowing that someone else has (gifts fall under this category).

And then there’s the power of stories: spend a fantastic weekend somewhere, and no matter what you bring back – whether it’s an upper-case souvenir or a shell off the beach – you’ll value it immensely, simply because of its associations.

I’ve got to think that this effect is something in which argumentation theorists and researchers should have an interest, as it seems to fit handily into accounts of all sorts of biases and blind spots that hobble the abilities of persons to think critically about their own positions or standpoints as well as those of others.  Of particular interest is that research like Ariely’s might help to explain why a conclusion often seems more compelling to many people when the speaker relates her particular path to arriving at it in the form of a narrative rather than by giving an argument for it.

You can read the full story, which includes an account of Ariely’s recent experiment on the endowment effect, here.

Read Full Post »
