Creating explanatory or theoretical models of complicated phenomena is one of the chief intellectual activities of academics in nearly every field. As we do this, it is salutary to remember that, as powerful and helpful as our models can be, they can also bewitch us. Rather than providing a lens that helps us see the phenomena we study more clearly, they can impose a kind of selective vision on us: one that shackles us to our grounding assumptions, forces interpretation in their terms, and blinds us to important bits of information that lie outside their boundaries.
Sometimes this can be funny. For example, I recall a bit of apocrypha about a philosopher who, upon first encountering black swans, rather than admit them as proof that the conclusions of inductive arguments are underdetermined by their premises, insisted instead that those black feathery things serenely gliding around on the water out there couldn’t possibly be swans at all.
Physicists are susceptible to this sort of thing too, and they recognize it in this old and much beloved self-effacing joke. It is funny, but I can’t help thinking that lurking somewhere in there is a new fallacy patiently awaiting discovery by some intrepid researcher in argumentation theory. Certainly, being in the grip of a model is a common enough cause of poor argumentation to warrant designation as a fallacy of some kind. I’m willing to start the process if you are. Post a short description of your candidate for the new fallacy here in the comments section. Best entry wins…er…let’s say eternal glory. 🙂
I’ve always heard a slightly different version of that joke that ends “Assume a perfectly spherical chicken.”
Don’t we already have a name for this phenomenon? Alfred North Whitehead’s Fallacy of Misplaced Concreteness, which usually goes by the name of reification:
http://en.wikipedia.org/wiki/Reification_%28fallacy%29
Interesting. I’d heard of reification/hypostatization before, but never applied to the case of conflating a model with reality. I grant that in one sense this seems alright, since both involve a sort of word/world confusion. Still, I can’t help but think that the confusion involved might be different enough to merit separate treatment.
In reification/hypostatization (at least as I’ve come to understand it), the error seems to be one of taking what should be interpreted figuratively in a literal way. By contrast, the spherical-cow/modeling case doesn’t seem to be so much a case of reading figurative language literally as it is a case of slipping into a subtle overestimation of the explanatory power or scope of the model. It really reminds me more of a modal fallacy (confusing possibility with actuality, etc.).
Still thinking it through, though. 😉
Thanks for the reply!