
Less Wrong is a blog associated with Oxford University’s Future of Humanity Institute, a research group devoted largely to the long-term consequences of emerging technologies, artificial intelligence in particular.  While many posts center on those issues, the contributors there frequently take up ideas about rationality and reasoning.  Hardcore Bayesianism rules the roost, and there is an instinctive impulse toward formalism that is perhaps not as widely shared among likely readers of RAIL.  That said, they sometimes hit on ideas and ways of seeing things that are fascinating and useful to consider.

One of those ideas is that of a “semantic stopsign”, whose mark is a “failure to consider the obvious next question.”  As the examples make clear, the upshot is a tendency to over-rely on a particular answer to tough questions, treating it as a kind of conversational deus ex machina.  If, for instance, I am willing to question the ability of any institution to solve social problems but seem mysteriously unable to apply the same scrutiny to “god” or “liberal democracy” or “the free market,” then those things are, for me, semantic stopsigns.  When a chain of discursive reasoning brings me to my stopsign, I simply stop asking critical questions, automatically satisfied that nothing further need be said.

Semantic stopsigns seem to me a familiar phenomenon, but one I’ve not often seen discussed or labeled with that sort of precision before.  One wonders what a list of common semantic stopsigns would look like and, more importantly, what argumentative strategies one might use to circumvent them.
