Here is an anecdote. A few years ago, Justin Sytsma and I published an experimental-philosophy paper ("Two Conceptions of Subjective Experience") in Philosophical Studies arguing that the lay concept of subjective experience does not correspond to the concept of P-consciousness. Brian Talbot wrote an incisive criticism of that paper, arguing among other things that our vignettes had simply elicited "System 1" (roughly, fast, non-reflective) intuitions that did not reflect people's genuine concept of consciousness. Sytsma and I decided to test Talbot's empirical conjecture by showing that our results did not change when care was taken to elicit slow, reflective ("System 2") intuitions (here). One of our manipulations was inspired by a then much-discussed paper by Alter and colleagues ("Overcoming intuition: Metacognitive difficulty activates analytic reasoning"), which argued that participants reasoned more carefully and reflectively when presented with texts that were difficult to read (e.g., printed in a hard-to-read font). We found that a similar manipulation did not affect participants' judgments about consciousness - as we had predicted, against Talbot's conjecture - and we included this empirical result in our response. The twist is that we now know the manipulation reported in Alter and colleagues' paper was a false positive! One of our studies was thus premised on an illusory empirical finding. (Fortunately, this was only one of our results.)
This anecdote illustrates one of the perils of empirical philosophy (serious moral psychology, philosophy of cognitive science or neuroscience, philosophy of biology, naturalistic philosophy of mind, experimental philosophy, etc.): The findings empirical philosophers rely on can turn out to be false positives, can be open to many different reinterpretations, can result from confounds, and so on.
Here are a few further examples from the recent philosophical literature. The literature on aliefs (including Tamar Gendler's papers) often discusses Bargh's classic priming experiments, particularly his famous 1996 article ("Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action"). In this paper, Bargh and colleagues report that participants primed with words associated with being old walked more slowly to an elevator than control participants. Like many other findings in the priming literature, several of the studies in this paper have failed to replicate.
The literature on the cognitive penetration of perceptual experience has cited many recent studies in social psychology, including the famous backpack experiment: People are said to estimate a hill as having a steeper slant when they wear a heavy backpack. Many of these results turn out to reflect straightforward experimenter demand effects (see "Cognitive penetration: A no-progress report" in the just-published The Cognitive Penetrability of Experience, as well as Firestone and Scholl's forthcoming paper).
There has been a lot of discussion of implicit biases - whether they are mental states or traits, whether they are propositional or associative, whether we can be responsible for having them or for the actions they influence, etc. - and for a long time implicit biases have been described, more or less explicitly, as having a large influence on behavior. We now have good reason to believe that their influence is very small and that implicit bias measures predict behavior very poorly (see this paper).
These and other examples raise the following question: How can philosophers minimize the risk of error when they rely on the scientific literature? John Doris and I have recently written a paper on this issue, and I'll discuss it in my next post.