MOVING TO FRONT, ORIGINALLY POSTED APRIL 2--MORE COMMENTS WELCOME; SEE REPLY BY DORIS ET AL. IN COMMENTS
(There's been lots of chatter on social media about the recent attempt to salvage "implicit bias" by philosopher John Doris and psychologists Laura Niemi and Keith Payne in Scientific American. At the suggestion of Edouard Machery [Pitt], I invited philosopher of cognitive science Sean Hermanson (FIU) to comment on these issues. His essay follows and comments are open [they will be moderated for substance and relevance]--BL.)
Have you missed all the wrangling in philosophy journals and books over the (alleged) non-existence of implicit biases? In actuality, philosophers have mostly avoided scrutinizing findings about implicit biases and associated concepts, such as stereotype threat and micro-aggressions. Debates among philosophers--even lukewarm ones--about the existence of implicit bias are hard to find.
Psychology is, arguably, undergoing a replication crisis, although at least the problems are being discussed and reforms are underway. Meanwhile, philosophy's own crisis of replication continues in the form of a slow-motion crash powered by ideology and steered by confirmation bias. The crucial difference is that whereas in social psychology (and elsewhere) results often fail to replicate, philosophy's problem runs in reverse: too often philosophers recycle poorly informed opinions about the efficacy of implicit biases. This curious homogeneity is somewhat concealed under the seat cushions, which nonetheless often sport attractive designs referencing intricate mappings of behavior to responsibility, to propositional states, and to various other cognitive structures.
Now it is time to vigorously apply the brakes.
Or: not so, according to a recent article aiming to reassure audiences that, despite the speed bumps (and speed mounds and even speed mountains), implicit bias will overcome its obstacles. Having once enjoyed a starring role, the IAT, it turns out, doesn't produce stable results and fails to correlate with actual discriminatory behavior. Nevertheless, Payne et al. contend that implicit bias is real and important and that the IAT is useful. How can this be so?
Payne et al. argue that the IAT was never meant to be individually predictive. The analogy: although scoring high on conscientiousness won't tell you whether somebody's work right now is sloppy, knowing that a whole workforce scores high predicts that their work will tend to be done right; likewise, the IAT is a blunt instrument that reveals bias only at the level of groups, not individuals. However, since groups are composed of individuals, doesn't somebody's work (several somebodies', really) have to be biased towards conscientiousness (setting aside the weird view that conscientiousness in me somehow affects your work but not my own)? In other words, for many individuals we should be able to predict that their work will be conscientious rather than sloppy if conscientiousness exerts real causal effects on outcomes. Keep in mind that one's work "on a particular occasion" isn't at issue. It is only the overall pattern in an individual's actions that matters. Data points ought to exhibit group-level patterns whether they are drawn from the career of one conscientious individual or several. This is much the same as how a long-standing pattern of a particular employee abstaining from hand-washing raises the expectation that customers served by that person will develop food poisoning.
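To make the aggregation point concrete, here is a minimal simulation sketch in Python. Everything in it is a hypothetical illustration--the numbers, the sloppy() helper, the 0.9 "trait score"--none of it is drawn from Payne et al. or from any IAT dataset. The sketch simply shows that if a trait really does lower the per-occasion chance of sloppy work, then many occasions drawn from one high-scoring individual's career display the same pattern as one occasion drawn from each of many high-scoring individuals.

```python
# Minimal sketch, with purely hypothetical parameters: if a trait exerts a real
# causal effect on per-occasion outcomes, the "group-level" pattern should also
# show up when we aggregate many occasions from a single individual's career.
import random

random.seed(0)

def sloppy(trait_score):
    # Assume (for illustration only) that higher conscientiousness lowers
    # the per-occasion probability of sloppy work; trait_score is in [0, 1].
    p_sloppy = 0.3 - 0.2 * trait_score
    return random.random() < p_sloppy

# (1) Group-level pattern: one occasion each from 10,000 high-scoring workers.
group_rate = sum(sloppy(0.9) for _ in range(10_000)) / 10_000

# (2) Individual-level pattern: 10,000 occasions from one high-scoring worker.
career_rate = sum(sloppy(0.9) for _ in range(10_000)) / 10_000

print(f"sloppiness across a high-scoring workforce: {group_rate:.3f}")
print(f"sloppiness across one high-scorer's career:  {career_rate:.3f}")
# Both rates converge on the same value, so a genuine group-level effect
# should license predictions about the overall pattern of an individual's work.
```

On these (again, hypothetical) assumptions, the two rates come out effectively identical, which is just the point made above: a real causal effect does not vanish when we move from a pool of individuals to the career of one.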
We are also told that the IAT is like an unreliable water-divining rod (perhaps then it should be abandoned?) but that it doesn't matter because there are other tools strongly evidencing implicit biases. However, this further support is consistently unimpressive. Here's just one example: do "studies show" that college professors are biased against Lamar and favor Brad when it comes to replying to emails? What the researchers showed was that although professors, mostly in business schools and health sciences, responded significantly less often, there was no effect for faculty in the humanities, basically no effect in the social sciences, a bit of an effect in the hard and life sciences (quite possibly marginal), and even "reverse bias" in the fine arts. Payne et al. describe this highly localized effect as part of a "widespread" pattern, thus giving the impression that college professors as a group exhibit callback biases. Except they don't (I wrote to Milkman a few years back about subfields, and she reported that they did not have sufficient data for philosophy). Let us further note that there was no effect anywhere if the request was urgent (also, shouldn't implicit bias impose itself most strongly when there is less time for reflection?). And, curiously, there was no effect for Hispanic females under any condition.
Still, what's up with the business schools? I don't know. Perhaps it's a one-off result that will not be replicated. Or maybe business schools are full of Trump Jr.s ("TJs"). Supposing they are: why should callback data bear on the issue of whether biases are implicit or explicit? Indeed, they do not if TJs are fully aware of their attitudes about women and minorities. Meanwhile, even if TJs are scarce in the health sciences, note that more than 70% of their students are women. Might this help explain why inquiries from men are given slightly more attention? Again, I don't know, but I am curious about why implicit bias magically turns itself off when we enter the College of A&S.
Let us step back and take stock.