MOVING TO FRONT, ORIGINALLY POSTED APRIL 2--MORE COMMENTS WELCOME; SEE REPLY BY DORIS ET AL. IN COMMENTS
(There's been lots of chatter on social media about the recent attempt to salvage "implicit bias" by philosopher John Doris and psychologists Laura Niemi and Keith Payne in Scientific American. At the suggestion of Edouard Machery [Pitt], I invited philosopher of cognitive science Sean Hermanson (FIU) to comment on these issues. His essay follows, and comments are open [they will be moderated for substance and relevance]--BL.)
Have you missed all the wrangling in philosophy journals and books over the (alleged) non-existence of implicit biases? In actuality, philosophers have mostly avoided scrutinizing findings about implicit biases and associated concepts, such as stereotype threat and micro-aggressions. Debates among philosophers--even lukewarm ones--about the existence of implicit bias are hard to find.
Psychology is, arguably, undergoing a replication crisis, although at least its problems are being discussed and reforms are underway. Meanwhile, philosophy's own crisis of replication continues in the form of a slow-motion crash powered by ideology and steered by confirmation bias. The crucial difference is that whereas in social psychology (and elsewhere) results often fail to replicate, philosophy's problem runs in reverse: all too reliably, philosophers recycle poorly informed opinions about the efficacy of implicit biases. This curious homogeneity is somewhat concealed beneath the seat cushions, which nonetheless often sport attractive designs referencing intricate mappings--of behavior to responsibility, to propositional states, and to various other cognitive structures.
Now it is time to vigorously apply the brakes.
Or: not so, according to a recent article aiming to reassure audiences that, despite the speed bumps (and speed mounds and even speed mountains), implicit bias will overcome its obstacles. The IAT once enjoyed a starring role, but it turns out that it doesn't produce stable results and fails to correlate with actual discriminatory behavior. Nevertheless, Payne et al. contend that implicit bias is real and important and that the IAT is useful. How can this be so?
Payne et al. argue that the IAT was never meant to be individually predictive. Just as scoring high on conscientiousness won't tell you whether somebody's work right now is sloppy, even though knowing that a whole workforce scores high predicts their work will tend to be done right, the IAT is a blunt instrument that reveals bias only at the level of groups, not individuals. However, since groups are composed of individuals, doesn't somebody's work (several people's, really) have to be biased towards conscientiousness (setting aside the weird view that conscientiousness in me somehow affects your work but not my own)? In other words, if conscientiousness exerts real causal effects on outcomes, then for many individuals we should be able to predict that their work will be conscientious rather than sloppy. Keep in mind that one's work "on a particular occasion" isn't at issue; it is only the overall pattern in an individual's actions that matters. Data points ought to exhibit group-level patterns whether they are drawn from the career of one conscientious individual or several. This is much the same as how a long-standing pattern of a particular employee abstaining from hand-washing raises expectations that customers served by that person will develop food poisoning.
We are also told that the IAT is like an unreliable water-divining rod (perhaps then it should be abandoned?), but that this doesn't matter because there are other tools strongly evidencing implicit biases. However, this further support is consistently unimpressive. Here's just one example: do "studies show" that college professors are biased against Lamar and favor Brad when it comes to replying to emails? What the researchers actually showed was that while professors in business schools and health sciences responded significantly less often, there was no effect for faculty in the humanities, basically no effect in the social sciences, a small (quite possibly marginal) effect in the hard and life sciences, and even "reverse bias" in the fine arts. Payne et al. describe this highly localized effect as part of a "widespread" pattern, thus giving the impression that college professors as a group exhibit callback biases. Except they don't (I wrote to Milkman a few years back about subfields, and she reported that they did not have sufficient data for philosophy). Let us further note that there was no effect anywhere if the request was urgent (also, shouldn't implicit bias impose itself most strongly when there is less time for reflection?). And, curiously, there was no effect for Hispanic females under any condition.
Still, what's up with the business schools? I don't know. Perhaps it's a one-off result that will not be replicated. Or maybe business schools are full of Trump Jr.s ("TJs"). Supposing they are: why should callback data bear on the issue of whether biases are implicit or explicit? Indeed, it does not if TJs are fully aware of their attitudes about women and minorities. Meanwhile, even if TJs are scarce in health sciences, note that more than 70% of their students are women. Might this help explain why inquiries from men are given slightly more attention? Again, I don't know, but I am curious about why implicit bias magically turns itself off when we enter the College of A&S.
Let us step back and take stock.
In acknowledging these points we can call attention to the ubiquitous Glass Box fallacy: claiming that it is transparent that the specific nature of an internal cognitive information-processing mechanism is implicit rather than explicit, on the basis of a crude behavioral measure that is in no way capable of making that determination. This happens constantly, including with many of Payne et al.'s examples. Take callbacks again: the fact that somebody fails to send an email is no more evidence that they are under the influence of an implicit process than that they are under the influence of LSD.
Another source of support concerns police shootings. Although this issue inflames intense emotions, there are strong doubts as to whether black suspects are disproportionately victims of wrongful shootings. Researchers have found over and over that police officers are not more susceptible to false positives when the suspect is black--though, let us agree, cops might use associative knowledge to process decisions to shoot faster for black suspects. Whatever is going on here, there is no clear evidence that racial bias is explanatory when it comes to wrongful killings by police. Why then, Payne et al. ask, is there a correlation between racial disparities in shootings and findings of implicit bias? One answer is that regions where people have stronger associative knowledge of the correlation between race and crime tend to be those where there is more crime and more criminals who happen to be black, more police shootings, and more police shootings of black suspects, guilty or not. This explanation fits better with the fact that, given base rates for criminality, the relationship between race and police shootings is unsurprising. No doubt pointing this out will upset some who genuinely care about massive disparities in life outcomes between blacks and other groups. As Payne et al. urge, there remains the matter of reflecting and acting to ameliorate social disparities that interact with race (among other characteristics), and stereotyping groups as inherently criminal shifts attention away from the structural and historical contexts contributing to anti-social tendencies. All this said, it is counterproductive to confuse this motivation with the need to obtain a correct representation of the cognitive mechanisms driving human behavior. Replacing a divining rod with a dowsing rod is not going to do that.
Turning back to the philosophy profession: far from hosting heated debates, philosophers rarely so much as acknowledge the scientific controversies. Doubts are not conducive to action--as when recent calls to dissipate chilly climates, especially pervasive hiring biases, were reaching a crescendo. Although some are trying to reframe the conversation as one about demographic diversity in journals, the former kind of activity drastically tapered off just as data began to strongly indicate that implicit biases conspicuously failed to reveal themselves in the high-stakes matter of philosophy hiring decisions--going back to at least 2005 (i.e., as far back as we have reliable data). Indeed, even more surprising is that since 2010 women seem to enjoy a significant advantage on the market. In order to understand demographic variance we need to think more about pre-university influences, not implicit biases.
In the aftermath of the collapse in confidence about implicit bias's relevance to issues in the philosophy profession, central players have begun shifting to safer territory. But instead of laying groundwork for more questionable scholarship along the lines already noted, philosophy ought to be having conversations about the practices that may have exacerbated its own crisis. In my darker moments I think Dennett gets it backwards in urging philosophers to pay more attention to the sciences. At least when armchair metaphysicians get things wrong, no real-world resources are wasted and no misguided policies burden us.