A new PGR always brings the aggrieved out of the woodwork, though I do wish they would learn some new routines. Herewith, to save time, the five most commonly repeated "objections" to the PGR, almost all without merit (the last one raises a legitimate issue, to which I'll return):
1. No one filling out the surveys really knows enough about everyone on the faculty to evaluate them all. No kidding! That's why we do a survey of hundreds of experts in many different fields. A good survey aggregates a lot of partial knowledge to give us a more complete picture. If any one individual could know as much as the 300 philosophers who complete the PGR surveys, then we could just ask that person and be done. And, of course, in the absence of the PGR as a resource, that's what happens: students ask a couple of teachers, and that's the end of it. (And if those teachers are really "out of the loop" or in the grip of utterly idiosyncratic prejudices, then the student is really in trouble.) Anyone can look at the list of the 300 evaluators. If you really don't care about their opinion, then don't use the PGR (and good luck to you!).
2. Why rely on opinion surveys? Aren't there objective measures of quality? No, actually, there aren't, as the National Research Council in the U.S. discovered, having squandered millions of dollars on results no one takes seriously. Really, take a deep breath: there isn't any fact in the world that can prove or disprove the quality of particular philosophical work. All there is in philosophy is the opinion of experts. Research universities--in their hiring and tenure decisions--operate on the premise that the opinion of experts is what matters. We have nothing else to go on.
3. Isn't this just a 'popularity' contest? Only if you think the philosophical caliber of a faculty, which is what evaluators are asked to assess, is equivalent to popularity or friendliness. The whole rationale for a "snowball" sampling procedure, which is what the PGR uses, is to garner informed, expert opinion, not to gauge 'popularity'. No such procedure is, or could be, perfect, but the PGR's is clearly "good enough" to provide some useful guidance to students identifying suitable programs for further study.
4. This whole report is biased against Continental philosophy, isn't it? No, it's not--in fact, Continental philosophers are, arguably, disproportionately represented in the evaluator pool compared to their presence in the profession at large. Unfortunately, there is a vocal fringe group of philosophers (the "Party-Line Continentals" as I've called them) who want to protect "Continental philosophy" as their turf, and so they have a vested interest in systematically misrepresenting the PGR, especially since the dozens of Continental philosophers and scholars who participate in the survey don't generally have a high opinion of this fringe.
5. The report encourages departments to be "conservative" in their hiring decisions. Where is the evidence? Given that the PGR also evaluates some 30 different philosophical sub-specialties, there is opportunity for departments to improve their national and international standing along multiple dimensions, and many departments, in fact, do just that (think Carnegie-Mellon or South Florida or Bowling Green and so on). What is true is that, as the sociologist Kieran Healy (Duke) found in studying prior iterations of the survey, all else being equal, appointing someone in language/mind/metaphysics/epistemology gives a program a bigger boost in the overall results than appointing someone in, say, history of philosophy. ("All else being equal" means that the philosophers in question are of equal stature in their fields. In fact, appointing Alan Code or Terence Irwin in ancient, or Michael Forster or Raymond Geuss in Continental, clearly delivers more reputational bounce than appointing "B-team" philosophers in other areas.) Given my own philosophical sympathies and interests, I wish it weren't so, but the question is whether the PGR created this valuation or simply records it. I'm certain the PGR didn't create it, but the more important question is whether it reinforces it. I welcome suggestions about how to handle the surveys so as not to reinforce existing professional opinion on this score, without at the same time engaging in manipulation of that opinion that would vitiate the value of the exercise.
ADDENDUM: "Snowball sampling," the technique utilized by the PGR, is not a disreputable method; it is the correct method to use when what is wanted is expert, "insider" information.
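For readers unfamiliar with the method, here is a minimal illustrative sketch of how snowball sampling works in general: a seed pool of recognized experts nominates further experts, and the pool grows over successive waves. This is not the PGR's actual procedure; the names, nomination lists, and parameters below are hypothetical and purely for illustration.

import random

def snowball_sample(seed_experts, nominations, waves=3, per_expert=2):
    # Start from a seed pool of recognized experts; in each wave, every
    # new participant nominates peers, a few of whom join the pool.
    pool = set(seed_experts)
    current_wave = set(seed_experts)
    for _ in range(waves):
        next_wave = set()
        for expert in current_wave:
            candidates = [p for p in nominations.get(expert, []) if p not in pool]
            picked = random.sample(candidates, min(per_expert, len(candidates)))
            next_wave.update(picked)
        pool.update(next_wave)
        current_wave = next_wave
    return pool

# Toy usage with made-up names and nomination lists.
seeds = ["Expert A", "Expert B"]
noms = {
    "Expert A": ["Expert C", "Expert D"],
    "Expert B": ["Expert D", "Expert E"],
    "Expert C": ["Expert F"],
    "Expert D": ["Expert G"],
}
print(sorted(snowball_sample(seeds, noms)))

The point of the sketch is simply that the sampling frame is built from insiders' nominations rather than from a random cut of the general population, which is why it is suited to gathering expert opinion.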