Mr. Carson, to his credit, has been fixing some of the errors in the original version of his post. He has limited the study to US and Canadian programs; he's corrected a number of errors in the original data, with help from philosophers at various departments; and he's revised the comparison of the placement results to the 2011 PGR rankings. Here's what he writes in the new version:
I had compared placement rankings based on data from 2000 to 2013 to faculty rankings from 2011 only. However, this was unfair. Either I needed to compare a school's overall placement ranking to a school's overall faculty ranking, or I needed to compare a school's placement rank by year to a school's faculty rank by year instead of mixing the two approaches together. I correct that mistake below by looking at the average overall faculty ranks compared with the average overall placement ranks.
The overall assumption of the Leiter Report seems to be that if you go to a school with well-ranked faculty, you will get a better placement in both the short term and the long term, and hence have a more successful philosophy career overall, than if you go to a school with lower-ranked faculty. I wish to test that assumption.
I calculated the average faculty rank for each school from 2004-2013 and then compared this with the overall placement rank for each school (based on TT/T/permanent placements) from 2004-2013. If the overall assumption of the Leiter Report is correct, schools with good overall faculty ranks should, in general, have good placement ranks as well. Similarly, schools with poor overall faculty ranks should, in general, have poor overall placement ranks as well. Is this the case?
Looking at the initial placement ranks, it appears that this assumption is moderately supported. Looking at the chart below, one can see that there is a positive trend between overall faculty rank and overall placement rank. As a school's overall placement rank gets poorer (moves to the right), the overall faculty rank gets poorer as well (moves upwards). However, one can also see that there are many schools that do not follow this trend. In particular, there are several schools that rank very well in placement but do not rank well in faculty. Similarly, there are several schools that rank well in faculty but do not rank so well in placement.
How strong is this relationship? Using the correlation coefficient, the faculty rank explains (roughly) about 50% of the placement rank (Note: I am assuming faculty rank determines placement rank, not the other way around). However, that still leaves about 50% of the placement rank explained by other unknown factors.
He also found that specialty rankings had a slightly stronger correlation.
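(A statistical aside: what conventionally "explains" a share of variance is the square of the correlation coefficient, so "about 50%" presumably reflects r² ≈ 0.5, i.e., a correlation of roughly 0.7. For anyone wanting to check figures of this kind, here is a minimal sketch using Spearman's rank correlation on made-up rank lists, not Mr. Carson's actual data:)

```python
# Minimal sketch, not Mr. Carson's code: Spearman's rank correlation on
# hypothetical faculty and placement rank lists. Note that the share of
# variance "explained" is rho squared, not rho itself.
from scipy.stats import spearmanr

faculty_rank = [1, 2, 3, 4, 5, 6, 7, 8]      # hypothetical PGR ranks
placement_rank = [2, 1, 5, 3, 4, 8, 6, 7]    # hypothetical placement ranks

rho, p_value = spearmanr(faculty_rank, placement_rank)
print(f"rho = {rho:.2f}, rho^2 = {rho**2:.2f}, p = {p_value:.3f}")
```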
This is less ridiculous than what Mr. Carson did the first time around, but it still does not make a lot of sense, since placement is backwards-looking and faculty quality is, at least as far as placement goes, forward-looking. If one is looking at placement from 2000-2013, it would be most meaningful to look at PGR rankings from, say, 1995 to 2008, since the students placed in 2000-2013 would have been looking at schools during that period. My guess is the correlation would go up considerably, though it still wouldn't be anywhere close to perfect given all the other variables that go into placement success. (As a side-note, when I first ranked NYU highly in the late 1990s, I was lectured by various people that this wasn't reasonable, since the school didn't have a placement record yet. As I pointed out to those posing this objection, since Ned Block, Kit Fine, and others had all placed students in their prior jobs, it seemed a tad odd to think they wouldn't now. Now that Mr. Carson has fixed some of his data, guess who is #1 in his placement ranking?)
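To make the lagged comparison concrete, here is a minimal sketch of how one might align the two series; the six-year lag and all of the numbers are illustrative assumptions, not anything drawn from the actual data:

```python
# Minimal sketch (all data hypothetical): pair each placement cohort with
# the PGR rank from six years earlier, roughly when those students were
# choosing programs. The lag length is an illustrative assumption.
LAG = 6

pgr_rank_by_year = {1995: 3, 1996: 3, 1997: 2, 1998: 2}
placement_rank_by_year = {2001: 4, 2002: 2, 2003: 3, 2004: 2}

pairs = [
    (pgr_rank_by_year[year - LAG], placement_rank_by_year[year])
    for year in sorted(placement_rank_by_year)
    if (year - LAG) in pgr_rank_by_year
]
print(pairs)  # rank pairs ready for a correlation like the one above
```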
In addition, Mr. Carson still has no categorization of jobs other than tenure-track. But if he were to incorporate something like the widely used Carnegie Classifications, then I would be astonished if he didn't find an even stronger correlation between, say, PGR rank and placement at doctorate-granting institutions. Breaking out that information would be useful for students, since students have different preferences and aspirations for the kinds of jobs they hope to get.
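Here is a minimal sketch of what such a breakdown might look like; the records are hypothetical placeholders, not Mr. Carson's data:

```python
# Minimal sketch (hypothetical records): tally placements by the hiring
# institution's Carnegie class, so correlations could then be computed
# within each category rather than over all jobs lumped together.
from collections import Counter

placements = [
    ("Graduate A", "Doctoral Universities"),
    ("Graduate B", "Baccalaureate Colleges"),
    ("Graduate C", "Doctoral Universities"),
    ("Graduate D", "Master's Colleges and Universities"),
]

by_class = Counter(carnegie_class for _, carnegie_class in placements)
for carnegie_class, count in by_class.most_common():
    print(f"{carnegie_class}: {count}")
```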
UPDATE: It was just pointed out to me that while Mr. Carson, correctly, removed the UK and Australasian schools because he didn't understand what it meant to be placed into a job as a "Lecturer," he is also not counting any North American graduates who get these jobs (which are typically better than tenure-track jobs, i.e., they are de facto permanent) in the placement results. So even for the North American programs, the "placement rankings" are undercounting significantly (especially, I should add, for the highest-ranked PGR departments, whose graduates often head overseas [and often come from overseas]). I would urge the Philosophy News website to put a major WARNING on this data, something like, "Work in Progress--Not Yet Reliable."