UPDATE: Given the large number of (largely new) visitors coming to this post, and the various updates I am adding to it, I will leave it at the top for a few days. Regular readers are encouraged to scroll down for new postings. Thanks.
=========================
The NRC ranking of doctoral programs for 2005-06 is finally out. A few quick points, and then some details about the results relevant to philosophy:
1. The NRC ranking is not a ranking of faculty quality, so it is not really comparable to the PGR, even the 2004-06 PGR, the closest in time to the data the NRC collected, which was for 2005-06. While every past NRC report collected and published expert opinion about faculty and program quality, this time the NRC did not. Only one of the 20 variables used by the NRC has any connection to faculty quality, and that variable (major awards and grants [adjusted for faculty size], such as Guggenheims, NSF Fellowships, American Academy membership, and the like) is only a very weak indicator, for a host of obvious reasons: only a small number of such awards were made during the period studied, so just one or two faculty can make a huge difference to the results in a small department; many of these awards favor some areas of philosophy over others [philosophers of science and logicians can get NSF awards, other philosophers usually can't; Guggenheim and ACLS Fellowships appear to favor historians of philosophy and value theorists over people in philosophy of mind and epistemology; American Academy membership is 'chummy' and tends to go to "friends of friends" and older faculty, making it a better backward-looking than forward-looking metric--a school with great younger faculty won't get picked up by this measure; etc.].
2. The NRC ranking purports to be a measure of program quality or attractiveness. (The idea that you could measure program quality without having any real measure of faculty quality is, in itself, astonishing.) It purports to do this by aggregating twenty different factors in the humanities, broken into three categories: Research Activity (meaning the one quality-related variable noted above, plus per capita publications, which imposes no quality control for journal, publisher, impact, etc., and so is largely meaningless); Student Support and Outcomes (e.g., graduate student funding [rich, private schools fare better on average on this metric], job placement [without, as far as I can tell, any audit of the data schools reported], time-to-degree, availability of student health insurance, and several other variables); and Diversity of the Academic Environment (i.e., ethnic and gender diversity of the faculty and student body). Note an irony about the use of job placement, which I assume got significant weight in both of the overall rankings (about which more below). A school reporting job placement in 2005-06 for the preceding five years would be reporting on the success of students who chose the school in the early-to-mid-1990s. The last NRC report, which included a regular reputational ranking of faculty quality, came out in 1995. My guess is the correlation between the job placement statistics and the 1995 NRC report (and the mid-to-late 90s PGRs) is probably pretty strong, but that, of course, is because job placement is always a backward-looking measure.
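To see why the placement data look backward rather than forward, here is a small back-of-the-envelope calculation. The figures are assumptions for illustration only (a five-year reporting window ending in 2005 and a typical time-to-degree of six to eight years), not numbers taken from the NRC report:

```python
# Illustrative arithmetic for the lag in placement data; the window and
# time-to-degree figures are assumptions, not NRC data.
report_window = range(2000, 2006)   # placements reported in 2005-06 for the preceding five years
time_to_degree = (6, 8)             # assumed typical years from entry to Ph.D.

entry_years = (min(report_window) - max(time_to_degree),
               max(report_window) - min(time_to_degree))
print(f"Students placed in that window entered roughly {entry_years[0]}-{entry_years[1]}.")
# -> Students placed in that window entered roughly 1992-1999.
```

On those assumptions, the earliest cohorts in the placement window chose their programs in the early-to-mid 1990s, which is why the measure would track the 1995 NRC and the mid-to-late 90s PGRs rather than the state of a department in 2005-06.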
(I do want to emphasize that there is no indication that the NRC audited any of the self-reported data from schools: not the job placement figures, not the time-to-degree figures, not even the faculty rosters and the CVs. I know from PGR experience that departments err in their self-reporting of information in only one direction. If I'm mistaken about this, please let me know.)
From this mass of data, the NRC constructed two rankings. The R-Ranking assigned weights to these variables in order to mimic, in effect, the results of a secret reputational survey of an unknown number of putative experts in each field (seriously). More precisely: "a sample group of faculty were asked to rate a sample of programs in their fields. Then, a statistical analysis was used to calculate how the 20 program characteristics would need to be weighed in order to reproduce most closely the sample ratings. In other words, the analysis attempted to understand how much importance faculty implicitly attached to various program characteristics when they rate the sample of programs." The NRC does not report the results of its reputational survey, amazingly. Nor is it clear on my reading whether there was any reason to think that the faculty evaluating programs were even aware of, let alone interested in, some of the NRC variables. The NRC insists the R-Ranking is not a reputational survey, and that is right. It is essentially a weird and not very reliable approximation of a reputational survey by an unknown group of evaluators of unknown size. (UPDATE: On p. 198 of the NRC report, we learn that "up to 200" evaluators for each field were surveyed, and on p. 286, we learn that a total of 171 philosophy faculty [no indication of how they were chosen, or what distribution of expertise or areas they represented] were each asked to evaluate not more than 50 programs; that each program had an average of 46.7 faculty evaluate it, with a low of 34 faculty for some programs and a high of 57 for others. A typical PGR survey collects responses from between 250 and 300 faculty for each program evaluated, and, of course, the list of evaluators is public. Note, too, that no rater evaluated all of the philosophy programs--see the comments by Stigler in Update #8, below.)
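For readers curious about the mechanics, here is a minimal sketch of how a regression-derived ranking of this kind works. Everything in it is invented for illustration (the number of programs, the characteristics, the ratings); it is not the NRC's code or data:

```python
# A sketch of the R-Ranking idea: infer the weights raters implicitly place on
# program characteristics by regressing a sample of reputational ratings on
# those characteristics, then apply the fitted weights to score every program.
# All data here are randomly generated stand-ins, not NRC data.
import numpy as np

rng = np.random.default_rng(0)

n_rated, n_all, n_vars = 50, 90, 20    # hypothetical: 50 sample-rated programs, 90 programs total, 20 variables

# Standardized program characteristics (publications per capita, awards, funding, ...)
X_rated = rng.normal(size=(n_rated, n_vars))
X_all = rng.normal(size=(n_all, n_vars))

# Average reputational rating each sampled program received from its raters
ratings = rng.uniform(1, 5, size=n_rated)

# Least-squares fit: what weighting of the 20 characteristics best reproduces the ratings?
A = np.column_stack([np.ones(n_rated), X_rated])   # intercept column + characteristics
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)
implicit_weights = coef[1:]

# The "R-Ranking": score all programs with the inferred weights and sort, best first
r_scores = coef[0] + X_all @ implicit_weights
r_ranking = np.argsort(-r_scores)
print(r_ranking[:10])
```

The point of the sketch is just that the weights are whatever makes the fit reproduce the sample ratings, which is why the identity and number of the sample raters matter so much.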
The NRC also calculated an S-Ranking, which weighted the variables according to the importance respondents in each field said the criteria deserved. This is vulnerable to the confusions Ned Block (NYU) noted.
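By contrast with the R-Ranking, the S-Ranking uses weights the respondents state directly. A minimal sketch, again with invented numbers rather than NRC data:

```python
# A sketch of the S-Ranking idea: respondents state how important each
# characteristic is, the stated weights are averaged and normalized, and every
# program is scored as a weighted sum of its (standardized) characteristics.
# All data here are randomly generated stand-ins, not NRC data.
import numpy as np

rng = np.random.default_rng(1)
n_all, n_vars, n_respondents = 90, 20, 171

X_all = rng.normal(size=(n_all, n_vars))                   # standardized characteristics
stated = rng.uniform(0, 1, size=(n_respondents, n_vars))   # each respondent's stated importances

weights = stated.mean(axis=0)
weights /= weights.sum()        # normalize so the weights sum to 1

s_scores = X_all @ weights
s_ranking = np.argsort(-s_scores)
print(s_ranking[:10])
```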
Given the huge range of variables and the baroque methodology (which will no doubt generate its own cottage industry of commentary), it should not be surprising that the results (for Philosophy) qualify as somewhere between "odd" and "inexplicable."
3. The huge time lag--the NRC Report released today is already five years out of date--is quite significant, especially for the "Research Activity" measures, which, because they rely on per capita (or percentage-of-faculty) figures, can be quite sensitive to just one or two faculty movements. And since the Research Activity measures were, it appears, given the most weight in the R- and S-Rankings, these changes are also quite significant. (One guesses, of course--I can't tell from the material I've seen--that it was the Awards & Grants variable that dominated, since per capita productivity is such an obviously poor measure.) So, consider that Yale, which according to the NRC is not close to the top 25 in either the R-Ranking or in "Research Activity," did not have on its 2005-06 faculty two highly 'decorated' and recognized senior philosophers: Stephen Darwall, who moved from Michigan, and Thomas Pogge, who moved from Columbia. (Even without them, it seems bizarre that Yale was not in the top 25 for "Research Activity.") Given the relatively small size of the Yale department, it's hard to see how just these two, even by the NRC's criteria, would not have changed the results significantly. Similarly, the 2005-06 University of Chicago faculty roster would have included John Haugeland (Guggenheim winner, now deceased), Charles Larmore (Fellow of the American Academy, now moved to Brown), and William Wimsatt (productive and influential philosopher of biology, now retired). These examples could be multiplied in both directions, though I think the important point to remember is that the NRC was not really measuring the philosophical quality of the faculty.
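To illustrate how fragile a per capita measure can be in a small department, here is a toy calculation with made-up numbers (they are not Yale's or Chicago's actual figures):

```python
# Toy illustration of per capita sensitivity in a small department.
# All numbers are invented; none are drawn from the NRC report.
def awards_per_capita(awards: int, faculty: int) -> float:
    return awards / faculty

# A 15-person department in which two decorated senior faculty hold 4 of its 6 awards
with_pair = awards_per_capita(awards=6, faculty=15)      # 0.40 awards per faculty member
without_pair = awards_per_capita(awards=2, faculty=13)   # ~0.15 once the two have departed

print(f"with the pair: {with_pair:.2f}, without: {without_pair:.2f}")
```

On numbers like these, the arrival or departure of just two people moves the per capita figure by a factor of more than two and a half, which is why a 2005-06 snapshot can diverge so sharply from the current state of a department.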
Putting these criticisms and concerns to one side, there is data in the NRC Report that should be of interest to prospective students, particularly the systematic data gathered on time-to-degree. I hope the NRC will make that data easily available on the Internet.
Below the fold, a sample of some of the results for Philosophy.