IHE has a useful story and summary of the 200-page (!) document. What made the earlier NRC reports (1982, 1995) useful was that they included systematic surveys of experts in different disciplines evaluating program faculty and the training of students. That is no more. According to the IHE article, the worry about expert evaluation was that, "Many people assume departments at outstanding universities must be outstanding as a result, even if that's not the case, or people who associate certain stellar researchers with a department may not know that they have retired." Dare I observe that there is a pretty simple solution to these problems: ask experts to evaluate faculty lists, not university names, and make sure the faculty lists are current and exclude those who are retired, dead, not really teaching, etc.
Instead of the peer evaluations that made the prior NRC reports so important, programs will now be evaluated using 21 different variables--many of them different in kind from one another (see below)--each weighted differently. Here are the variables being utilized (I wish I were making this up, but, really, I'm not!):
The 21 Program Characteristics Listed in the Faculty Questionnaire.
The weightings to be used in the case of philosophy programs are not yet public; they were determined in each case by a survey of people in the field. No doubt many of these individual measures will be illuminating, but the idea of aggregating them in order to say that "Ivy University is in the 5-15 cluster" will produce a meaningless, 'nonsense' number: what does it mean to say Ivy University is somewhere between 5th and 15th based on some aggregation of the number of publications per faculty member, the number of international students, the number of non-Asian minority faculty, and the number of student support activities? Who would care about such an aggregation? What is most distressing is that the NRC has eliminated any meaningful measure of faculty quality, relying instead on factors that have no qualitative dimension (e.g., publications per faculty member) and on proxies for quality like grants and honors, some of which are certainly probative (e.g., Guggenheim or NEH Fellowships), while others will just reinforce traditional hierarchies because of their insular and self-reinforcing nature (e.g., American Academy of Arts & Sciences membership).
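To see why the aggregate number is so hard to interpret, here is a minimal sketch of the kind of weighted sum at issue. The variables, values, and weights below are invented for illustration; they are not the NRC's actual data or weightings, which (as noted) have not been made public for philosophy.

```python
# Hypothetical illustration of a weighted aggregate of incommensurable
# program measures. All numbers below are made up for illustration and
# are NOT the NRC's actual data or weightings.

# Per-program measures, each on a different (incommensurable) scale
measures = {
    "publications_per_faculty": 2.3,   # articles per year
    "international_students": 14,      # headcount
    "minority_faculty": 3,             # headcount
    "student_support_activities": 7,   # count of programs offered
}

# Survey-derived importance weights (invented here)
weights = {
    "publications_per_faculty": 0.40,
    "international_students": 0.20,
    "minority_faculty": 0.15,
    "student_support_activities": 0.25,
}

# The aggregate collapses apples and oranges into one number
score = sum(weights[k] * measures[k] for k in measures)
print(score)  # roughly 5.92 -- but 5.92 of *what*, exactly?
```

The arithmetic is trivial; the problem is conceptual. A score built from headcounts, publication rates, and activity counts has no unit and no qualitative interpretation, which is the sense in which the resulting ranking is a 'nonsense' number.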
And then, of course, there is the delay issue. Most of the data collection on faculty took place over three years ago. Among those who would have been included for philosophy at UT Austin, for example, are Robert Kane (now retired), me, and Robert C. Solomon (now deceased). Chicago's evaluation will presumably include William Wimsatt (now retired), John Haugeland (retiring next year), and Charles Larmore (left for Brown). One Ohio State department reports that more than 20% of its current faculty have arrived since it submitted the faculty questionnaires to the NRC, while nearly 20% of those on its roster at the time have since left or retired. There will obviously be substantial variation in how much these changes in faculty rosters over the last 3-4 years matter, but in some cases they will be very significant.
In any case, I would be most interested to hear what philosophers think of the variables the NRC is using, and also what they think of the idea of aggregating such variables. Non-anonymous comments are preferred, though you must at least submit a valid e-mail address; submit comments only once, as they may take a while to appear.