And judging from this article, it is likely to be much less useful than prior iterations, and not simply because of the lag time between data collection and publication (at least two years, maybe more). All indications are that the NRC planning committee was captured by interest groups representing smaller universities, who pushed 'per capita' measures for the obvious reason: one superstar on a faculty of 15 counts for a lot more, per capita, than one superstar on a faculty of 30.
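To see the arithmetic behind the small-department advantage, here is a minimal sketch; the citation figures are invented for illustration and bear no relation to the NRC's actual data:

```python
# Toy illustration of why per capita measures favor small faculties.
# Assume one "superstar" producing 50 citations a year while every
# other faculty member produces 5 (invented numbers).

def per_capita_citations(faculty_size: int, superstars: int = 1) -> float:
    """Average citations per faculty member under this toy convention."""
    total = superstars * 50 + (faculty_size - superstars) * 5
    return total / faculty_size

print(per_capita_citations(15))  # 8.0 -- the superstar lifts a small faculty
print(per_capita_citations(30))  # 6.5 -- the same superstar, diluted
```

The same scholar moves the small department's average far more, so per capita measures systematically flatter smaller programs. Here's a taste of what to expect: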
Kuh provided new details on how the NRC is constructing three “supplemental measures” that will be both part of the main rankings and available individually. Although she called them “supplemental,” Kuh said that they are actually “essential measures” for doctoral programs. They are scholarly productivity, student outcomes and support and diversity.
It's unclear whether these factors will be amalgamated or presented separately; if the former, the overall result will be as meaningless as a U.S. News ranking of law schools, medical schools, or colleges based on a dozen different and incommensurable factors.
In each of these cases, data will support the rankings, but faculty surveys have been used to weight the relative importance of different factors that make up the analyses. While the scholarly productivity measure is closest to the values that shape the overall ranking, Kuh stressed that all of these measures matter. “The quality of doctoral programs is not just about the scholarly productivity and scholarly recognition of program faculty,” she said.
Of course, in philosophy we went through a variation on this nonsense during the Heckling campaign seven years ago: everyone knows that the scholarly distinction of the faculty is central, but it is a defeasible reason for choosing a program under particular circumstances, as when the faculty are disengaged from instruction and mentoring. We have no way of measuring that, however, and neither does the NRC.
For each of these three measures, there are subcategories:
- For scholarly productivity: Average publications per faculty member, average citations per publication, grants per faculty member, awards per faculty member.
- For student support and outcomes: Percentage of graduate students with full support, percentage of each entering cohort completing the program within six years, average time to degree, job placement of students, and availability of outcomes data.
- For diversity: Percentage of professors from underrepresented minority groups, percentage of faculty members who are women, percentage of students who are from underrepresented minority groups, percentage of students who are female and percentage of students who are international.
There will be some definitional shifts by discipline. For example, where the general measure is the percentage of an entering cohort finishing within six years, the humanities measure will use eight years. Then, within each of these measures, faculty surveys are being used to weight the various subcategories. Under scholarly productivity, for example, faculty members in the sciences weight grants much more heavily than humanities professors do.
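To make the weighting scheme concrete, here is a rough sketch of how survey-derived, discipline-specific weights would operate; the weight values and the normalization of each subcategory to [0, 1] are assumptions for illustration, not the NRC's published methodology:

```python
# Hypothetical discipline-specific weights for the scholarly productivity
# subcategories, as might come out of faculty surveys. All numbers are
# invented; the NRC's actual weights are not given in the article.
WEIGHTS = {
    "sciences":   {"publications": 0.3, "citations": 0.2, "grants": 0.4, "awards": 0.1},
    "humanities": {"publications": 0.4, "citations": 0.3, "grants": 0.1, "awards": 0.2},
}

def productivity_score(discipline: str, measures: dict) -> float:
    """Weighted sum of normalized subcategory scores (each in [0, 1])."""
    w = WEIGHTS[discipline]
    return sum(w[k] * measures[k] for k in w)

# The same raw profile scores differently depending on the discipline's weights:
profile = {"publications": 0.5, "citations": 0.5, "grants": 0.9, "awards": 0.2}
print(productivity_score("sciences", profile))    # 0.63
print(productivity_score("humanities", profile))  # 0.48
```

The point is simply that an identical raw profile can rank quite differently once the survey-derived weights are applied, which is why the choice of weights matters so much.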
The questions Friday didn't challenge the importance of any of the categories, but raised concerns about how they are being measured. One dean said that her agricultural science professors were bothered by the idea that grants are counted by number, without regard to their quality, importance, or size. So a faculty member who receives $1,000 from a local agricultural producer to study some local problem counts the same as a faculty member who pulls down a large, peer-reviewed grant from a prestigious national agency. The dean said that there was "a lot of angst" in some disciplines over such apparent flaws in the methodology.
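The dean's complaint, in effect, is that the grants measure is a raw count. A minimal sketch of the objection (the dollar figures are invented, and this is of course not the NRC's code):

```python
# The objection in miniature: a raw count is size-blind.
grants = [1_000, 2_500_000]     # a tiny local grant and a major federal one (dollars)

count_measure = len(grants)     # 2 -- each grant counts equally
dollar_weighted = sum(grants)   # 2,501,000 -- a size-sensitive alternative

# Under the count measure, dropping the $2.5M grant and adding two $1,000
# grants would *raise* the department's score.
```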
Another dean raised a question about how success is measured in the diversity categories, and was told that the greater the diversity, the greater the score. In many of the diversity categories that may make sense, since many departments have relatively low percentages of, for example, minority faculty members. But he said the international-students measure was potentially deceptive under this system: a graduate program that attracts no foreign students probably deserves to go down in the rankings, but a program where 95 percent of the students are international isn't necessarily better than one with 40 percent, and in fact is quite likely worse.
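The pathology is easy to reproduce with any scoring rule that rises monotonically in the percentage; the rule below is a hypothetical stand-in, not the NRC's published formula:

```python
# Hypothetical monotonic rule: score rises with the percentage, full stop.
def diversity_score(pct: float) -> float:
    return pct / 100.0

# The dean's worry: under any strictly increasing rule, 95% international
# necessarily outscores 40%, even though the 95% program may well be weaker.
assert diversity_score(95) > diversity_score(40)
```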
One could, of course, multiply the worries about applying these kinds of criteria to philosophy programs.
Meanwhile, the wickedly funny, self-proclaimed "cranky jerk" philosopher notes, regarding the impending January release of the new PGR:
What joy. For not only will undergraduates not be at the mercy of their (often clueless or biased) professors' impressions of what's what and who's who in the profession, but the Philosophy blogosphere will once again be alive with petty, misinformed, idiotic, self-serving whining about the PGR, all dressed up in the guise of righteous indignation.
Now, in addition, the misinformed and self-serving can also refer to the NRC, in the event that a department fares better there than in the PGR. We'll look at such cases when they arise; I strongly suspect that the explanation will be traceable to some of the peculiarities of the new NRC exercise noted above.
In any case, since more real information is better than less, one may hope that the NRC report will, in the end, prove more informative than this preview of its methodology suggests.