The two-sided quality of the connection between departments and specialties invites us to find ways of visualizing them both at the same time. But the large number of departments and specialties makes it tricky to generate interpretable pictures. There is a large family of methods designed to map multidimensional data onto just a couple of dimensions. Here I'll take one of the more straightforward ways of doing this and apply it to the 2006 data.
One of the nice features of the PGR data is the duality in the relationship between departments and specialties. Departmental identities are defined in part by the kind of specialized work that gets done in them. The identity of areas is associated with particular departments and schools (with a large or small 's'). The PGR data lets us see some of this association, and of course also make the link between this relationship and overall status. Like departments, some areas are judged more important than others.
One difficulty with visualizing the connection between specialization and departments is that there are too many dimensions of specialization to plot directly, and a lot of departments as well. On the one hand we'd like a nice picture with a lot of useful information, but on the other hand we need to make it visually comprehensible and true to the data. Here's one way to do this. It doesn't succeed perfectly in these aims, but I like it anyway.
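To give a sense of what the mapping involves, here is a minimal sketch in Python of one straightforward dimension-reduction technique, a simple correspondence analysis computed via the singular value decomposition, run on a made-up department-by-specialty score matrix. The labels, the simulated scores, and the choice of method are assumptions for illustration only, not the actual 2006 data and not necessarily the exact method behind the picture.

```python
import numpy as np

# Hypothetical department-by-specialty matrix of mean specialty scores.
# Labels and values are invented; this is not the PGR data.
rng = np.random.default_rng(0)
depts = [f"Dept {i}" for i in range(1, 11)]
areas = [f"Area {j}" for j in range(1, 7)]
N = rng.uniform(1.0, 5.0, size=(len(depts), len(areas)))

P = N / N.sum()                 # turn scores into proportions
r = P.sum(axis=1)               # row (department) masses
c = P.sum(axis=0)               # column (specialty) masses
S = np.diag(r**-0.5) @ (P - np.outer(r, c)) @ np.diag(c**-0.5)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates: the first two columns give x, y positions
# for departments and specialties in the same low-dimensional space.
dept_xy = (U * sv) / np.sqrt(r)[:, None]
area_xy = (Vt.T * sv) / np.sqrt(c)[:, None]
print(dept_xy[:, :2])
print(area_xy[:, :2])
```

Plotting the first two columns of each set of coordinates on the same axes puts departments and specialty areas together in a single two-dimensional picture.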
I want to get to the department-level stuff today instead of just looking at the raters, but I promised yesterday that I'd say something about the relationship between the field position of raters and their voting patterns. As with specialty areas, where you stand might depend on where you sit. If we slice raters into groups based on the PGR rating of their employer, we can calculate overall PGR scores based just on the votes from within each group, as we did with the specialty areas. For example, we can divide them into quintiles, plus one extra group for the raters who participated in the survey but whose departments were not rated. (There were a few of those in 2006.) The story is the same as yesterday, only more so: the rank order produced by different quintiles is very similar, there's hardly any variation in the top eight or nine departments, and the heterogeneity that does exist is for assessments of departments in the middle to lower-middle of the ranking table. So at least within the pool of raters, the people at lower-ranked schools produce more or less the same ranking as the people at higher-ranked schools.
While the rank order produced by different rater quintiles is very similar, the average scores awarded by each group do differ a bit. Here is a plot showing differences in the average scores awarded by raters employed by departments in the top twenty percent and those working at departments in the bottom twenty percent of the PGR. Again, this is 2006 data.
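As a rough sketch of the kind of comparison involved, the snippet below cuts simulated raters into quintiles by the PGR score of their home department, computes each department's mean score within each quintile, and then takes the gap between the scores awarded by top-quintile and bottom-quintile raters. The numbers, column names, and long-format layout are all invented for illustration rather than taken from the 2006 file.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical raters, each tagged with the PGR score of their employer,
# then sliced into quintiles on that score.
raters = pd.DataFrame({"rater": range(300),
                       "home_pgr": rng.uniform(1.5, 4.8, 300)})
raters["quintile"] = pd.qcut(raters["home_pgr"], 5,
                             labels=["Q1", "Q2", "Q3", "Q4", "Q5"])

# Hypothetical long-format votes: one row per (rater, department) evaluation.
votes = pd.DataFrame({"rater": rng.integers(0, 300, 6000),
                      "dept": rng.integers(0, 99, 6000),
                      "score": rng.integers(0, 6, 6000).astype(float)})
votes = votes.merge(raters, on="rater")

# Mean score each quintile of raters gives each department ...
by_group = votes.pivot_table(index="dept", columns="quintile",
                             values="score", aggfunc="mean", observed=True)

# ... and the gap between top-quintile and bottom-quintile raters.
gap = (by_group["Q5"] - by_group["Q1"]).sort_values()
print(gap.describe())
```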
Yesterday we saw that raters come mostly from the top half of PGR-ranked schools, with a good chunk of them from very highly-ranked schools. We also saw that specialty areas are not equally represented in the rater pool. (Specialty areas are not equally represented within departments, either, because not all subfields have equal status---more on that later.) Are voting patterns in the 2006 data connected to the social location of raters? Well, we can only say a little about this given the data constraints. But let's see what can be said.
First, voting frequency. Might it be the case that how many votes a rater casts is related to the PGR score of their home department? It's easy to think of reasons why this might be true. For example, what if people working at highly-ranked departments are highly opinionated (I know this seems very unlikely, but bear with me) and are happy to vote on every single department in the survey? Alternatively, it might be that people at high-ranking departments are somewhat snobbish (another wildly speculative notion, I admit) and this leads them to care not a whit for 85 of the 99 departments in the survey. What do the data say?
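One quick way to check for this kind of association, sketched below with simulated numbers rather than the actual rater file, is simply to correlate the number of departments each rater evaluated with the PGR score of their home department. The column names and values here are assumptions for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical rater-level table: the PGR score of each rater's home
# department and the number of departments that rater evaluated (1 to 99).
raters = pd.DataFrame({"home_pgr": rng.uniform(1.5, 4.8, 300),
                       "n_votes": rng.integers(1, 100, 300)})

# Is voting frequency associated with the rater's home-department score?
print(raters["n_votes"].corr(raters["home_pgr"]))                     # Pearson
print(raters["n_votes"].corr(raters["home_pgr"], method="spearman"))  # rank-based
```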
As with the current report, the 2006 rankings listed the names and affiliations of those who participated, along with the survey instrument and a bit of information about the response patterns of raters. Based on this information, we can say a little about where the raters come from. For example, in 2006 about sixty-five percent of raters were based in the U.S., eighteen percent in the UK, eight percent in Canada, five percent in Australia or New Zealand, and the small remainder elsewhere. We can also use the PGR scores of departments to see how raters were distributed across schools in 2006:
(PNG, PDF.) In 2006 the median department got a PGR score of 2.7. There were 99 departments in the 2006 survey, so getting a 2.7 or higher got you into the top 50. As you can see, while there are at least some raters across the distribution of PGR scores, the majority come from departments with average or above-average scores. Raters from very high-scoring departments (i.e., scoring 4 or more---that's the top ten in 2006, roughly speaking) are very strongly represented. Note that you could construct a histogram like this for the 2011 data yourself if you wanted to, just by counting up the evaluators listed in the report description.
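For anyone inclined to try that, here is a rough sketch of the counting exercise in Python: attach a departmental PGR score to each listed evaluator's affiliation and histogram the result. The affiliations and scores below are placeholders, not values transcribed from the report.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: one row per listed evaluator, plus a lookup table of
# departmental PGR scores. Both would have to be transcribed from the report.
evaluators = pd.DataFrame({"affiliation": ["Dept A", "Dept B", "Dept C",
                                           "Dept A", "Dept D", "Dept B"]})
pgr_scores = pd.Series({"Dept A": 4.2, "Dept B": 3.6,
                        "Dept C": 2.7, "Dept D": 1.9})

# PGR score of each rater's home department, then the distribution of raters.
rater_scores = evaluators["affiliation"].map(pgr_scores)
rater_scores.plot(kind="hist", bins=10)
plt.xlabel("PGR score of rater's home department")
plt.ylabel("Number of raters")
plt.show()
```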
What about overall patterns in the voting? Here's a histogram showing the number of times raters voted: that is, how many departments each rater gave evaluations for, bearing in mind that a rater could choose to assess anywhere from just one department to all 99.
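The underlying count is easy to produce once the responses are in a rater-by-department table; the sketch below uses randomly filled data, with missing entries standing in for departments a rater skipped, rather than the real response file.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical wide-format responses: one row per rater, one column per
# department, NaN where a rater skipped that department.
filled = rng.random((300, 99)) < 0.3
responses = pd.DataFrame(np.where(filled, rng.integers(0, 6, (300, 99)), np.nan))

# Number of departments each rater evaluated.
votes_cast = responses.notna().sum(axis=1)
print(votes_cast.describe())
votes_cast.plot(kind="hist", bins=20)
```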
I come in peace. As Brian mentioned last week, I'm going to be guesting on his blog for the next few days. For those of you who don't know me—which I imagine is most of you—I am a sociologist; I teach at Duke University both in my home department and at the Kenan Institute for Ethics; and for the past nine years or so I've been a blogger at Crooked Timber. Initially, I was tempted to treat this gig in the way that people tend to treat philosophers they meet in bars—viz., aggressively tell you all what my philosophy is, perhaps make a truly original joke that comes with fries, or maybe sketch out my own interpretation of two-dimensionalism. (The latter is typical of certain sorts of bars only.) On mature reflection I decided against these options, promising though they were. Instead, I'll mostly be telling you about some analysis I've done of the PGR. The data I'll be relying on come partly from information available on the PGR website itself, and partly from rater-anonymized versions of the 2004 and 2006 waves provided to me by Professor Leiter. I presented some of this material last month at a panel at the Central APA meetings, and I have also presented it to various Sociology and (once or twice) Philosophy departments in the recent past. In my posts here I'll begin by focusing on some of the questions that Philosophers tend to have about the data, but I also hope to get to some of the reasons why the PGR is an interesting entity in comparison to many other efforts to rank departments or other entities in academia, and why ranking has become so common in recent years.
So, first common question. Every department in the survey is ranked based on its mean overall reputational score. What sort of variability is there around those means?
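Before getting to the answer, here is the shape of that calculation in Python, again using simulated evaluations rather than the real ones: group the individual scores by department and compute the mean alongside a couple of simple measures of spread. The table layout and score values are assumptions for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical long-format evaluations: one row per (rater, department) score.
votes = pd.DataFrame({"dept": rng.integers(0, 99, 6000),
                      "score": rng.integers(0, 6, 6000).astype(float)})

# Mean overall score (the published ranking criterion) plus spread per department.
summary = votes.groupby("dept")["score"].agg(["mean", "std", "count"])
summary["se"] = summary["std"] / np.sqrt(summary["count"])  # standard error of the mean
print(summary.sort_values("mean", ascending=False).head())
```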
I'm very pleased to report that Kieran Healy, a sociologist at Duke University and informed observer of the philosophy profession, will be guest-blogging here the week of March 19. Professor Healy participated with me (and Jonathan Dancy and Jennifer Saul) at the Central APA panel on academic rankings in Chicago last month, and I invited him last week to write about some of the interesting analysis he presented there about the 2004 and 2006 PGR surveys (I will be giving him the 2011 data this summer, so that he can extend the analysis). I expect he will write a bit about that, and, I hope, also about his own research in sociology, which is extremely interesting.