As it does for the current report, the 2006 rankings listed the names and affiliations of those who participated in the report, along with the survey instrument and a bit of information about the response patterns of raters. Based on this information, we can say a little about where the raters come from. For example, in 2006 about sixty-five percent of raters were based in the U.S., eighteen percent in the UK, eight percent in Canada, five percent in Australia or New Zealand, and the small remainder elsewhere. We can also use the PGR scores of departments to see how raters were distributed across schools in 2006:
[Figure: distribution of raters by the PGR score of their department. PNG, PDF.] In 2006 the median department got a PGR score of 2.7. There were 99 departments in the 2006 survey, so scoring 2.7 or higher got you into the top 50. As you can see, while there are at least some raters across the whole distribution of PGR scores, the majority come from departments with average or above-average scores. Raters from very high-scoring departments (i.e., those scoring 4 or more, which is roughly the top ten in 2006) are very strongly represented. Note that you could construct a histogram like this for the 2011 data yourself if you wanted to, just by counting up the evaluators listed in the report description; a sketch of how you might do that is below.
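Here is a minimal sketch of that counting exercise, assuming you have transcribed the evaluator list into a small data file yourself. The file name and column name are hypothetical placeholders, not anything distributed with the report.

```python
# A minimal sketch (not the original analysis): tally evaluators by the PGR
# score of their home department and draw a histogram. "evaluators_2011.csv"
# and the "dept_pgr_score" column are hypothetical; you would build this file
# by hand from the evaluator list in the report description.
import csv

import matplotlib.pyplot as plt

scores = []
with open("evaluators_2011.csv", newline="") as f:    # hypothetical file
    for row in csv.DictReader(f):
        scores.append(float(row["dept_pgr_score"]))   # hypothetical column

# Bin raters by their department's PGR score (0 to 5 in half-point steps).
plt.hist(scores, bins=[x * 0.5 for x in range(0, 11)], edgecolor="black")
plt.xlabel("PGR score of rater's department")
plt.ylabel("Number of raters")
plt.title("Distribution of raters by department PGR score")
plt.savefig("raters_by_pgr.png")
```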
What about overall patterns in the voting? Here's a histogram showing the number of times raters voted; that is, how many departments each rater gave evaluations for, bearing in mind that they could choose to assess anywhere from a single department up to all 99.
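A tabulation like that is just a count of rows per rater. The sketch below assumes a hypothetical long-format file of ballots, one row per (rater, department) evaluation; the file and column names are placeholders, not the actual PGR data release.

```python
# A minimal sketch, assuming a hypothetical "ballots.csv" with one row per
# (rater, department) evaluation. It counts how many of the 99 departments
# each rater scored and plots the distribution of those counts.
import csv
from collections import Counter

import matplotlib.pyplot as plt

votes_per_rater = Counter()
with open("ballots.csv", newline="") as f:            # hypothetical file
    for row in csv.DictReader(f):
        votes_per_rater[row["rater_id"]] += 1         # hypothetical column

counts = list(votes_per_rater.values())
plt.hist(counts, bins=range(0, 101, 5), edgecolor="black")
plt.xlabel("Number of departments evaluated")
plt.ylabel("Number of raters")
plt.title("How many departments did each rater evaluate?")
plt.savefig("votes_per_rater.png")
```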