For the first time, we asked expert evaluators to score departments not only on overall faculty quality but also on faculty quality in the various areas of specialization. I know that many folks are eagerly awaiting the publication of the full results, so I thought it might be useful to preview, in detail, the results in one area in order to prepare readers for how the results should be interpreted.
Here is how to read the results:
Because, in many cases, the ratings reflect the presence of only one or two faculty in a department, the Advisory Board decided not to publish the precise scores. Programs are placed in “groupings” based on the rounded mean (rounded to the nearest .5). Next to each grouping, you will find the rounded mean for that group; next to the name of each program within that group, you will find the median score for that faculty in parentheses. Within a grouping, programs are listed alphabetically. Only programs with a rounded mean of “3” (meaning “Good”) or higher are so grouped. (Grouping by rounded mean obviated the need to standardize scores.) A category of “Also Notable” includes programs with a rounded mean of less than 3.0 but a median of at least 3.0: thus, programs in this “Also Notable” category are ones that at least half of the evaluators rated as “Good” or better.
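To make the placement rules concrete, here is a minimal sketch in Python of the logic just described. The function names and scores are illustrative only, and the tie-breaking direction for rounding (e.g., whether a mean of 3.25 rounds up to 3.5) is an assumption on my part, since the report does not specify it.

    import math
    from statistics import mean, median

    def round_to_half(x):
        # Round to the nearest 0.5; ties round upward (assumption: the
        # report does not say which way a mean like 3.25 rounds).
        return math.floor(x * 2 + 0.5) / 2

    def place_program(scores):
        """Classify a program from its evaluators' scores per the rules above."""
        rm = round_to_half(mean(scores))
        med = median(scores)
        if rm >= 3.0:
            # Listed in the grouping for rounded mean rm, with the
            # median shown in parentheses after the program's name.
            return ("grouped", rm, med)
        if med >= 3.0:
            # Rounded mean below 3.0, but at least half the raters
            # scored the program "Good" or better.
            return ("also notable", rm, med)
        return None  # not listed at all

    # Hypothetical scores, not actual survey data:
    print(place_program([4, 4, 3, 3]))  # ('grouped', 3.5, 3.5)
    print(place_program([3, 3, 3, 1]))  # ('also notable', 2.5, 3.0)

The second example shows why the “Also Notable” category exists: a single low rating can pull a small program's mean below 3.0 even when most evaluators scored it “Good.”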
Following some of the listings, the Board has chosen to mention other faculties notable in that specialty that were not included in the faculty quality survey (usually because the overall faculty was unlikely to have ranked in the U.S. top 50, the U.K. top 15, etc.).
The primary purpose of the specialty rankings is to identify programs in particular fields that a student should investigate for himself or herself. Because of the relatively small number of raters in each specialization, students are urged not to assign much weight to small differences (e.g., being in Group 2 versus Group 3). More evaluators in the pool might well have shifted a program's rounded mean by .5 in either direction; this is especially likely where the median score diverges from the group's rounded mean. Also bear in mind that, in general, programs with more faculty specializing in an area tended to be rated more highly than those with just one philosopher in the field.
A note about two special cases. First, evaluators were asked to evaluate the University of London faculties as a whole, even though the individual colleges run separate admissions procedures. There is, however, a good deal of cooperation and interaction among the faculties and their graduate students, so it seemed useful for students to have information about how the whole is evaluated. In the groupings below, though, we list only the colleges, noting, where appropriate, the score of the aggregated faculties at the end. Second, the University of St. Andrews/University of Stirling Joint Program is listed only in those cases where its combined rating is higher than the rating for St. Andrews considered by itself.
It is worth noting that the results were checked for evidence of strategic voting; there was none. Evaluators were admirably responsible and honest in their assessments, and there was a fairly high level of consensus among the evaluators who completed the surveys about the strengths of the faculties.
Remember: evaluators were not permitted to evaluate either their own department or the department from which they received their highest degree (PhD, DPhil, sometimes the BPhil).
EPISTEMOLOGY
Group 1 (1) (mean of 5.0)
Rutgers University, New Brunswick (5.0)
Group 2 (2) (mean of 4.5)
Oxford University (4.5)
Group 3 (3-6) (mean of 4.0)
Brown University (4.0)
New York University (4.0)
Princeton University (4.0)
University of Notre Dame (4.0)
Group 4 (7-17) (mean of 3.5)
Arizona State University (3.0)
Columbia University (3.5)
University of Arizona (3.5)
University of California, Berkeley (3.5)
University of Massachusetts, Amherst (3.5)
University of North Carolina, Chapel Hill (3.5)
University of Pittsburgh (4.0)
University of Rochester (3.5)
University of St. Andrews/University of Stirling Joint Program (3.5)
University of Washington, Seattle (3.5)
Yale University (3.5)
Group 5 (18-31) (mean of 3.0)
Australian National University (3.5)
Cambridge University (3.0)
Cornell University (3.25)
Indiana University, Bloomington (3.0)
Johns Hopkins University (3.0)
University College London (3.0)
University of California, Los Angeles (3.0)
University of Iowa (3.0)
University of Michigan, Ann Arbor (2.5)
University of Missouri, Columbia (3.0)
University of Oklahoma, Norman (3.0)
University of St. Andrews (3.0)
University of Texas, Austin (3.0)
University of Wisconsin, Madison (3.0)
Also Notable (median of 3.0): Stanford University
In addition, the aggregated faculties of the colleges making up the University of London received a rounded mean score of 3.5 and a median score of 3.75.
In the judgment of the Advisory Board, the following programs that were not part of the survey ought to be considered by students interested in this area: Fordham University; Loyola University, Chicago.
Evaluators: Brad Armendt, David Christensen, Juan Comesana, Earl Conee, Tom Crisp, Jonathan Dancy, Keith DeRose, Richard Feldman, Bryan Frances, Tamar Gendler, Brie Gertler, Anthony Gillies, Alvin Goldman, Delia Graff, Patrick Greenough, Anil Gupta, Gilbert Harman, John Hawthorne, Christopher Hookway, Robin Jeshion, Douglas Jesseph, Peter Klein, Hilary Kornblith, Jonathan Kvanvig, Jennifer Lackey, Brian McLaughlin, Cheryl Misak, Ram Neta, George Pappas, Peter Poellner, Duncan Pritchard, Jonathan Schaffer, Brian Skyrms, David Sosa, Ernest Sosa, Robert Stern, Stephen Stich, J.D. Trout, Ted Warfield, Brian Weatherson, Ralph Wedgwood, Timothy Williamson.