I print, in its entirety, a response I received from Simona Bizzozero, the Head of Public Relations at QS. I intersperse her response with some comments and corrections of my own, and then conclude with some questions that are still unanswered:
Dear Professor Leiter,
In response to the concerns that you raise about the QS academic reputation survey, we would like to clarify several serious inaccuracies in the piece. While we respect your right to disagree with the measures we employ to compile our rankings, we feel it is important that you base your conclusions on the correct information.
As we will see, there were no "serious" inaccuracies, just a single minor one, to which we will return shortly. In fact, the main allegations of my correspondents have now been confirmed by QS.
First, some background: The QS academic reputation survey is one of six indicators used to compile the QS World University Rankings, and also feeds into various other QS research outputs and rankings. In 2012 over 46,000 valid responses worldwide were considered, making it the largest survey of its kind in the world.
QS's conception of a "valid" response will become clear shortly.
In May this year, QS became the first compiler of global and regional rankings to be awarded the “IREG Approved” label for three of its research outputs (including the QS World University Rankings). This followed a comprehensive independent audit of the methodology and all data collection processes (including the academic reputation survey), to which QS voluntarily submitted.
What is IREG, you ask? Here is its Executive Committee; it includes Bob Morse, who produces the notorious U.S. News rankings. Mr. Morse is a journalist turned editor, with no qualifications as an expert on higher education, statistical methods, or anything else one might think relevant. The presence on the Executive Committee of Mr. Morse, and of several others involved in the business of world university rankings, raises doubts about the integrity of IREG--doubts made worse when one realizes that U.S. News now reprints the QS rankings under its own brand! In other words, the QS rankings are "approved" by an organization whose executive committee includes an editor who profits off the QS rankings.
The purpose of the audit, conducted by independent experts, was to verify that the rankings are compiled professionally and with a transparent methodology. Successful organisations also need to show that they observe good practices and respond to a need for relevant information from a range of stakeholders, in particular students, higher education institutions, employers and policy makers.
The QS World University Rankings were found to satisfy all of these criteria. We therefore feel justified in querying the use of the term “shady” in relation to our rankings, and also in pointing out that your claim that our rankings are “a fraud upon the public” is contradicted by the established facts.
So far, there are no "established facts" at all. There is an alleged audit by an organization that is not at all "independent" of QS claiming that the QS rankings "are compiled professionally and with a transparent methodology." Until the audit is published, and the independent "experts" named, this is all public relations, and nothing more. In fact, as we consider some examples of the complete lack of relevant transparency in the QS methodology, this will just raise further questions about IREG, which looks to me, at this point, like a front organization for legitimating bogus ranking systems.
It is important to note the following basic facts regarding the QS academic reputation survey:
- Only individuals currently employed as academics at recognized universities are invited to participate in the survey
- The identity of all respondents is verified to ensure that they meet the above requirements
- Academics are asked to identify the leading universities for research within the academic field and region in which they declare themselves to be experts; they are also asked to name the institutions that they regard as the best in the world for research in their field
- Respondents are asked to identify up to 10 institutions nationally, and 30 institutions internationally
- Respondents cannot vote for their own institution
- QS publishes a detailed breakdown of the country, region, subject specialization, job classification and number of years in academia of all respondents: http://www.iu.qs.com/university-rankings/academic-survey-responses/

The description of the survey made by the anonymous philosopher quoted in your blogpost is inaccurate, and the number of universities he/she reports having been asked to indicate does not correspond with what we ask in our survey.
The inaccuracy concerns only the number of institutions the philosopher was asked to name; in every other respect that account is accurate. And, in the interim, I have filled out the QS surveys myself (someone e-mailed them to me), and so I can confirm the accuracy of this description. But the description fails to make clear how bizarre this methodology is: you are asked to recall, off the top of your head, the ten to thirty institutions you consider strong in your field. You are given no information at all about any institution--this is just a measure of the "halo effect" of university names that happen to stick in one's memory. (And if you consider there to be no difference in quality between the 10th best domestic institution in your field and the 13th or 14th best, there's no way to register that.) And, to make matters worse, it's clear that the results are determined by the stupidest evaluators: i.e., the ones who forget, say, to mention Yale as one of the top ten "arts and humanities" universities in the U.S. (As a curious sidenote, U.S. News used to employ a similarly absurd reputational survey methodology in its law school rankings in the 1990s, until it realized that the rank of law schools was determined by the person who forgot to rank Stanford or Columbia in the "top quartile" of American law schools. The methodology that U.S. News repudiated 15 years ago is essentially the same as the survey methodology QS uses today!)
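QS has not published the formula by which these recalled lists are turned into scores, but on the natural assumption that an institution's score is simply a tally of how many respondents happened to mention it, a short simulation makes the "stupidest evaluator" problem vivid. Everything below--the department names, the forget rate, the respondent count--is hypothetical:

```python
import random

# A minimal sketch, not QS's actual (unpublished) method: assume each
# institution's score is just the number of respondents who happen to
# recall it among their "top 10".

random.seed(0)

# A hypothetical field with 30 departments; lower index = genuinely stronger.
departments = [f"Dept {i:02d}" for i in range(30)]

def recalled_top_ten(forget_rate=0.1):
    """Walk down the true quality ordering, 'forgetting' each name with
    some probability, and stop once ten departments have been recalled."""
    ballot = []
    for dept in departments:
        if random.random() > forget_rate:
            ballot.append(dept)
        if len(ballot) == 10:
            break
    return ballot

mentions = {d: 0 for d in departments}
for _ in range(460):  # a plausible worldwide sample size for one field
    for dept in recalled_top_ten():
        mentions[dept] += 1

for dept in sorted(departments, key=mentions.get, reverse=True)[:12]:
    print(dept, mentions[dept])
```

On this toy model the ten genuinely strongest departments all finish within a few mentions of one another, and which of them comes out "first" is decided by nothing but who forgot whom.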
More problematic is that QS does not, in fact, provide a detailed breakdown of the response rates or geographic distribution. QS provides some aggregate data, but from the link above one cannot determine, for example: (1) what percentage of respondents were in philosophy; (2) what the geographic distribution of the philosophy respondents was; or (3) how the philosophy respondents were distributed by seniority, by the academic institutions at which they work, or by the sub-fields of philosophy in which they work. This information, especially #2, is absolutely crucial to interpreting the results since, for obvious reasons, respondents in Anglophone countries will be more likely to name Anglophone departments, while respondents elsewhere will be more likely to name departments in their parts of the world. All we know from the "breakdown" QS discloses is the percentage of respondents from ten fields (8.6% of respondents were in Biological Sciences at the high end, 5% in Economics at the low end). The ten fields listed account for almost 70% of all respondents! It's a reasonable guess, then, that maybe 1% of respondents were in philosophy--that would be 460 philosophy responses total from around the world. How many of those were from the US? How many from Europe? How many from Asia? There's no way to tell from what's online. So much for transparency.
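For concreteness, here is that back-of-the-envelope estimate spelled out (the 1% share is my guess; QS discloses no such figure):

```python
# QS's disclosed aggregates: ~46,000 valid responses in 2012, with the
# ten listed fields accounting for almost 70% of them. If philosophy's
# share of the remainder is around 1% (an assumption, not a QS figure):
total_responses = 46_000
assumed_philosophy_share = 0.01
print(int(total_responses * assumed_philosophy_share))  # -> 460 worldwide
```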
Back now to the response from Ms. Bizzozero:
Last year QS ran a very small one-off campaign targeting US College/University professors. The objective was to collect more informed opinions from such a key community. QS utilized a reputable company which specializes in surveying niche target groups. The responses received from such a campaign were validated – as happens with all the responses we receive.
If the anonymous commentator mentioned in your blog did indeed take part in our survey (despite reporting incorrect information about it), his/her PhD in philosophy and full-time employment with a university would have made him/her a suitable respondent.
I take this to be an admission that QS did, indeed, as my correspondent reported, use a commercial website that pays people to fill out surveys! But we also now see that a "valid" response is one from someone with a PhD employed by a university. As a screening mechanism for useful responses, that does not seem a very discerning one. It's also hard to square with the "breakdown," above, which shows that some number of respondents to the academic reputation surveys were "administrators/functional managers," "admissions officers," "teaching assistants," and "librarian/library assistants." Did they only take responses from managers, teaching assistants and library assistants with PhDs?
We trust the vast majority of our respondents give us their unfettered opinion of the institutions they consider strongest in their field. We believe that academics typically place great value on their academic integrity.
The integrity of academics is not what is at issue here, obviously. No doubt most respondents try to respond as best they can to the prompt to name, off the top of their head, universities strong in their areas. The problem is with the integrity of what QS does with this hodgepodge of data.
Regarding the next case illustrated, it’s an invitation sent by a university staff member to peers at other universities, inviting academics to sign up for the survey. The QS Intelligence Unit checks every request to participate in the QS Global Academic Survey through the academic sign-up facility for validity and authenticity. Only those who have passed the screening process will be contacted.
So QS admits, as my correspondent claimed, that an assistant to the Rector of a Saudi Arabian science and engineering university was indeed soliciting philosophers in other Middle Eastern countries to complete the survey! (Why? Probably because they recognize that the more respondents there are from their region of the world, the better their university is likely to do.)
The purpose of the survey is to provide students with a sense of the consensus of informed academic opinion regarding the leading universities internationally, both overall and in a given discipline. There are of course other purposes for which different indicators of university performance might be more appropriate, but the QS academic reputation survey provides students with an important source of information that would not otherwise be available. Furthermore, numerous independent experts have verified its statistical validity. It is not a ‘fraud’ on anyone, merely a useful tool that helps students compare the academic reputation of universities internationally.
For all the reasons already noted, the survey does not provide "a sense of the consensus of informed academic opinion." Its results are an artifact of the halo effect of university names someone can recall off the top of their head, with the least-informed respondents having the biggest effect on the results; of the undisclosed geographic distribution of respondents; of the undisclosed qualification of an unknown number of respondents in each subject area; and of the meaningless aggregation of all this by QS. It is, indeed, a "fraud" on the public.
If you wish to engage in a meaningful debate with the team that compiles the QS rankings and pose them questions you’d like to see answered, I will pass them to the appropriate staff and/or to our Global Academics Advisory Board.
So far, this has been a very meaningful and informative debate, confirming the original doubts about what QS is doing. But herewith some more questions for QS; I will post any answers I receive:
1. Is it true that THES dropped QS as the source for its world university rankings? If so, why?
2. QS uses SciVerse Scopus for its citation data. Where can one find a list of all journals in the SciVerse Scopus database?
3. Will QS disclose the faculty lists it used for purposes of the various citation studies? How were those faculty lists compiled?
UPDATE: The conflicts of interest between IREG and QS get even better: Bob Morse is an official member of the "Global Academic Advisory Board" of QS (a Board made up mostly of non-academics, like Morse!). So the audit approving the QS rankings was conducted by an organization, IREG, whose executive committee includes an editor who not only publishes and profits from the QS rankings, but who is, himself, a member of the QS Advisory Board! That's some "independent" audit!
ANOTHER: Reader Michael Bramley writes: "I wondered if you would be interested to highlight or question the following statement from the QS PR dept in your latest blog post: 'Furthermore, numerous independent experts have verified its statistical validity.' You would think this means experts in the statistics surrounding these sorts of surveys etc. but who knows!" A good question. Richard Holmes writes with some other observations about the QS rankings:
[O]ne feature of the QS survey is that responses are recycled for three years if not overwritten in a later survey, so it is quite possible for someone who died after filling out the form to have his or her response recorded for another three years.
Another feature is that there is a substantial weighting given to cross-border responses but QS do not indicate whether high survey scores for some Asian universities, for example, are a result of international recognition or respondents adding a couple of local schools after writing down the real research leaders.
Then there are the other indicators. Something very odd happened to the employer review in 2011. The number of international students seems to have no relationship with quality of any kind.
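Holmes's recycling point is easy to make concrete. On his description (which QS has not confirmed in detail), each respondent's most recent ballot stays in the pool for up to three further years unless a newer one overwrites it. A minimal sketch under that assumption, with hypothetical respondents:

```python
def live_ballots(responses, current_year, window=3):
    """Keep each respondent's most recent ballot if it is no more than
    `window` years old -- the recycling rule as Holmes describes it
    (assumed, not confirmed by QS)."""
    latest = {}
    for respondent, year, ballot in responses:
        if current_year - year <= window:
            if respondent not in latest or year > latest[respondent][0]:
                latest[respondent] = (year, ballot)
    return {r: ballot for r, (year, ballot) in latest.items()}

responses = [
    ("a@uni.edu", 2012, ["Oxford", "Cambridge"]),  # respondent died in 2013
    ("b@uni.edu", 2014, ["MIT", "Caltech"]),
]
print(live_ballots(responses, 2015))
# Both ballots count in 2015, including the dead respondent's from 2012.
```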