I do not recall any national association of scholars reacting this way to prior NRC reports (though, of course, prior ones were straightforward surveys of experts evaluating faculty and program quality). Here's an excerpt from the CRA statement:
CRA has serious concerns about the accuracy and consistency of the data being used in the evaluation of the Computer Science discipline.
CRA has identified a number of instances in which data were reported under different assumptions by institutions, leading to inconsistent interpretation of the associated statistical factors.
CRA has further identified a number of instances where the data is demonstrably incorrect - sometimes very substantially - or incorrectly measures the intended component.
CRA is pleased that the NRC acknowledges there are errors in the data used to evaluate computer science departments and that, in the words of NRC Study Director Charlotte Kuh, “There’s lots more we need to look at for computer science before we really get it right.”
CRA will continue to work closely with its member departments and the NRC to help correct these errors and determine more suitable data sources for the evaluation.
What are the odds that these problems are limited to computer science? More strikingly, Ms. Kuh, who apparently bears primary responsibility for this baroque and senseless methodology, acknowledges that the NRC's computer science rankings are worthless.