Comments on my 'How Many Universities?' post included interesting reflection on how many PhDs a subject like Philosophy should produce. The discussion concentrated on whether there is, or ought to be, a prospect of gainful employment for everyone with a PhD, and if not, whether it is irresponsible to encourage so many people into graduate work.
This is an instance of a broader question concerning waste and risk, although I realise that this might be thought a rather distasteful way of putting things. In the current context the question is this: given that it is unlikely that we will design a regime such that ALL AND ONLY the best philosophers do PhDs, should we aim at a regime where ONLY the best philosophers do PhDs, or one where ALL the best philosophers do PhDs? The former is a system in which we don't accept people unless we are pretty sure there will be a job for them, and thereby risk missing some really good ones whom we have failed to spot; the latter says that we should accept a larger group to make sure we don't miss anyone who turns out to be really good, even at the cost of taking on many unsuitable people. One might argue that the latter - many more PhDs - is better for the advancement of the subject, while the former is better for the individuals involved. This is why, I think, it is not an easy issue to resolve.
In research terms, the issue comes to this: should we make sure that we don't waste money, funding only realistic, deliverable projects and thereby denying funding to more speculative work which carries a serious risk of failure but might bring greater rewards, or should we be prepared to risk wasting money in order to aim at really valuable results? I have written on what I think is the overly conservative approach taken by the UK Arts and Humanities Research Council. A couple of days ago my UCL colleague in Science and Technology Studies, Donald Gillies, sent me a paper arguing at length a similar criticism of the Research Assessment Exercise. The essence of Gillies' argument is:
Statistical tests are said to be liable to two types of error (Type I and Type II). A Type I error occurs if the test leads to the rejection of a hypothesis which is in fact true. A Type II error occurs if the test leads to the confirmation of a hypothesis which is in fact false. Analogously, we could say that a research assessment procedure commits a Type I error if it leads to funding being withdrawn from a researcher or research programme which would have obtained excellent results had it been continued. A research assessment procedure commits a Type II error if it leads to funding being continued for a researcher or research programme which obtains no good results however long it goes on. This distinction leads to the following general criticism of the RAE. The RAE concentrates exclusively on eliminating Type II errors. The idea behind the RAE is to make research more cost effective by withdrawing funds from bad researchers and giving them to good researchers. No thought is devoted to the possibility of making a Type I error, that is, the error of withdrawing funding from researchers who would have made important advances if their research had been supported. Yet the history of science shows that Type I errors are much more serious than Type II errors.
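The trade-off Gillies describes can be illustrated with a toy simulation (a sketch of my own, not from his paper; all the numbers here - the share of excellent researchers, the noise in assessment scores, the funding thresholds - are invented for illustration). Each simulated researcher is or is not genuinely excellent, an assessment produces a noisy score, and funding is withdrawn below a cut-off. Tightening the cut-off reduces Type II errors only by increasing Type I errors:

```python
import random

random.seed(0)

def simulate(threshold, n=100_000, p_excellent=0.1):
    """Return (Type I rate, Type II rate) for a given funding threshold."""
    type_i = type_ii = 0
    for _ in range(n):
        excellent = random.random() < p_excellent
        # Noisy assessment score: excellent researchers score higher on average,
        # but the distributions overlap, so no threshold separates them cleanly.
        score = random.gauss(1.0 if excellent else 0.0, 1.0)
        funded = score >= threshold
        if excellent and not funded:
            type_i += 1   # defunded a researcher who would have done excellent work
        elif not excellent and funded:
            type_ii += 1  # kept funding a researcher who never will
    return type_i / n, type_ii / n

for threshold in (-0.5, 0.0, 0.5, 1.0):
    t1, t2 = simulate(threshold)
    print(f"threshold={threshold:+.1f}  Type I rate={t1:.3f}  Type II rate={t2:.3f}")
```

As long as the assessment is noisy, the two error rates move in opposite directions as the threshold varies, so a procedure tuned solely to eliminate Type II errors is necessarily accepting more Type I errors.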
Gillies' paper is available here.