THURSDAY, 22 MAY 2014

Publicly funded research in the UK is generally structured according to the Haldane principle: the assumption that researchers themselves are best positioned to decide which research to fund. On one level this seems fairly intuitive: experts in their fields are best equipped to judge which research is most promising. On another level, it is arguably a bit odd and undemocratic to give control of significant sums of taxpayers’ money to a small, unelected, and unrepresentative group of people. Experts may select promising research, but it may not be the line of research that the electorate most want pursued. There are some very interesting issues here arising from the inherent tension between democracy, which should give everyone an equal voice, and expertise, which privileges a very few. I won’t go into them here, but I recommend Philip Kitcher’s book Science, Truth, and Democracy for those who want to read more.
The Haldane principle is embodied in the UK Research Councils: seven publicly funded agencies which co-ordinate funding in their particular areas, and between them cover the arts, humanities, social sciences, science, engineering, and medicine. The councils receive their funding from the Department for Business, Innovation and Skills, so the quantity of funding is determined by government, but the research councils are independent and in full control of its allocation. Research proposals are judged by peer review committees established by the councils.
Such a system prioritises research excellence rather than any particular line of research. The resultant research should be diverse but of uniformly high quality – exactly the kind of publicly funded research environment we would want to see if we fully accept the arguments in my previous post. But there is a little more to it than that. I’ve just popped on to the website of the Science and Technology Facilities Council and had a look at its assessment criteria for research proposals. Right at the top is scientific excellence. Also included are the productivity of the investigator, the productivity of grant-supported staff, quality of leadership, and suitability of the institution – all things which predict the quality of the output. But there are also a few other criteria, including potential for economic impact and the quality of the ‘pathways to impact’ section. These criteria favour a narrower set of research: that which can prove its economic and social worth.
This trend is replicated in the Research Excellence Framework (REF), the new system for assessing the research output of higher education institutions, which is being carried out this year. A significant part of this assessment will be based on impact and on the journals in which research is published. However, journal impact factors are crude and misrepresentative metrics, and they favour research which fits within established disciplinary boundaries and norms (for a good article against the use of these impact factors, with links to other articles, see here). Hence interdisciplinary research, and potentially novel lines of research within disciplines, may be suppressed. Yet it is these types of research which often hold the greatest potential for genuinely novel solutions and technologies, and so it is these types of research which may drive continued economic innovation and social benefit.
The Higher Education Funding Council for England has just launched a review into research metrics. It’s led by James Wilsdon, who is not a fan of crude impact factors. It will be interesting to see what the review comes up with. I do think some metrics are needed. Occasionally I’ve heard comments in the academic community along the lines of ‘Why does research need a point? Can it not have an inherent value?’ Such arguments frustrate me. Of course research can have an inherent value, but researchers should remember that they are funded by the British public, and to disregard that entirely is rather narrow-minded and elitist. Not every piece of research needs to produce a commercial application or a cure for cancer, but neither should research simply be someone’s taxpayer-funded hobby. I believe that excellence and diversity are probably the means to ensuring the optimum long-term benefit to society from publicly funded science, but the truth is that the empirical evidence by which to judge this claim is lacking. Expert opinion is not infallible; in conjunction with suitable metrics, it will provide a far more accountable and responsive system for funding allocation.
Any new metrics need to be nuanced and applied carefully. I’ve been at a couple of talks and discussion groups recently where issues of publication bias, replicability, and scientific fraud have been raised. The general consensus seems to be that the growing pressure to publish in high-impact journals has led to negative results going unreported, and to experiments not being repeated and checked, because this sort of research rarely makes it into the top journals. Additionally, the top journals tend to publish research findings with large effect sizes, while smaller effect sizes in the same field are consigned to lesser-read journals. In extreme cases, the pressure to publish can lead to fraud. There is an ongoing investigation into a prominent paper in Nature, since withdrawn, which claimed that cells can be reverted to pluripotency by being immersed in acid, and there have been several high-profile cases of results being manipulated in social psychology recently. These trends suggest an even more worrying problem with crude economic and impact metrics than a resulting lack of diversity: a distortion of published scientific results away from an accurate description of nature. This is clearly bad news, but at least these issues are being widely discussed. Such conversations have encouraged recognition of the need for change, and I look forward to seeing what new frameworks and metrics are proposed. Whatever they are, they must recognise the diversity of beneficial impacts across short and long timescales, they must not distort scientific practice itself, and they must balance the need for heterogeneous and risky science with an obligation to serve the public. For me, science is thrilling not only because it offers a means to produce accurate knowledge of nature, but also because this growing knowledge can benefit society. Let’s make sure that this remains the case.