The scientific method is a collection of techniques for investigating phenomena, based on systematically gathering empirical, measurable evidence to test a stated hypothesis. This methodology is designed to be objective and attracts people who aspire to uncover the true state of nature. Why, then, has science become so riddled with bias? Although the field is composed of critical thinkers devoted to truth-seeking, many scientists agree that the system is flawed. The question now is how to fix it.

For young scientists, the whole process of running experiments and publishing them often seems intriguing and fun. The challenge of outsmarting a colleague can be exciting, and the trust that there are bodies and procedures in place, such as anonymous peer review, to catch errors in methodology and interpretation encourages confidence in the system. Unfortunately, it seems that the more time one spends in academia, the more likely one is to become disappointed with that system. The prevalence of untrustworthy and biased science is troubling, ranging from minor data adjustment to extreme, and sometimes publicised, cases of fraud. Along with wasting resources on failed attempts to replicate previous findings and having potentially dangerous consequences, particularly in biomedical research, these incidents distract attention from the countless scientists dedicated to conducting objective and honest research.

An illustration of the growing concern about the current state of science is the fact that one of the most downloaded papers from the journal PLOS Medicine, with over 800,000 views, is an essay by John Ioannidis (2005) entitled "Why Most Published Research Findings Are False". Using simulations, Ioannidis argues that the majority of published research findings are more likely to be false than true. This is a frightening claim for anyone who understands that research is designed to build on what is already 'known', or rather what has been published. If the majority of what is published is in fact not true, then these false claims snowball into further misunderstanding of the true state of nature.
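To make the statistical logic concrete, here is a minimal sketch in Python of the positive predictive value (PPV) formula at the heart of Ioannidis's argument; the parameter values are hypothetical assumptions chosen for illustration, not figures taken from the paper.

```python
# Sketch of the core formula in Ioannidis (2005). PPV is the probability
# that a claimed (statistically significant) research finding is true.
#   R     - pre-study odds that a tested relationship is true
#   alpha - type I error rate (the significance threshold)
#   beta  - type II error rate (1 - statistical power)

def positive_predictive_value(R, alpha, beta):
    """PPV = (1 - beta) * R / (R - beta * R + alpha), per Ioannidis (2005)."""
    return (1 - beta) * R / (R - beta * R + alpha)

# Hypothetical field: pre-study odds of 1 to 10 that a tested relationship
# is true (R = 0.1), and studies are underpowered (power = 0.2, beta = 0.8).
ppv = positive_predictive_value(R=0.1, alpha=0.05, beta=0.8)
print(f"Probability a 'significant' finding is true: {ppv:.2f}")
# -> about 0.29: under these assumptions, most published findings are false.
```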

It is important to recognise that much of the bias is introduced by well-meaning individuals without malicious intentions. Blatant cases of fraud, in which even prominent researchers fabricate data, do exist, but they are extremely rare. Most bias is introduced unintentionally, during data collection or through selective or distorted reporting of results. The problem of misrepresented data, which results in the later inability to replicate published findings, appears to be so widespread that some of the fault must lie in a fundamental flaw in the scientific process.

The current system puts so much pressure on scientists to obtain statistically significant data that it may actually be indirectly encouraging bad science. A scientist's career and funding are almost entirely determined by what they publish, and in which journals. Compounded by the scarcity of academic jobs and research grants, this makes competition extremely fierce. When one's career depends on the outcome of experiments, it is understandable that a scientist is willing to do almost anything to make their results publishable, even while aware that allowing bias to creep in is unethical and shortsighted.

So what can be done? How can the system be improved in a way that encourages sound research methodology, instead of just rewarding 'significant' results? These issues are a hot topic of discussion among scientists, and several changes to the system have been proposed.

First, we must encourage journals to move away from the policy that only positive results are good enough for publication. If a series of experiments is conducted and interpreted in a methodologically sound way, a negative result can be just as informative as a positive one. The current policy feeds the problem known as the 'file-drawer effect', in which many studies are conducted but never reported because their outcome was negative. When only the studies showing positive results are published, the literature as a whole becomes biased in support of the hypotheses being tested, as the simulation sketched below illustrates. The effect also makes it difficult to know whether someone has already attempted a particular experiment, because the results may never have been published. Moving away from a positive-results criterion for publication would reduce this publication bias and allow the literature to provide a more accurate description of phenomena. Recently, some journals, such as the All Results Journals and F1000Research, have started not only to accept but to invite negative results. Nevertheless, more needs to be done to persuade journals that negative outcomes are as important to report as positive ones.
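As a hedged illustration of the file-drawer effect, the following sketch simulates many honest studies of the same small true effect and 'publishes' only those reaching statistical significance; every number in it (the effect size, sample size, and the crude z-test) is an assumption made for demonstration.

```python
import random
import statistics

# Hypothetical simulation of the 'file-drawer effect': many labs study the
# same small true effect, but only statistically significant results are
# published. All parameter values below are illustrative assumptions.

random.seed(1)
TRUE_EFFECT = 0.2   # small true effect, in standard-deviation units
N = 30              # participants per study
STUDIES = 1000      # number of labs running the same experiment

published = []
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if mean / se > 1.96:          # crude one-sided z-test
        published.append(mean)    # 'significant' -> journal
                                  # otherwise -> the file drawer

print(f"True effect: {TRUE_EFFECT}")
print(f"Published: {len(published)} of {STUDIES} studies")
print(f"Mean published effect: {statistics.mean(published):.2f}")  # inflated
```

Even though every simulated lab behaves honestly, selective publication alone roughly doubles the apparent effect size in the 'published' literature.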

Second, it is worth considering whether an online registry for experiments should be implemented in all fields, similar to the clinical trial registries used in medical research. In 2005, the International Committee of Medical Journal Editors decided that no clinical trial would be considered for publication unless it had been entered in such a registry. Requiring researchers to submit their experimental design and hypothesis to a central database before collecting data would increase transparency and act as a safeguard against selective reporting, helping to reduce the file-drawer problem. The database could also help researchers identify others who have already started working on a particular set of experiments, allowing them to form a collaboration or to turn to something novel. Extending this to every research field would be an extreme move, and would certainly burden researchers with an additional bureaucratic task, but a similar, simplified type of registry might be worth considering for some fields.

Third, all fields should consider using open-access archives for pre-publication papers, similar to those used in physics, mathematics, and astronomy. For example, arXiv.org provides a platform for scientists in certain fields to upload their research before it is published in a scientific journal. Online archives that provide free, worldwide access to their content ameliorate the file-drawer problem, because all results can be submitted regardless of the outcome. By reducing the pressure on scientists to obtain positive results, they may help to keep findings unbiased and objective. In some cases, influential papers have been published exclusively on an archive without ever appearing in a traditional journal. A potential downside is the lack of peer review, which is intended to serve as a safeguard against poor-quality papers; however, such archives invite the readership and wider community to comment on submitted research, which may overcome this problem. Once funding bodies and employers consider archive submissions comparable to journal publications, these open-access archives have the potential to replace traditional journals.

Finally, institutions should require students and employees to take seminars on research ethics and methodology. Ultimately, the source of the problem lies with the individual researcher. We can only benefit from open discussion of the importance of scientific integrity and of the issues that undermine the foundation of science. A more thorough understanding of, and respect for, research methodology and statistical data analysis would also be likely to reduce their misuse.

As a young scientist, it is hard not to become discouraged, but it is important not to lose sight of the fact that groundbreaking research continues to be done, producing robust scientific theories with huge impacts on our understanding of nature. There is a lot of exciting science going on, and numerous scientists are committed to the truth; the task now is to improve the system. Although that task is daunting, institutions including the University of Cambridge are starting to introduce policies aimed at correcting shortfalls in the system, such as requiring all research publications to be made freely available to the public. We must now keep the momentum going and find effective ways to reward sound scientific methodology rather than only positive results.

Brianne Kent is a third-year PhD student in the Department of Psychology