Every few months, the media catches wind of a new scientific discovery and headlines everywhere pronounce that the world’s problems have been solved. A cure for autism, cancer, depression, or some other malady: the proposed results sound promising and exciting, but upon reading further into the literature, the implications of the data become hazier and hazier. Maybe a certain drug was found to work in mice but not in any other animal. Or a trend was seen in observational studies, but the sample size was too small for the result to be statistically significant. Perhaps data generated by one group could not be reproduced by others. Another historically common mistake occurs when the cell lines researchers think they are working with turn out to have been contaminated by HeLa cancer cells long ago, a problem usually discovered only much later. When such an event occurs, other academic researchers typically do not pick up the work, and as the splashy proclamations slowly fade away, both monetary funds and precious time are wasted. In more serious cases, results are fabricated or doctored, and this type of misconduct is not always caught by peer review. A notorious example is the stem cell scandal of January 2014, when a researcher claimed to ‘convert mouse cells to an embryonic-like state’ in a Nature paper later found to be partially plagiarized and to contain manipulated figures. Even though the findings ran contrary to prevailing scientific understanding, many scientists admitted to being biased toward believing them simply because of the reputation of the paper’s authors. The event stimulated worldwide discussion of research integrity and how to prevent such a catastrophe from recurring. It should be noted, however, that research integrity violations continue to occur under the radar and, more importantly, that many are unintentional.
As researchers, we work constantly to optimize our protocols; whether we are tuning delicate instruments that require endless tinkering (e.g., a mass spectrometer) or figuring out the best growth conditions for bacteria or mammalian cells, we are always trying to make our experiments better. But more often than not, each research group has its own protocols and techniques, even among groups that perform similar experiments. The variation between methods and protocols can be small or large, but it can be said with certainty that no two labs share exactly the same research environment, and to ensure that the data produced are not specific to one group, research as a whole needs to be optimized. Publications are at a high point, with 25,000,000 papers published between 1996 and 2011 alone, but the scientific value of those papers is being called into question. In his 2005 essay ‘Why Most Published Research Findings Are False,’ epidemiologist John Ioannidis examined the statistical probability that a published finding is valid and concluded that “for most study designs and settings, it is more likely for a research claim to be false than true.”
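The core of that argument is a simple positive-predictive-value calculation: the chance that a nominally significant finding is actually true depends not only on the significance threshold, but also on statistical power and on how plausible the hypothesis was before the study. The sketch below illustrates that arithmetic; the parameter values are assumptions chosen for illustration, not figures taken from Ioannidis’ essay.

```python
# Illustrative sketch: why a "statistically significant" finding can still be
# more likely false than true. The post-study probability that a claimed
# finding is true (PPV) depends on the pre-study odds R, the false-positive
# rate alpha, and the statistical power (1 - beta).
# The scenarios below are illustrative assumptions, not Ioannidis' own numbers.

def ppv(pre_study_odds: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Probability that a nominally significant finding is actually true."""
    beta = 1.0 - power
    return (power * pre_study_odds) / (pre_study_odds - beta * pre_study_odds + alpha)

# A well-powered study testing a fairly plausible hypothesis (1:1 pre-study odds)
print(f"plausible, well-powered:  {ppv(1.0, power=0.8):.2f}")   # ~0.94
# An underpowered, exploratory study testing a long shot (1:20 pre-study odds)
print(f"long shot, underpowered: {ppv(0.05, power=0.2):.2f}")   # ~0.17
```

In the second scenario, fewer than one in five “significant” results would reflect a true effect, which is the kind of setting Ioannidis has in mind when he says a research claim is more likely to be false than true.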
To help combat this, Ioannidis and other researchers recently founded the Meta-Research Innovation Center at Stanford (METRICS), which studies how research is performed in order to improve the quality of scientific results and publications. In his recent essay ‘How to Make More Published Research True,’ Ioannidis states that “many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted.” He then proposes practices intended to reduce false and exaggerated findings, including “large-scale collaborative research; adoption of replication culture; registration of studies, protocols, analysis codes, etc.; reproducibility practices; more appropriate statistical methods; more stringent thresholds for claiming discoveries or ‘successes.’” Notably, Ioannidis also suggests that the scientific reward system should be modified. The value of a scientist is often measured by publications (plus subsequent citations) and grants, and to obtain both, researchers are encouraged to publish novel, groundbreaking work rather than focus on optimizing or replicating previously obtained results. In short, biomedical research needs a facelift. Hopefully, the launch of METRICS, along with the Reproducibility Index and new National Institutes of Health (NIH) policy, will get us there.