Reproducibility in Today's Science
May 1, 2014


Recent advances in science have led to longer lives, better health care, and healthier societies. Science develops through trial and error. However, irreproducibility remains a problem, driven by bias, intense competition, a lack of appropriate statistical analysis, and a preference for publishing novel, positive findings over negative or confirmatory results.

Irreproducibility issue

One of the pillars of scientific work is reproducibility. Repeated demonstrations, across a statistically meaningful number of experiments, that a finding holds under defined conditions consolidate its acceptance by the scientific community. Positive findings often must be reproduced by different scientists and laboratories before they are accepted as scientific fact. This does not mean, however, that findings are completely discredited when they cannot be reproduced; each piece of research has value in its own right.

Scientists often lack the means to repeat somebody else’s work, given the large amounts of effort and money required. In practice, findings are usually tested by industries that stand to gain commercially from them, and that testing often comes at a price when the research proves irreproducible. Bruce Booth, a venture capitalist, states that at least half of the findings in academic publications cannot be reproduced in industry settings. This may even be an underestimate, given recent reports from different biotech companies.

According to various reports, the reproducibility of published articles in fields such as biomedical science is as low as 10-30 percent. Amgen, a biotech company with a team of around 100 scientists, tried to reproduce the results of 53 key cancer research articles in top journals and found that only 6 were reproducible (11% reproducibility). Bayer’s scientists found that only 14 of 67 projects related to oncology, cardiovascular medicine, and women’s health were reproducible (21% reproducibility).

Another example is the PsychFileDrawer project, devoted to replication attempts in experimental psychology, where only 6 of 21 articles were reproducible (28% reproducibility). In addition, a study attempting to reproduce the findings of 18 gene microarray analysis articles published in Nature Genetics largely failed (Ioannidis et al., 2009). Similarly, when identical protein samples were sent to different laboratories, as described by Bell et al. (2009), those labs failed to reproduce the initial findings. Dr. Asadullah, Vice President and Head of Target Discovery at Bayer, points to the rising rate of failures in Phase III trials, which also suggests a serious problem in the way research is vetted and published.
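For concreteness, the reproducibility rates above follow directly from the reported counts. The short Python sketch below simply redoes that arithmetic (the counts are the ones quoted above; rates are shown to one decimal place, so they differ slightly from the rounded figures in the text):

    # Reproducibility rates implied by the counts quoted above.
    reports = {
        "Amgen (cancer biology)": (6, 53),
        "Bayer (oncology, cardiovascular, women's health)": (14, 67),
        "PsychFileDrawer (experimental psychology)": (6, 21),
    }

    for name, (reproduced, attempted) in reports.items():
        rate = 100 * reproduced / attempted
        print(f"{name}: {reproduced}/{attempted} = {rate:.1f}% reproducible")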

These reproducibility figures raise an important question about the system of scientific research. If they are accurate, 70-90% of the money spent on such research may produce nothing reproducible. It is intriguing that although the technologies used for scientific discovery have improved greatly, which should make data more precise and rigorous, the opposite is happening. This suggests that the issue lies not with the technology but with the attitude and atmosphere of the larger scientific community.

Table 1: Major issues regarding today’s science

Issue | Comments
How much of science is reproducible? | Some studies have shown that only 10-30% of articles in certain fields can be reproduced.
How much of science is reliable? | It is only as reliable as scientists’ interpretation of their results.
Does publishing in high-impact journals make it reliable? | Not always. There are many retractions from high-impact journals (a 10-fold increase in the last 10 years).
Do claims from 2 (and sometimes up to 10) different publications or groups make it reliable? | Not always. It depends on how independent the groups are, whether they wish to avoid making enemies, and whether they share similar biases.


Publish or perish

A scientific publication is not only a tool for transferring knowledge between scientists but also the basis for promotion and the awarding of grants. A rule of thumb in scientific communities for securing research money is “Publish or Perish”: publish a high-impact paper and win a grant, or lose the ability to compete with other labs.

The bar for success in the scientific community is set very high, forcing fledgling scientists to publish quickly and in high-impact journals. Remarkably, in the major science journals there has been a 10-fold increase in retracted articles over the last ten years, while the number of journals published has grown only 1.4-fold (Fang et al., 2012).

Selective publication issue

Selecting only the best figures for publication and interpreting data selectively can also introduce incorrect results into the scientific literature. A lack of blinding in basic-science experiments plays a role as well: researchers who design their experiments and know which groups are controls and which are experimental, and who want their hypothesis to be correct, may interpret their data in favor of their idea.

Dr. Begley, head of global cancer research at Amgen, tells of a case in which his team failed to reproduce the findings of a paper even after 50 attempts. Finally, they approached the article’s author to discuss the problem. Dr. Begley learned that the authors had run the experiment only six times and that it had worked only once, but they published that result because it made the best story. This may be an extreme case, but similar selective publication of “good data” is not uncommon.

Scientists are under a great deal of pressure to publish the “best story,” both to advance their careers and to satisfy their competitive ambitions. This is an increasingly important issue, and it underscores the need to verify others’ results and to publish findings that contradict previous publications.

Lack of incentive for verification

Most journals want authors to publish novel, positive findings and disregard negative findings and replication studies. A few journals do publish negative results, such as the Journal of Negative Results in Biomedicine, the Journal of Pharmaceutical Negative Results, and the Journal of Interesting Negative Results. But publishing negative findings remains a low priority for most publications, which seek new and exciting discoveries to attract attention. A renewed focus on verifying existing science and publishing negative results would create a more reliable scientific literature.

Another issue is the lack of incentives for verification. In the current hypercompetitive environment, the easiest route to funding is novelty and publication in high-impact journals, which leaves little or no room for verification. Under these conditions, many scientists write up their observations and seek publication with little concern for reproducibility. Why would they check whether a result is wrong? If it is, funding only becomes harder to get.

But is this proper scientific conduct? Given these issues within the community, how can we be certain that published research and its findings are sound?

Given the potential amount of misinformation in the scientific literature, we need a scientific renaissance. As Dr. Ioannidis has suggested, we could establish a reward system, in funding or recognition, for work that improves reproducibility and data robustness and that makes data and protocols more public. Other approaches to increasing reliability could include lowering the requirements for obtaining grants and adding a reproducibility parameter to the citation index.
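To make the last suggestion concrete, one could imagine discounting a paper’s raw citation count by its observed replication rate. The sketch below is purely hypothetical: neither the formula nor its parameters come from the article, and the neutral default for unverified work is an assumption.

    # Hypothetical reproducibility-weighted citation score.
    # The formula, field names, and the neutral 0.5 default are
    # illustrative assumptions, not something the article specifies.

    def weighted_score(citations: int, successes: int, attempts: int) -> float:
        """Discount raw citations by the observed replication rate."""
        # With no replication attempts, fall back to a neutral 0.5 so
        # unverified work is neither rewarded nor fully penalized.
        rate = successes / attempts if attempts > 0 else 0.5
        return citations * rate

    # A paper replicated 4 times out of 5 keeps most of its weight;
    # one that failed all 3 attempts keeps none.
    print(weighted_score(citations=200, successes=4, attempts=5))  # 160.0
    print(weighted_score(citations=200, successes=0, attempts=3))  # 0.0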

Toprak is a freelance writer living in Texas and studying genetics.

References

  1. In cancer science, many "discoveries" don't hold up. Reuters. http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328 (retrieved 4/28/13).
  2. Reliability of ‘new drug target’ claims called into question. Nature News Blog. http://blogs.nature.com/news/2011/09/reliability_of_new_drug_target.html (retrieved 4/28/13).
  3. PsychFileDrawer. http://www.psychfiledrawer.org/view_article_list.php (retrieved 4/28/13).
  4. Arrowsmith J (2011) Trial watch: Phase III and submission failures: 2007–2010. Nature Rev Drug Discov 10: 87.
  5. Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124
  6. Ioannidis JPA, et al. (2009) Repeatability of published microarray gene expression analyses. Nature Genet 41: 149–155.
  7. Fang FC, Steen RG, Casadevall A (2012) Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci USA 109: 17028–17033. doi:10.1073/pnas.1212247109
  8. Bell et al. (2009) A HUPO test sample study reveals common problems in mass spectrometry–based proteomics. Nature Methods 6: 423–430.
  9. Booth B (2011) Academic bias & biotech failures. http://lifescivc.com/2011/03/academic-bias-biotech-failures/#0_undefined,0 (retrieved 4/28/13).