One of the biggest problems, according to researchers at Oregon Health & Science University, is a lack of basic instructions for duplicating experiments. Their study, published in the journal PeerJ in 2013, examined the methods sections of several hundred articles from more than 80 journals and found that almost half fell short in identifying all of the materials used. The researchers also noted that methods sections followed no standard guidelines, varied from one journal to the next, and were often constrained by space limitations.
Ellis notes another hurdle to replication: the failure to include negative data in papers. Journals are reluctant to publish failed experiments, but knowing that an experiment sometimes failed, and why, could help other researchers when they run into trouble.
Several prominent journals, including Nature, Science, and Science Translational Medicine, are now adopting guidelines to ensure the disclosure of all technical and statistical information crucial for reproducibility. Nature now provides more space for methods information and requires more precise details from authors. And to publish in Science, senior authors must sign off on a paper’s primary conclusions. The peer review process is also being scrutinized with the aim of “increasing transparency,” particularly in analyzing researchers’ statistical measures, says Meagan Phelan, a spokesperson for the American Association for the Advancement of Science, which publishes Science.
Meanwhile, the Reproducibility Initiative has received $1.3 million in funding from the Laura and John Arnold Foundation to replicate key findings from 50 landmark cancer biology studies. The foundation is also financing a related effort, the Reproducibility Project, which Brian Nosek helped establish. That project is bringing together more than 180 academic psychologists, through a network called the Center for Open Science, to replicate 100 papers published in three prominent journals.