Complicating debates about the reasons for low rates of replication is uncertainty about the magnitude of the problem. In a 2005 essay in Public Library of Science (PLOS) Medicine, John Ioannidis, an epidemiologist and professor at Stanford School of Medicine, argued that most published research findings are false, and used statistical models to underscore issues with how studies are conceived and designed. In 2009, Ioannidis and colleagues zeroed in on the repeatability of 18 studies of gene expression published in Nature Genetics in 2005 and 2006. Insufficient data made replication impossible for 16 of the papers.
In 2011, German pharmaceutical company Bayer HealthCare reported in the journal Nature Reviews Drug Discovery that its scientists had been unable to reproduce nearly three-quarters of 67 published studies in cardiovascular disease, cancer and women’s health. In most cases, the inability to replicate results led to the termination of research efforts, a trend that may help explain why success rates for clinical drug trials have been declining. “Bayer HealthCare has become more cautious when working with published research targets,” says Khusru Asadullah, head of global biomarkers at Bayer’s Berlin headquarters and an author of the article. “Targets now have to be better validated internally before we start big projects.”
In 2012, Lee Ellis of MD Anderson and C. Glenn Begley, former head of global cancer research at pharmaceutical company Amgen, chronicled in the journal Nature how Amgen scientists had attempted to replicate 53 landmark cancer studies and could confirm only six. The scientists even consulted with the original investigators, who in some cases were unable to repeat their own experiments. But because the Amgen investigators were bound by confidentiality agreements, the paper left many questions unanswered. “They didn’t reveal a list of which studies they couldn’t reproduce,” Fang says.
Begley, now chief scientific officer at TetraLogic Pharmaceuticals, has since provided more details, publishing his analysis, “Six Red Flags for Suspect Work,” in Nature in 2013. “If researchers got the results they liked in the first experiment, they usually didn’t repeat it,” Begley says. Much of today’s research isn’t fudged or fraudulent, he says: “It’s lazy and sloppy.”
Research by Ellis and a team at MD Anderson, published in PLOS ONE in 2013, provided yet another perspective on the reproducibility problem. They reported that half of more than 400 respondents at the institution said they had been unable to replicate at least one published study. Seventy-eight percent of those scientists had attempted to contact the authors of the original paper to identify the problem, but only one-third received a helpful response. More than 40 percent reported difficulty finding an outlet to publish findings that contradicted previous results. Such problems increase the likelihood that “suspect findings may lead to the development of entire drug development or biomarker programs that are doomed to fail,” the authors wrote.