But before testing whether the drug could help human Alzheimer’s patients, scientists needed to make sure it worked in mice—and while some labs were able to reproduce the drug’s effect on memory and cognition, the plaque reduction effect couldn’t be replicated. In four technical comments published in Science last May, several independent research teams, including groups whose reports were co-authored by Sisodia and Tanzi, found no effect on plaque levels in lab mice treated with bexarotene. Months later, in August, a paper in the journal Molecular Neurodegeneration reported that researchers from Johns Hopkins had also failed to replicate either the plaque reduction or the memory and cognition effects found in the Case Western research.
For anyone hoping to see progress in the fight against Alzheimer’s, the failure of the follow-up research was disappointing. But it also points to both the necessity and the challenges of independently validating published research findings. While reproducibility is considered a bedrock of scientific discovery, there has been growing concern about the quality of recent studies. “Data reproducibility means that the seminal findings of a paper can be reproduced in any qualified lab that has appropriate resources and expertise,” says Lee Ellis, a surgeon and researcher at the University of Texas MD Anderson Cancer Center in Houston. “If you try to reproduce all of the findings in a paper, you’re likely to find some divergent outcomes, but the point of the paper should remain the same.”
But Ellis and others who have explored these issues have found that medical research, including seemingly groundbreaking work, is reproducible less than half the time. “The unspoken rule is that at least 50 percent and more like 70 percent of the studies published even in top-tier academic journals can’t be repeated,” says Bruce Booth, a partner at Atlas Venture, a venture capital firm in Boston. “Everyone recognizes reproducibility as a big problem,” says Elizabeth Iorns, a cancer researcher in Palo Alto, Calif., and chief executive of Science Exchange, an online marketplace for scientific resources and expertise.
Many factors contribute to the low odds of reproducibility. The original experiments may have been poorly designed, or there could be problems with how results were analyzed. The trend may also be a symptom of a scientific community in which the job market and funding are tighter than ever, and researchers must publish or perish, leading to a lack of rigor in their research. “It is a dysfunctional scientific climate,” says Ferric Fang, a professor at the University of Washington School of Medicine and editor-in-chief of the journal Infection and Immunity. And because journals favor original research, scientists have little incentive to pursue replicative work.