Experimental replication

by PhilipJ on 27 July 2006

This week’s Nature has a news feature on experiments that are challenging to replicate independently in other labs (subscription required). It discusses how widespread replicability is (by tracking 19 papers published in a single issue of Nature in 2002), when a result goes from unreplicated to unreplicable (whether through fraud or an unintentional mishap during experimentation), and how journals could make it easier for experimenters to reproduce results.

The good news was that the majority of the 19 papers had been replicated in some fashion; only two were still causing a stir in the scientific community, and in neither case is fraud suspected. A comment that surprised me, however, was the mention that roughly a third of all published papers receive a total of zero citations, meaning a huge amount of work goes unreplicated simply for lack of interest. Boring science seems to be a big problem.

One of the suggestions proposed in the feature is that journals should publish more papers detailing failed attempts at repeating an experiment. When contacted, however, few scientists were enthusiastic about the idea, preferring to keep the threshold of evidence required to publish a failure to observe the same effect high. Failure to reproduce a result is usually attributed to the incompetence of the experimentalists, and isn’t necessarily taken as a measure of whether the original results are real.

Another suggestion is to take the methods sections of papers more seriously. Supplemental information is here to stay, but the editorial standards applied to the rest of the paper should also be applied to the methods and supplemental, online-only sections. Says Michael Ronemus of the newly launched Cold Spring Harbor Protocols, a journal exclusively covering experimental protocols:

When I started in science I was told that I should be able to repeat an experiment by reading the paper, but that is almost never the case.

Nature Protocols, from the Nature Publishing Group, is dedicated to the same topic, and will include a troubleshooting section detailing (I presume) the difficulties the authors overcame to get an experiment working. Both the CSHL and Nature journals will offer some kind of discussion forum, too.

I’ve already mentioned PLoS ONE, but it gets a nod in the feature as a journal which can accommodate both of the above: as long as the methods used are sound (in the case of trying to reproduce an experiment, I imagine this simply means following a previous paper’s methods as closely as possible), failing to reproduce a result is as valid a result as any for a journal aiming to publish any and all methodologically sound science. With the discussion forum that the PLoS ONE website mentions, discussion of the methods of a published article should be easy to initiate.

Finally, the arXiv gets a nod, too, for its embrace of the blogosphere. With the recent introduction of a trackback mechanism (though not without some drama), bloggers can reference an arXiv pre-print and have a link to their post appear on the pre-print’s page itself.
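For the technically curious, the trackback mechanism itself is simple: the blog sends a form-encoded HTTP POST to the paper’s trackback URL, carrying the post’s title, permalink, and a short excerpt. Here is a minimal Python sketch, assuming arXiv exposes endpoints of the form arxiv.org/trackback/&lt;paper-id&gt;; the paper id and blog details below are placeholders, not from the feature:

    import urllib.parse
    import urllib.request

    # A trackback ping is just a form-encoded HTTP POST to the paper's
    # trackback URL. The endpoint pattern and all identifiers below are
    # illustrative placeholders.
    trackback_url = "http://arxiv.org/trackback/quant-ph/0605249"  # hypothetical id

    # Standard fields from the Trackback specification.
    payload = urllib.parse.urlencode({
        "title": "Commentary on an arXiv pre-print",  # title of the blog post
        "url": "http://biocurious.com/some-post",     # permalink being advertised
        "excerpt": "A one-line summary of the post.",
        "blog_name": "Biocurious",
    }).encode("ascii")

    request = urllib.request.Request(
        trackback_url,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

    # The server answers with a small XML document; <error>0</error> means
    # the ping was accepted and the pre-print page will link back to the post.
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))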

The feature ends with a short anecdote: Peter Medawar, a Nobel prize-winning immunologist, complained that science papers were all fraudulent, as they “describe research as a smooth transition from hypothesis through experiment to conclusions, when the truth is always messier than that”. By allowing comments and discussion outside the published paper itself, the messy truth may become a little clearer to those not directly involved with the work, and it will hopefully be easier for others to find the details required to succeed in replicating the results.


