Silverman summarizes a Nature article (the Nature editorial makes a good read, too) that centers on the work of two cancer researchers who reviewed "landmark papers" published in leading journals and emanating from reputable laboratories. They observed (a) that the overall quality of published preclinical data was poor and (b) that the vast majority of this work -- nearly all of it -- could not be replicated.
Bradpalm1 subsequently followed up on Littlebits' entry with a link to "Sloppy Science," Derek Lowe's weblog column at Corante on the Nature article. Lowe's comments are direct, including: "I think that this problem has been with us for quite a while, and that there are a few factors making it more noticeable: more journals to publish in, for one thing, and increased publication pressure, for another...But there's no doubt that a lot of putatively interesting results in the literature are not real."
Lowe also links to a Reuters article that provides a damning anecdote:
Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."
Such selective publication is just one reason the scientific literature is peppered with incorrect results.
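The "1 result in 6 tries" pattern is easy to quantify. As a rough illustration (a hypothetical simulation, not data from the Nature article or the Begley anecdote), the sketch below estimates how often a lab studying a treatment with no real effect would still see at least one "significant" result if it runs six independent experiments and reports only the best one:

```python
# Hypothetical illustration of selective reporting under a true null effect.
# Each simulated "lab" runs 6 experiments comparing treated vs. control
# samples drawn from the SAME distribution, then counts as a false positive
# if any one experiment reaches p < 0.05. All parameters are illustrative.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_labs, tries_per_lab, n_per_arm, alpha = 10_000, 6, 10, 0.05

false_positive_labs = 0
for _ in range(n_labs):
    p_values = []
    for _ in range(tries_per_lab):
        treated = rng.normal(0.0, 1.0, n_per_arm)  # no real effect
        control = rng.normal(0.0, 1.0, n_per_arm)
        _, p = ttest_ind(treated, control)
        p_values.append(p)
    if min(p_values) < alpha:  # publish the "best story"
        false_positive_labs += 1

print(f"Single-experiment false-positive rate: ~{alpha:.0%}")
print(f"Best-of-{tries_per_lab} false-positive rate: "
      f"{false_positive_labs / n_labs:.1%}")  # roughly 1 - 0.95**6, about 26%
```

Under these assumptions, roughly a quarter of such labs end up with a publishable "positive" finding despite a true effect of zero, which is one mechanical route by which the literature gets peppered with incorrect results.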
Publications don't necessarily lead to good products and businesses, and, by themselves, they are clearly not predictors of good or great outcomes. Repeated and reproduced results generally do, and generally are.