Nature Editorial: To ensure their results are reproducible, analysts should show their workings.

See “Devil in the Details”, Nature, Volume 470, Pages 305–306, 17 February 2011.

How many aid agencies could do the same, when their projects manage to deliver good results? Are there lessons to be learned here?

Article text:

As analysis of huge data sets with computers becomes an integral tool of research, how should researchers document and report their use of software? This question was brought to the fore when the release of e-mails stolen from climate scientists at the University of East Anglia in Norwich, UK, generated a media fuss in 2009, and has been widely discussed, including in this journal. The issue lies at the heart of scientific endeavour: how detailed an information trail should researchers leave so that others can reproduce their findings?

The question is perhaps most pressing in the field of genomics and sequence analysis. As biologists process larger and more complex data sets and publish only the results, some argue that the reporting of how those data were analysed is often insufficient.

“The Truth Wears Off” & “More Thoughts on the Decline Effect”

These two articles by Jonah Lehrer in the New Yorker (13/12/2010, 03/01/2011) provide a salutary reminder of the limitations of experimental methods as a means of research, when undertaken by mere mortals with all their human frailties. They do not, in my reading, dismiss or invalidate the use of experimental methods, but they do highlight the need for a longer-term view of their efficacy, one which situates their use within a social context.

Many thanks to Irene Guijt for bringing these articles to my attention, and that of others, via the Pelican email list.

In “The Truth Wears Off,” Lehrer “wanted to explore the human side of the scientific enterprise. My focus was on a troubling phenomenon often referred to as the ‘decline effect’, which is the tendency of many exciting scientific results to fade over time. This empirical hiccup afflicts fields from pharmacology to evolutionary biology to social psychology. There is no simple explanation for the decline effect, but the article explores several possibilities, from the publication biases of peer-reviewed journals to the ‘selective reporting’ of scientists who sift through data.”

In “More Thoughts on the Decline Effect” Lehrer responds to the various emails, tweets and comments posted in reply to his article. Amongst other things, he says: “I think the decline effect is an important reminder that we shouldn’t simply reassure ourselves with platitudes about the rigors of replication or the inevitable corrections of peer review. Although we often pretend that experiments settle the truth for us—that we are mere passive observers, dutifully recording the facts—the reality of science is a lot messier. It is an intensely human process, shaped by all of our usual talents, tendencies, and flaws.”

There are also many interesting comments following both of Lehrer’s posts.
