Crowdsourced research: Many hands make tight work

Crowdsourced research: Many hands make tight work, Raphael Silberzahn & Eric L. Uhlmann, Nature, 7 October 2015

Selected quotes:

“Crowdsourcing research can balance discussions, validate findings and better inform policy”

Crowdsourcing research can reveal how conclusions are contingent on analytical choices. Furthermore, the crowdsourcing framework also provides researchers with a safe space in which they can vet analytical approaches, explore doubts and get a second, third or fourth opinion. Discussions about analytical approaches happen before committing to a particular strategy. In our project, the teams were essentially peer reviewing each other’s work before even settling on their own analyses. And we found that researchers did change their minds through the course of analysis.

Crowdsourcing also reduces the incentive for flashy results. A single-team project may be published only if it finds significant effects; participants in crowdsourced projects can contribute even with null findings. A range of scientific possibilities are revealed, the results are more credible and analytical choices that seem to sway conclusions can point research in fruitful directions. What is more, analysts learn from each other, and the creativity required to construct analytical methodologies can be better appreciated by the research community and the public.

The transparency resulting from a crowdsourced approach should be particularly beneficial when important policy issues are at stake. The uncertainty of scientific conclusions about, for example, the effects of the minimum wage on unemployment, and the consequences of economic austerity policies should be investigated by crowds of researchers rather than left to single teams of analysts.

Under the current system, strong storylines win out over messy results. Worse, once a finding has been published in a journal, it becomes difficult to challenge. Ideas become entrenched too quickly, and uprooting them is more disruptive than it ought to be. The crowdsourcing approach gives space to dissenting opinions.

Researchers who are interested in starting or participating in collaborative crowdsourcing projects can access resources available online. We have publicly shared all our materials and survey templates, and the Center for Open Science has just launched ManyLab, a web space where researchers can join crowdsourced projects.

Summary of this Nature article in this week's Economist ("Honest disagreement about methods may explain irreproducible results", The Economist, p. 82, 10 October 2015):

“IT SOUNDS like an easy question for any half-competent scientist to answer. Do dark-skinned footballers get given red cards more often than light-skinned ones? But, as Raphael Silberzahn of IESE, a Spanish business school, and Eric Uhlmann of INSEAD, an international one (he works in the branch in Singapore), illustrate in this week’s Nature, it is not. The answer depends on whom you ask, and the methods they use.

Dr Silberzahn and Dr Uhlmann sought their answers from 29 research teams. They gave their volunteers the same wodge of data (covering 2,000 male footballers for a single season in the top divisions of the leagues of England, France, Germany and Spain) and waited to see what would come back.

The consensus was that dark-skinned players were about 1.3 times more likely to be sent off than were their light-skinned confrères. But there was a lot of variation. Nine of the research teams found no significant relationship between a player’s skin colour and the likelihood of his receiving a red card. Of the 20 that did find a difference, two groups reported that dark-skinned players were less, rather than more, likely to receive red cards than their paler counterparts (only 89% as likely, to be precise). At the other extreme, another group claimed that dark-skinned players were nearly three times as likely to be sent off.

Dr Uhlmann and Dr Silberzahn are less interested in football than in the way science works. Their study may shed light on a problem that has quite a few scientists worried: the difficulty of reproducing many results published in journals.

Fraud, unconscious bias and the cherry-picking of data have all been blamed at one time or another—and all, no doubt, contribute. But Dr Uhlmann’s and Dr Silberzahn’s work offers another explanation: that even scrupulously honest scientists may disagree about how best to attack a data set. Their 29 volunteer teams used a variety of statistical models (“everything from Bayesian clustering to logistic regression and linear modelling”, since you ask) and made different decisions about which variables within the data set were deemed relevant. (Should a player’s playing position on the field be taken into account? Or the country he was playing in?) It was these decisions, the authors reckon, that explain why different teams came up with different results.

How to get around this is a puzzle. But when important questions are being considered—when science is informing government decisions, for instance—asking several different researchers to do the analysis, and then comparing their results, is probably a good idea.”
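The point about analytical choices in the Economist paragraph above can be made concrete with a small simulation. The sketch below is purely illustrative: it uses synthetic data, not the red-card data set from the Nature study, and the variable names, effect sizes and use of Python's statsmodels library are all assumptions. It fits the same outcome with two defensible logistic-regression specifications, one with and one without a league covariate, and shows how the estimated odds ratio for skin tone can shift depending on that choice.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Purely illustrative: synthetic data, not the Nature study's red-card data set.
rng = np.random.default_rng(0)
n = 5000

# Simulated players: league (0-3) and skin_tone (0 = light, 1 = dark),
# with skin tone distributed unevenly across leagues to create confounding.
league = rng.integers(0, 4, n)
skin_tone = rng.binomial(1, 0.2 + 0.1 * league)

# In this simulation, red-card probability depends mainly on how strict each
# league's referees are and only weakly on skin tone.
logit_p = -3.0 + 0.4 * league + 0.1 * skin_tone
p = 1.0 / (1.0 + np.exp(-logit_p))
red_card = rng.binomial(1, p)

df = pd.DataFrame({"red_card": red_card, "skin_tone": skin_tone, "league": league})

# Specification A: skin tone only (no covariates).
spec_a = smf.logit("red_card ~ skin_tone", data=df).fit(disp=False)
# Specification B: skin tone plus league as a categorical covariate.
spec_b = smf.logit("red_card ~ skin_tone + C(league)", data=df).fit(disp=False)

# Odds ratio for skin tone under each specification.
print("Spec A odds ratio:", round(float(np.exp(spec_a.params["skin_tone"])), 2))
print("Spec B odds ratio:", round(float(np.exp(spec_b.params["skin_tone"])), 2))

In this simulated setup, specification A tends to overstate the skin-tone effect because league strictness is correlated with skin tone, while specification B largely removes that confounding. Which specification is "right" is exactly the kind of judgment call the 29 teams made differently.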

See also another summary of the Nature article: "A Fix for Social Science", Francis Diep, Pacific Standard, 7 October 2015.


One thought on “Crowdsourced research: Many hands make tight work”

  1. It is quite true: if one's work goes through many eyes for review, the chances are good of getting a well-refined piece of research, both in terms of the design and the expected results.
