By Ole Dahl Rasmussen, University of Southern Denmark and DanChurchAid; Nikolaj Malchow-Møller, University of Southern Denmark; and Thomas Barnebeck Andersen, University of Southern Denmark. April 2011. Available as pdf. Found courtesy of @ithorpe
Abstract: Recent advances in the use of randomized controlled trials to evaluate the effect of development interventions promise to enhance our knowledge of what works and why. A core argument supporting randomised studies is the claim that they have high internal validity. We argue that this claim is weak as long as a trial registry of development interventions is not in place. Without a trial registry, the possibilities for data mining, created by analyses of multiple outcomes and subgroups, undermine internal validity. Drawing on experience from evidence-based medicine and recent examples from microfinance, we argue that a trial registry would also enhance external validity and foster innovative research.
RD Comment: Well worth reading. The proposal and its supporting argument are relevant not only to thinking about RCTs, but to all forms of impact evaluation. In fact, one could argue for similar registries not only where new interventions are being tested, but also where interventions are being replicated or scaled up (where there also needs to be some accountability for, and analysis of, the results). The problem being addressed, perhaps not made clearly enough in the abstract, is a pervasive bias towards publicising and publishing positive results, and a failure to acknowledge and use negative results. One quote is illustrative: “A recent review of evidence on microcredit found that all except one of the evaluations carried out by donor agencies and large NGOs showed positive and significant effects, suggesting that bias exists (Kovsted et al., 2009)”
Related to this issue of the failure to identify and use negative results, see this blog posting on “Do we need a Minimum Level of Failure (MLF)?”
Dear Rick
Thanks for posting on our paper. I agree with you that a trial registry is a good idea for new as well as replicated interventions. In the paper, we actually argue that all surveys, whether of new or old interventions, RCTs or non-RCTs, would benefit from registering their outcomes and subgroups before the survey is initiated. In this way, a trial registry would simply allow us to distinguish between primary and secondary analysis of data. Today, all quantitative analysis in development can be considered of the latter kind, which unfortunately is prone to data mining: significant relationships get reported and published.
Kind regards,
Ole