Towards a Plurality of Methods in Project Evaluation: A Contextualised Approach to Understanding Impact Trajectories and Efficacy

Michael Woolcock, January 2009, BWPI Working Paper 73

“Understanding the efficacy of development projects requires not only a plausible counterfactual, but an appropriate match between the shape of the impact trajectory over time and the deployment of a corresponding array of research tools capable of empirically discerning such a trajectory. At present, however, the development community knows very little, other than by implicit assumption, about the expected shape of the impact trajectory from any given sector or project type, and as such is prone to routinely making attribution errors. Randomisation per se does not solve this problem. The sources and manifestations of these problems are considered, along with some constructive suggestions for responding to them.”

Michael Woolcock is Professor of Social Science and Development Policy, and Research Director of the Brooks World Poverty Institute, at the University of Manchester.

[RD Comment: Well worth reading, more than once]

PS: See also the more recent “Guest Post: Michael Woolcock on The Importance of Time and Trajectories in Understanding Project Effectiveness” on the Development Impact blog, 5th May 2011

“Instruments, Randomization and Learning about Development”

Angus Deaton, Research Program in Development Studies, Center for Health and Wellbeing, Princeton University, March 2010. Full text as pdf

There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues, or of development agencies to learn from their own experience. In response, there is increasing use in development economics of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without over-reliance on questionable theory or statistical methods. When RCTs are not possible, the proponents of these methods advocate quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity, and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces similar problems as does quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development and elsewhere. As with IV methods, RCT-based evaluation of projects, without guidance from an understanding of underlying mechanisms, is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and towards the evaluation of theoretical mechanisms.

See also Why Works? by Lawrence Haddad, Development Horizons blog

See also Carlos Barahona’s Randomised Control Trials for the Impact Evaluation of Development Initiatives: A Statistician’s Point of View. Introduction: This [ILAC Working Paper] contains the technical and practical reflections of a statistician on the use of Randomised Control Trial (RCT) designs for evaluating the impact of development initiatives. It is divided into three parts. The first part discusses RCTs in impact evaluation, their origin, how they have developed and the debate they have generated in evaluation circles. The second part examines difficult issues faced in applying RCT designs to the impact evaluation of development initiatives: to what extent this type of design can be applied rigorously, the validity of the assumptions underlying RCT designs in this context, and the opportunities and constraints inherent in their adoption. The third part discusses some of the ethical issues raised by RCTs, the need to establish ethical standards for studies about development options and the need for an open mind in the selection of research methods and tools.
