“There has been enormous progress in impact evaluation of development interventions in the last five years. The 2006 CGD report When Will We Ever Learn? claimed that there was little rigorous evidence of what works in development. But there has been a huge surge in studies since then. By our count, there are over 800 completed and ongoing impact evaluations of socio-economic development interventions in low- and middle-income countries.
But this increase in numbers is just the start of the process of ‘improving lives through impact evaluation’, which was the sub-title of the CGD report and has become 3ie’s vision statement. Here are five major challenges facing the impact evaluation community:
1. Identify and strengthen processes to ensure that evidence is used in policy: studies are not an end in themselves, but a means to the end of better policy, programs and projects, and so better lives. At 3ie we are starting to document cases in which impact evaluations have, and have not, influenced policy, to better understand how to go about this. DFID now requires evidence to justify support for new programs, an example which could be followed by other agencies.
2. Institutionalize impact evaluation: the development community is very prone to faddism. Impact evaluation could go the way of other fads and fall into disfavour. We need to demonstrate the usefulness of impact evaluation to help prevent this happening, hence my first point. But we also need to take steps to institutionalize the use of evidence in governments and development agencies. This step includes ensuring that ‘results’ are measured by impact, not outcome monitoring.
3. Improve evaluation designs to answer policy-relevant questions: quality impact evaluations embed the counterfactual analysis of attribution in a broader analysis of the causal chain, allowing an understanding of why interventions work, or not, and yielding policy-relevant messages for better design and implementation. There have been steps in this direction, but researchers need a better understanding of the approach and must genuinely embrace mixed methods.
4. Make progress with small n impact evaluations: we all accept that we should be issues-led, not methods-led, and use the most appropriate method for the evaluation questions at hand. But the fact is that there is far more consensus on the evaluation of large n interventions, in which experimental and quasi-experimental approaches can be used, than there is about the approach to be used for small n interventions. If the call to base development spending on evidence of what works is to be heeded, then the development evaluation community needs to move to consensus on this point.
5. Expand knowledge and use of systematic reviews: single impact studies will always be subject to criticisms of weak external validity. Systematic reviews, which draw together evidence from all quality impact studies of a particular intervention in a rigorous manner, give stronger, more reliable messages. There has been an escalation in the production of systematic reviews in development in the last year. The challenge is to ensure that these studies are policy relevant and used by policy makers.”