3ie Public Lecture: What does evidence-based development have to learn from evidence-based medicine? What have we learned from 3ie's experience in evidence-based development?

Speaker: Chris Whitty, LSHTM & DFID
Speaker: Howard White, 3ie
Date and time: 15 April 2013, 5.30 – 7.00 pm
Venue: John Snow Lecture Theatre A&B, London School of Hygiene & Tropical Medicine, Keppel Street, London, UK

Evidence-based medicine has resulted in better medical practices, saving hundreds of thousands of lives across the world. Can evidence-based development achieve the same? Critics argue that it cannot. Technical solutions cannot solve the political problems at the heart of development. Randomized controlled trials cannot unravel the complexity of development. And these technocratic approaches have resulted in a focus on what can be measured rather than what matters. From the vantage point of a medical practitioner with a key role in development research, Professor Chris Whitty will answer these critics, pointing out that many of the same objections were heard in the early days of evidence-based medicine. Health is also complex, a social issue as well as a technical one. So what are the lessons from evidence-based medicine for filling the evidence gap in development?

The last decade has seen a rapid growth in the production of impact evaluations. What do they tell us, and what do they not? Drawing on the experience of over 100 studies supported by 3ie, Professor Howard White presents some key findings about what works and what doesn't, with examples of how evidence from impact evaluations is being used to improve lives. Better evaluations will lead to better evidence and so better policies. What are the strengths and weaknesses of impact evaluations as currently practised, and how may they be improved?

Chris Whitty is a clinical epidemiologist, and Chief Scientific Adviser and Director of the Research and Evidence Division at the UK Department for International Development (DFID). He is Professor of International Health at LSHTM; before joining DFID he was Director of the LSHTM Malaria Centre and served on the boards of various other organisations.

Howard White is the Executive Director of 3ie, co-chair of the Campbell International Development Coordinating Group, and Adjunct Professor at the Alfred Deakin Research Institute, Deakin University, Geelong. His previous experience includes leading the impact evaluation programme of the World Bank's Independent Evaluation Group and, before that, several multi-country evaluations.

Phil Davies is Head of the London office of 3ie, where he is responsible for 3ie's Systematic Reviews programme. Prior to 3ie he was the Executive Director of Oxford Evidentia, and has also served as a senior civil servant in the UK Cabinet Office and HM Treasury, responsible for policy evaluation and analysis.

First come, first served. Doors open at 5.15 pm.
More about 3ie: www.3ieimpact.org

Special Issue on Systematic Reviews – J. of Development Effectiveness

Volume 4, Issue 3, 2012

  • Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews
  • How to do a good systematic review of effects in international development: a tool kit
    • Hugh Waddington, Howard White, Birte Snilstveit, Jorge Garcia Hombrados, Martina Vojtkova, Philip Davies, Ami Bhavsar, John Eyers, Tracey Perez Koehlmoos, Mark Petticrew, Jeffrey C. Valentine & Peter Tugwell, pages 359-387
  • Systematic reviews: from ‘bare bones’ reviews to policy relevance
  • Narrative approaches to systematic review and synthesis of evidence for international development policy and practice
  • Purity or pragmatism? Reflecting on the use of systematic review methodology in development
  • The benefits and challenges of using systematic reviews in international development research
    • Richard Mallett, Jessica Hagen-Zanker, Rachel Slater & Maren Duvendack, pages 445-455
  • Assessing ‘what works’ in international development: meta-analysis for sophisticated dummies
    • Maren Duvendack, Jorge Garcia Hombrados, Richard Palmer-Jones & Hugh Waddington, pages 456-471
  • The impact of daycare programmes on child health, nutrition and development in developing countries: a systematic review

Addressing attribution of cause and effect in small n impact evaluations: towards an integrated framework

Howard White and Daniel Phillips, International Initiative for Impact Evaluation, Working Paper 15, May 2012. Available as MS Word document.


With the results agenda in the ascendancy in the development community, there is an increasing need to demonstrate that development spending makes a difference, that it has an impact. This requirement to demonstrate results has fuelled an increase in the demand for, and production of, impact evaluations. There is considerable consensus among impact evaluators on how to conduct large n impact evaluations, which involve tests of statistical difference in outcomes between the treatment group and a properly constructed comparison group. However, no such consensus exists when it comes to assessing attribution in small n cases, i.e. when there are too few units of assignment to permit tests of statistical difference in outcomes between the treatment group and a properly constructed comparison group.
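To make the "large n" approach concrete: the consensus method boils down to a test of statistical difference in mean outcomes between the treatment and comparison groups, for example Welch's t-test. The sketch below uses entirely hypothetical outcome data (the variable names and numbers are illustrative, not from the paper):

```python
# Illustrative sketch of a large n impact evaluation test:
# compare mean outcomes between a treatment group and a
# properly constructed comparison group using Welch's t-test.
# All data below is hypothetical, for illustration only.
import math
import statistics

def welch_t(treatment, comparison):
    """Welch's t statistic for a difference in mean outcomes
    (does not assume equal variances across groups)."""
    m1, m2 = statistics.mean(treatment), statistics.mean(comparison)
    v1, v2 = statistics.variance(treatment), statistics.variance(comparison)
    n1, n2 = len(treatment), len(comparison)
    se = math.sqrt(v1 / n1 + v2 / n2)  # standard error of the difference
    return (m1 - m2) / se

# Hypothetical outcomes (e.g. test scores) for treated and comparison units.
treated = [72, 75, 71, 78, 74, 76, 73, 77]
control = [68, 70, 67, 71, 69, 72, 66, 70]

print(round(welch_t(treated, control), 2))  # → 4.78
```

With many units of assignment, a t statistic this large lets the evaluator attribute the difference in outcomes to the programme with known statistical confidence. It is exactly this machinery that becomes unavailable in the small n case the paper addresses, where there are too few units to support such a test.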

We examine various evaluation approaches that could potentially be suitable for small n analysis and find that a number of them share a methodological core which could provide a basis for consensus. This common core involves the specification of a theory of change together with a number of further alternative causal hypotheses. Causation is established beyond reasonable doubt by collecting evidence to validate, invalidate, or revise the hypothesised explanations, with the goal of rigorously evidencing the links in the actual causal chain.

We argue that, properly applied, approaches which undertake these steps can be used to address attribution of cause and effect. However, we also find that more needs to be done to ensure that small n evaluations minimise the biases which are likely to arise from the collection, analysis and reporting of qualitative data. Drawing on insights from the field of cognitive psychology, we argue that there is scope for considerable bias, both in the way in which respondents report causal relationships, and in the way in which evaluators gather and present data; this points to the need to incorporate explicit and systematic approaches to qualitative data collection and analysis as part of any small n evaluation.



by Bertha Briceño, Water and Sanitation Program, World Bank; Laura Cuesta, University of Wisconsin-Madison; and Orazio Attanasio, University College London
December 2011, 3ie Working Paper 14, available as pdf

“Abstract: As more resources are being allocated to impact evaluation of development programs, the need to map out the utilization and influence of evaluations has been increasingly highlighted. This paper aims at filling this gap by describing and discussing experiences from four large impact evaluations in Colombia on a case-study basis. On the basis of (1) learning from our prior experience in both managing and conducting impact evaluations, (2) desk review of available documentation from the Monitoring & Evaluation system, and (3) structured interviews with government actors, evaluators and program managers, we benchmark each evaluation against eleven standards of quality. From this benchmarking exercise, we derive five key lessons for conducting high quality and influential impact evaluations: (1) investing in the preparation of good terms of reference and identification of evaluation questions; (2) choosing the best methodological approach to address the evaluation questions; (3) adopting mechanisms to ensure evaluation quality; (4) laying out the incentives for involved parties in order to foster evaluation buy-in; and (5) carrying out a plan for quality dissemination.”

3ie proposes a Commitment to Evaluation Indicator (c2e)

International Initiative for Impact Evaluation (3ie) – Terms of Reference for a Research Consultancy – White Paper for the Commitment to Evaluation Indicator

“Background: Experience to date shows that the use of evidence by donors and governments when designing and adopting development programmes remains sporadic. There are many examples where a programme was shown to have no impact but was expanded, as well as examples of programmes with positive impact being terminated. To promote better use of evaluation evidence in policy making and programme design, 3ie is launching a Commitment to Evaluation (c2e) indicator. The indicator will provide a measurement of government and donor agency use of evaluation evidence allowing for recognition and reward for progress and good practice. The indicator will be developed and piloted in 2012 for donor agencies with the intent to recognize donors that make systematic use of evidence and thus motivate others to do the same.

3ie’s initiative follows the example of other successful efforts to use awards or indexes to focus the attention of policymakers. Indexes such as the UN Development Programme’s Human Development Index, Transparency International’s Corruption Perceptions Index, and the Center for Global Development’s Quality of ODA (QuODA) index have raised awareness of key issues and influenced the practice of governments and development agencies. The Mexican National Council for the Evaluation of Social Development Policy (CONEVAL) annual award for good practices in social evaluation has strengthened political buy-in and commitment to evaluation in Mexico. In developing this c2e indicator, 3ie will draw on the lessons learned by similar initiatives on how best to motivate and reward evaluation practices and build and run an effective cross-agency and cross-country indicator. More detailed background information on the rationale and theory of change behind the project is available in the discussion note in the annex.” See ToRs for rest of the text including annex.


Eric Roetman, International Child Support. Email: eric.roetman@ics.nl

3ie Working Paper 11, March 2011. Found courtesy of @txtpablo

“Development agencies are under great pressure to show results and evaluate the impact of projects and programmes. This paper highlights the practical and ethical dilemmas of conducting impact evaluations for NGOs (Non Governmental Organizations). Specifically the paper presents the case of the development organization, International Child Support (ICS). For almost a decade, all of ICS’ projects in West Kenya were evaluated through rigorous, statistically sound, impact evaluations. However, as a result of logistical and ethical dilemmas ICS decided to put less emphasis on these evaluations. This particular case shows that rigorous impact evaluations are more than an additional step in the project cycle; impact evaluations influence every step of the programme and project design. These programmatic changes, which are needed to make rigorous impact evaluations possible, may go against the strategy and principles of many development agencies. Therefore, impact evaluations not only require additional resources but also present organizations with a dilemma if they are willing to change their approach and programmes.”

[RD comment: I think this abstract is somewhat misleading. My reading of the story in this paper is that ICS’s management made some questionable decisions, not that there was something intrinsically questionable about rigorous impact evaluations per se. In the first half of the story, ICS management allowed researchers, and their methodological needs, to drive ICS programming decisions, rather than to serve and inform those decisions. In the second half of the story, the evidence from some studies of the efficacy of particular forms of participatory development seems to have been overridden by the sheer strength of ICS’s beliefs in the primacy of participatory approaches. Of course this would not be the first time that evidence has been sidelined when an organisation’s core values and beliefs are threatened.]

Randomised controlled trials, mixed methods and policy influence in international development – Symposium

Thinking out of the black box. A 3ie-LIDC Symposium
Date: 17:30 to 19:30 Monday, May 23rd 2011
Venue: John Snow Lecture Theatre, London School of Hygiene and Tropical Medicine (LSHTM) Keppel Street, London, WC1E 7HT

Professor Nancy Cartwright, Professor of Philosophy, London School of Economics
Professor Howard White, Executive Director, 3ie
Chair: Professor Jeff Waage, Director, LIDC

Randomised Controlled Trials (RCTs) have moved to the forefront of the development agenda as a means of assessing development results and the impact of development programs. In the words of Esther Duflo – one of the strongest advocates of RCTs – RCTs allow us to know which development efforts help and which cause harm.

But RCTs are not without their critics, who question both their usefulness in providing substantive lessons about the program being evaluated and whether their findings can be generalized to other settings.

This symposium brings perspectives from the philosophy of science, and a mixed method approach to impact analysis, to this debate.

For more information contact: 3ieuk@3ieimpact.org

PS1: Nancy Cartwright wrote “Are RCTs the Gold Standard?” in 2007

PS2: The presentation by Howard White is now available here  – http://tinyurl.com/3dwlqwn but without audio

Sound expectations: from impact evaluations to policy change

3ie Working Paper 12, 2011, by the Center for the Implementation of Public Policies Promoting Equity and Growth (CIPPEC). Emails: vweyrauch@cippec.org, gdiazlangou@cippec.org


“This paper outlines a comprehensive and flexible analytical conceptual framework to be used in the production of a case study series. The cases are expected to identify factors that help or hinder rigorous impact evaluations (IEs) from influencing policy and improving policy effectiveness. This framework has been developed to be adaptable to the reality of developing countries. It is intended as an analytical-methodological tool that should enable researchers to produce case studies identifying the factors that affect and explain impact evaluations’ policy influence potential. The approach should also enable comparison between cases and regions to draw lessons that are relevant beyond the cases themselves.

There are two different, though interconnected, issues that must be dealt with while discussing the policy influence of impact evaluations. The first issue has to do with the type of policy influence pursued and, aligned with this, the determination of the accomplishment (or not) of the intended influence. In this paper, we first introduce the discussion regarding the different types of policy influence objectives that impact evaluations usually pursue, which will ultimately help determine whether policy influence was indeed achieved. This discussion is mainly centered on whether an impact evaluation has had impact on policy. The second issue is related to the identification of the factors and forces that mediate the policy influence efforts and is focused on why the influence was achieved or not. We have identified and systematized the mediating factors and forces, and we approach them in this paper from the demand and supply perspective, considering as well the intersection between these two.

The paper concludes that, ultimately, the fulfillment of policy change based on the results of impact evaluations is determined by the interplay of the policy influence objectives with the factors that affect the supply and demand of research in the policymaking process.

The paper is divided into four sections. A brief introduction is followed by an analysis of policy influence as an objective of research, specifically of impact evaluations. The third section identifies factors and forces that enhance or undermine influence in public policy decision making. The research concludes by pointing out the importance of measuring policy influence and enumerating a series of challenges that have to be further assessed.”

AusAID-DFID-3ie call for Systematic Reviews

The Australian Agency for International Development (AusAID), the UK’s Department for International Development (DFID) and the International Initiative for Impact Evaluation (3ie) have just launched a joint call for proposals for systematic reviews to strengthen the international community’s capacity for evidence-based policy making. AusAID, DFID and 3ie have identified around 59 priority systematic review questions across several themes: education; health; social protection and social inclusion; governance, fragile states, conflict and disasters; environment; infrastructure and technology; agriculture and rural development; economic development; and aid delivery and effectiveness.

Systematic reviews examine the existing evidence on a particular intervention or program in low and middle income countries, drawing also on evidence from developed countries when pertinent. The studies should be carried out according to recognized international standards and guidelines. All studies will be subject to an external review process and for this purpose teams will be encouraged to register for peer review with a relevant systematic review coordinating body.

Applications must be submitted using 3ie’s online application system. The deadline for submission of applications is 9am GMT on Monday, November 29, 2010.

For information on how to apply, guidance documents and the call for proposals, go to http://www.3ieimpact.org/systematicreviews/3ie-ausaid-dfid.php
