3ie Public Lecture: What evidence-based development has to learn from evidence-based medicine, and what we have learned from 3ie’s experience in evidence-based development

Speaker: Chris Whitty, LSHTM & DFID
Speaker: Howard White, 3ie
Date and time: 15 April 2013, 5.30 – 7.00 pm
Venue: John Snow Lecture Theatre A&B, London School of Hygiene & Tropical Medicine, Keppel Street, London, UK

Evidence-based medicine has resulted in better medical practices saving hundreds of thousands of lives across the world. Can evidence-based development achieve the same? Critics argue that it cannot. Technical solutions cannot solve the political problems at the heart of development. Randomized controlled trials cannot unravel the complexity of development. And these technocratic approaches have resulted in a focus on what can be measured rather than what matters. From the vantage point of a medical practitioner with a key role in development research, Professor Chris Whitty will answer these critics, pointing out that many of the same objections were heard in the early days of evidence-based medicine. Health is also complex, a social issue as well as a technical one. So what are the lessons from evidence-based medicine for filling the evidence gap in development?

The last decade has seen a rapid growth in the production of impact evaluations. What do they tell us, and what do they not? Drawing on the experience of over 100 studies supported by 3ie, Professor Howard White presents some key findings about what works and what doesn’t, with examples of how evidence from impact evaluations is being used to improve lives. Better evaluations will lead to better evidence and so better policies. What are the strengths and weaknesses of impact evaluations as currently practiced, and how may they be improved?

Chris Whitty is a clinical epidemiologist and Chief Scientific Advisor and Director of the Research and Evidence Division at the UK Department for International Development (DFID). He is Professor of International Health at LSHTM; prior to joining DFID he was Director of the LSHTM Malaria Centre and served on the boards of various other organisations.

Howard White is the Executive Director of 3ie, co-chair of the Campbell International Development Coordinating Group, and Adjunct Professor at the Alfred Deakin Research Institute, Deakin University, Geelong. His previous experience includes leading the impact evaluation programme of the World Bank’s Independent Evaluation Group and, before that, several multi-country evaluations.

Phil Davies is Head of the London office of 3ie, with responsibility for 3ie’s Systematic Reviews programme. Prior to 3ie he was the Executive Director of Oxford Evidentia, and he has also served as a senior civil servant in the UK Cabinet Office and HM Treasury, responsible for policy evaluation and analysis.

First come, first served. Doors open at 5:15 pm.
More about 3ie: www.3ieimpact.org

What counts as good evidence?

by Sandra Nutley, Alison Powell and Huw Davies, Research Unit for Research Utilisation (RURU), School of Management, University of St Andrews, www.ruru.ac.uk, November 2012

Available as pdf. This is a paper for discussion. The authors would welcome comments, which should be emailed to smn@st-andrews.ac.uk or Jonathan.Breckon@nesta.org.uk

In brief

Making better use of evidence is essential if public services are to deliver more for less. Central to this challenge is the need for a clearer understanding of the standards of evidence that can be applied to the research informing social policy. This paper reviews the extent to which it is possible to reach a workable consensus on ways of identifying and labelling evidence. It does this by exploring the efforts made to date and the debates that have ensued. Throughout, the focus is on evidence that is underpinned by research, rather than other sources of evidence such as expert opinion or stakeholder views.

After setting the scene, the review and arguments are presented in five main sections:

We begin by exploring practice recommendations: many bodies provide practice recommendations, but concerns remain as to what kinds of research evidence can or should underpin such labelling schemas.

This leads us to examine hierarchies of evidence: study design has long been used as a key marker for evidence quality, but such ‘hierarchies of evidence’ raise many issues and have remained contested. Extending the hierarchies so that they also consider the quality of study conduct or the use of underpinning theory has enhanced their usefulness but has also exposed new fault-lines of debate.

More broadly, in beyond hierarchies, we recognise that hierarchies of evidence have seen most use in addressing the evidence for what works. As a consequence, several agencies and authors have developed more complex matrix approaches for identifying evidence quality in ways that are more closely linked to the wider range of policy or practice questions being addressed.

Strong evidence, or just good enough? A further pragmatic twist comes with the recognition that evaluative evidence is always under development. Thus it may be more helpful to think of an ‘evidence journey’ from promising early findings to substantive bodies of knowledge.

Finally, we turn to the uses and impacts of standards of evidence and endorsing practices. In this section we raise many questions as to the use, uptake and impacts of evidence labelling schemes, but are able to provide few definitive answers as the research here is very patchy.

We conclude that there is no simple answer to the question of what counts as good evidence. It depends on what we want to know, for what purposes, and in what contexts we envisage that evidence being used. Thus, while there is a need to debate standards of evidence, we should be realistic about the extent to which such standard-setting will shape complex, politicised decision-making by policy makers, service managers and local practitioners.

 

Evidence of the effectiveness of evidence?

Heart + Mind? Or Just Heart? Experiments in Aid Effectiveness (And a Contest!), by Dean Karlan, 05/27/2011, 4:00 pm. Found courtesy of @poverty_action

RD comment: There is a killer assumption behind many of the efforts being made to measure aid effectiveness – that evidence of the effectiveness of specific aid interventions will make a difference. That is, it will be used to develop better policies and practices. But, as far as I know, much less effort is being invested in testing this assumption, to find out when and where evidence works this way, or not. This is worrying, because anyone looking into how policies are actually made knows that it is often not a pretty picture.

That is why, contrary to my normal policy, I am publicising a blog posting. This posting is by Dean Karlan on an actual experiment that looks at the effect of providing evidence of an aid intervention (a specific form of micro-finance assistance) on the willingness of individual donors to make donations to the aid agency that is delivering the intervention. This relatively simple experiment is now underway.

Equally interesting is the fact that the author has launched, albeit on a very modest scale, a prediction market on the likely results of this experiment. Visitors to the blog are asked to make their predictions about the results of the experiment. When the results of the experiment are available, Dean will identify and reward the most successful “bidder” (with two free copies of his new book More Than Good Intentions). Apart from the fun element involved, the use of a prediction market will enable Dean to identify the extent to which his experiment has generated new knowledge [i.e. the experimental results differ a lot from the average prediction], versus confirmed existing common knowledge [i.e. the results match the average prediction]. That sort of thing does not happen very often.
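
To make this concrete, here is a small, purely illustrative sketch (the numbers and the function are mine, not Dean’s) of how one might gauge how surprising an experimental result is relative to the average prediction, scaled by how much the predictions themselves vary.

```python
# Illustrative only: comparing an experiment's observed result with the average
# of readers' predictions, along the lines described above. All numbers and
# names are made up for the example.
from statistics import mean, stdev

def surprise_score(predictions, observed):
    """Distance of the observed result from the average prediction, in units
    of the spread of predictions (a rough gauge of 'new knowledge')."""
    avg = mean(predictions)
    spread = stdev(predictions) if len(predictions) > 1 else float("nan")
    return (observed - avg) / spread if spread else float("nan")

# Example: predicted percentage change in donations when evidence is shown.
predictions = [5.0, 2.5, -1.0, 4.0, 3.5, 0.0]   # readers' guesses (illustrative)
observed = -3.0                                  # experimental estimate (illustrative)

print(f"Average prediction: {mean(predictions):.1f}, observed: {observed:.1f}")
print(f"Surprise score: {surprise_score(predictions, observed):.2f} "
      f"(near 0 = confirms common knowledge; large = new knowledge)")
```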

So, I encourage you to visit Dean’s blog and participate. You do this by making your predictions using the comment facility at the end of the blog post (where you can also read others’ predictions, plus their comments).

Learning how to learn: eight lessons for impact evaluations that make a difference

ODI Background Notes, April 2011. Author: Ben Ramalingam

This Background Note outlines key lessons on impact evaluations, utilisation-focused evaluations and evidence-based policy. While methodological pluralism is seen as the key to effective impact evaluation in development, the emphasis here is not on methods per se. Instead, the focus is on the range of factors and issues that need to be considered for impact evaluations to be used in policy and practice, regardless of the method employed. This Note synthesises research by ODI, ALNAP, 3ie and others to outline eight key lessons for consideration by all those with an interest in impact evaluation and aid effectiveness. 8 pages.

The 8 lessons:
Lesson 1:  Understand the key stakeholders
Lesson 2:  Adapt the incentives
Lesson 3:  Invest in capacities and skills
Lesson 4:  Define impact in ways that relate to the specific context
Lesson 5:  Develop the right blend of methodologies
Lesson 6:  Involve those who matter in the decisions that matter
Lesson 7:  Communicate effectively
Lesson 8:  Be persistent and flexible

See also Ben’s Thursday, April 14, 2011 blog posting: When will we learn how to learn?

[RD comments on this paper]

1. The case for equal respect for different methodologies can be overstated. I feel this is the case when Ben argues that “First, it has been shown that the knowledge that results from any type of particular impact evaluation methodology is no more rigorous or widely applicable than the results from any other kind of methodology.” While it is important that evaluation results affect subsequent policy and practice, their adoption and use is not the only outcome measure for evaluations. We also want evaluation results to have some reliability and validity, so that they will stand the test of time and be generalisable to other settings with some confidence. An evaluation could affect policy and practice without necessarily being of good quality, defined in terms of reliability and validity.

• Nevertheless, I like Ben’s caution about focusing too much on evaluations as outputs and the need to focus more on outcomes: the use and uptake of evaluations.

2. The section of Ben’s paper that most attracted my interest was the story about the Joint Evaluation of Emergency Assistance to Rwanda, and how the evaluation team managed to ensure it became “one of the most influential evaluations in the aid sector”. We need more case studies of these kinds of events, and then a systematic review of those case studies.

3. When I read various statements like this: “As well as a supply of credible evidence, effort needs to be made to understand the demand for evidence”, I have an image in my mind of evaluators as humble supplicants at the doorsteps of the high and mighty. Isn’t it about time that evaluators turned around and started demanding that policy makers disclose the evidence base of their existing policies? As I am sure has been said by others before, when you look around there does not seem to be much evidence of evidence-based policy making. Norms and expectations need to be built up, and then there may be more interest in what evaluations have to say. A more assertive and questioning posture is needed.

    Behavioral economics and randomized trials: trumpeted, attacked and parried

    This is the title of a blog posting by Chris Blattman, which points to and comments on a debate in the Boston Review, March/April 2011.

    The focus of the debate is an article by Rachel Glennerster and Michael Kremer, titled Small Changes, Big Results: Behavioral Economics at Work in Poor Countries.

    “Behavioral economics has changed the way we implement public policy in the developed world. It is time we harness its approaches to alleviate poverty in developing countries as well.”

    This article is part of Small Changes, Big Results, a forum on applying behavioral economics to global development. This includes the following 7 responses to Glennerster and Kremer, and their response.

    Diane Coyle: There’s nothing irrational about rising prices and falling demand. (March 14)

    Eran Bendavid: Randomized trials are not infallible—just look at medicine. (March 15)

    Pranab Bardhan: As the experimental program becomes its own kind of fad, other issues in development are being ignored. (March 16)

    José Gómez-Márquez: We want to empower locals to invent, so they can be collaborators, not just clients. (March 17)

    Chloe O’Gara: You can’t teach a child to read with an immunization schedule. (March 17)

    Jishnu Das, Shantayanan Devarajan, and Jeffrey S. Hammer: Even if experiments show us what to do, can we rely on government action? (March 18)

    Daniel N. Posner: We cannot hope to understand individual behavior apart from the community itself. (March 21)

    Rachel Glennerster and Michael Kremer reply: Context is important, and meticulous experimentation can improve our understanding of it. (March 22)

    PS (26th March 2011): See also Ben Goldacre’s Bad Science column in today’s Guardian: Unlikely boost for clinical trials (also titled ‘When ethics committees kill’)

    “At present there is a bizarre paradox in medicine. When there is no evidence on which treatment is best, out of two available options, then you can choose one randomly at will, on a whim, in clinic, and be subject to no special safeguards. If, however, you decide to formally randomise in the same situation, and so generate new knowledge to improve treatments now and in the future, then suddenly a world of administrative obstruction opens up before you.

    This is not an abstract problem. Here is one example. For years in A&E, patients with serious head injury were often treated with steroids, in the reasonable belief that this would reduce swelling, and so reduce crushing damage to the brain, inside the fixed-volume box of your skull.

    Researchers wanted to randomise unconscious patients to receive steroids, or no steroids, instantly in A&E, to find out which was best. This was called the CRASH trial, and it was a famously hard fought battle with ethics committees, even though both treatments – steroids, or no steroids – were in widespread, routine use. Finally, when approval was granted, it turned out that steroids were killing patients.”

    Five challenges facing impact evaluation

    PS 2018 02 23: The original NONIE Meeting 2011 website no longer exists. Use this reference, if needed: White, H. (2011) ‘Five challenges facing impact evaluation’, NONIE (http://nonie2011.org/?q=content/post-2).

    “There has been enormous progress in impact evaluation of development interventions in the last five years. The 2006 CGD report When Will We Ever Learn? claimed that there was little rigorous evidence of what works in development. But there has been a huge surge in studies since then. By our count, there are over 800 completed and on-going impact evaluations of socio-economic development interventions in low- and middle-income countries.

    But this increase in numbers is just the start of the process of ‘improving lives through impact evaluation’, which was the sub-title of the CGD report and has become 3ie’s vision statement. Here are five major challenges facing the impact evaluation community:

    1. Identify and strengthen processes to ensure that evidence is used in policy: studies are not an end in themselves, but a means to the end of better policy, programs and projects, and so better lives. At 3ie we are starting to document cases in which impact evaluations have, and have not, influenced policy to better understand how to go about this. DFID now requires evidence to be provided to justify providing support to new programs, an example which could be followed by other agencies.

    2. Institutionalize impact evaluation: the development community is very prone to faddism. Impact evaluation could go the way of other fads and fall into disfavour. We need to demonstrate the usefulness of impact evaluation to help prevent this happening, hence my first point. But we also need to take steps to institutionalize the use of evidence in governments and development agencies. This step includes ensuring that ‘results’ are measured by impact, not outcome monitoring.

    3. Improve evaluation designs to answer policy-relevant questions: quality impact evaluations embed the counterfactual analysis of attribution in a broader analysis of the causal chain, allowing an understanding of why interventions work, or not, and yielding policy-relevant messages for better design and implementation. There have been steps in this direction, but researchers need a better understanding of the approach and to genuinely embrace mixed methods in a meaningful way.

    4. Make progress with small n impact evaluations: we all accept that we should be issues-led, not methods-led, and use the most appropriate method for the evaluation questions at hand. But the fact is that there is far more consensus about the evaluation of large n interventions, in which experimental and quasi-experimental approaches can be used, than there is about the approach to be used for small n interventions. If the call to base development spending on evidence of what works is to be heeded, then the development evaluation community needs to move to consensus on this point.

    5. Expand knowledge and use of systematic reviews: single impact studies will also be subject to criticisms of weak external validity. Systematic reviews, which draw together evidence from all quality impact studies of a particular intervention in a rigorous manner, give stronger, more reliable messages. There has been an escalation in the production of systematic reviews in development in the last year. The challenge is to ensure that these studies are policy relevant and used by policy makers.”
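
The statistical intuition behind ‘stronger, more reliable messages’ can be sketched with a toy calculation: pooling several imprecise estimates by inverse-variance weighting gives a more precise combined estimate. The Python sketch below is purely illustrative, with made-up numbers and a simple fixed-effect model; real systematic reviews involve much more (search strategy, quality appraisal, heterogeneity checks, random-effects models), and this is not how 3ie or any particular review computes its results.

```python
# Illustrative fixed-effect meta-analysis: an inverse-variance weighted average
# of effect estimates from several (hypothetical) impact studies.
from math import sqrt

# (effect estimate, standard error) from hypothetical studies of one intervention
studies = [(0.20, 0.10), (0.05, 0.08), (0.15, 0.12)]

weights = [1 / se**2 for _, se in studies]                      # precision weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```

The pooled standard error is smaller than that of any single study, which is the formal sense in which a review can speak with more confidence than its individual components.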

    Learners, practitioners and teachers: Handbook on monitoring, evaluating and managing knowledge for policy influence

    Authors: Vanesa Weyrauch, Julia D´Agostino, Clara Richards
    Date Published: 11 February 2011, by CIPPEC. Available as pdf.

    Description: Evidence-based policy influence is a topic of growing interest to researchers, social organizations, experts, government officials, policy research institutes and universities. Yet all acknowledge that the path from the production of a piece or body of research to a public policy is winding, fuzzy and forked. In this context, it is not surprising that the practice of monitoring and evaluation (M&E) of policy influence in Latin America is limited, and that knowledge management (KM) of the experiences of advocacy organizations in the region is similarly underdeveloped. Incorporating the monitoring, evaluation and management of knowledge into the daily practices of policy research institutes is well worth the effort. On the one hand, these tools can be a smart strategy for enhancing the impact of research on public policy; on the other, they can help institutes strengthen their reputation and visibility, attracting more and better support from donors. In turn, the design of an M&E system and the beginnings of a KM culture, if approached with a genuine interest in learning, can become a valuable source of knowledge and motivation for members of the organization. In short, these practices can help organizations better target their activities, decide where and how to invest resources, and formulate more realistic and accurate strategic plans. With this handbook, CIPPEC aims to support organizations in monitoring and evaluating their interventions and in developing systematic strategies for knowledge management. It includes accounts of previous experiences in these fields in Latin America, reflections on the most common challenges and opportunities, and concrete working tools. These contributions aim to pave the way for policy research to influence public policy in the region.

    Nature Editorial: To ensure their results are reproducible, analysts should show their workings.

    See Devil in the Details, Nature, Volume 470, Pages 305–306, 17 February 2011.

    How many aid agencies could do the same, when their projects manage to deliver good results? Are there lessons to be learned here?

    Article text:

    As analysis of huge data sets with computers becomes an integral tool of research, how should researchers document and report their use of software? This question was brought to the fore when the release of e-mails stolen from climate scientists at the University of East Anglia in Norwich, UK, generated a media fuss in 2009, and has been widely discussed, including in this journal. The issue lies at the heart of scientific endeavour: how detailed an information trail should researchers leave so that others can reproduce their findings?

    The question is perhaps most pressing in the field of genomics and sequence analysis. As biologists process larger and more complex data sets and publish only the results, some argue that the reporting of how those data were analysed is often insufficient.
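
One general way of ‘showing your workings’, sketched below purely as an illustration (the file names and record fields are placeholders, not anything prescribed by the editorial), is to save a small provenance record alongside each result: software and platform versions, the random seed, and a checksum of the input data, so that others can attempt to rerun the analysis.

```python
# Illustrative provenance record: capture the details a reader would need to
# try to reproduce an analysis. File names and fields are placeholders.
import hashlib
import json
import platform
import random
import sys
from datetime import datetime, timezone

def file_sha256(path):
    """Checksum of the input data, so readers know exactly which file was analysed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(data_path, seed, packages):
    """Collect environment and input details to store alongside the analysis outputs."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python": sys.version,
        "platform": platform.platform(),
        "random_seed": seed,
        "packages": packages,  # e.g. versions the analyst recorded by hand or from pip
        "data_sha256": file_sha256(data_path),
    }

if __name__ == "__main__":
    # Create a tiny placeholder data file so the example runs end to end.
    with open("example_data.csv", "w") as f:
        f.write("village,enrolment_rate\nA,0.62\nB,0.71\n")
    seed = 20110217
    random.seed(seed)  # fix the seed used by any stochastic analysis steps
    record = provenance_record("example_data.csv", seed, {"example-package": "0.0.0"})
    with open("analysis_provenance.json", "w") as f:
        json.dump(record, f, indent=2)
    print(json.dumps(record, indent=2))
```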

    Impact Evaluation Conference: “Mind the Gap”: From Evidence to Impact

    Date: June 15-17 2011
    Venue: Cuernavaca, Mexico

    Each year billions of dollars are spent on tackling global poverty. Development programs and policies are designed to build sustainable livelihoods and improve lives. But is there real evidence to show which programs work and why? Are government and donor policies based on concrete and credible evidence?

    The Mind the Gap conference on impact evaluation will address these questions and offer possible solutions. With a focus on Latin American countries, the conference will take place in Cuernavaca, Mexico, June 15-17, 2011. It is co-hosted by the International Initiative for Impact Evaluation (3ie), the National Institute of Public Health of Mexico (INSP), the Inter-American Development Bank (IADB) and the Center for Distributive, Labor and Social Studies, in coordination with the Impact Evaluation Network and the Poverty and Economic Policy Network (CEDLAS-IEN-PEP).

    This conference will provide a platform to share and discuss experiences on how to best achieve evidence-based policy in sectors that are highly relevant for Latin America. To this end, the conference will mainstream a policy-focus into all its activities. The plenary sessions will address the challenges and progress made in building evidence into policy-making processes. The sector-focused sessions will be asked to address the engagement of stakeholders and policy-makers in the various studies presented. The conference will be preceded by a range of pre-conference clinics tailored to the interests and needs of both researchers and program managers.

    The conference will accommodate only 400 attendees. The official languages of the conference are Spanish and English. Simultaneous translation will be provided for all conference sessions. Please register early to secure your attendance. Registration will open March 1st, 2011. Early bird rates will be offered.

    Check the conference website often for up-to-date conference information: http://www.impactevaluation2011.org/

    Bursaries are being made available to developing country participants with a proven interest in impact evaluation.

    Bursary applications will open March 1st, giving preference to authors of accepted abstracts.
