Quantitative and Qualitative Methods in Impact Evaluation and Measuring Results

Governance and Social Development Resource Centre. Issues Paper by Sabine Garbarino and Jeremy Holland March 2009

1 Introduction
There has been renewed interest in impact evaluation in recent years amongst development agencies and donors. Additional attention was drawn to the issue recently by a Center for Global Development (CGD) report calling for more rigorous impact evaluations, where ‘rigorous’ was taken to mean studies which tackle the selection bias aspect of the attribution problem (CGD, 2006). This argument was not universally well received in the development community, partly because of the mistaken belief that supporters of rigorous impact evaluations were pushing for an approach based solely on randomised control trials (RCTs). While ‘randomisers’ have appeared to gain the upper hand in many of the debates, particularly in the United States, the CGD report in fact recognises a range of approaches, and the entity set up as a result of its efforts, 3ie, is moving even more strongly towards mixed methods (White, nd). The Department for International Development (DFID) in its draft policy statements similarly stresses the opportunities arising from a synthesis of quantitative and qualitative approaches in impact evaluation. Other work underway on ‘measuring results’ and ‘using numbers’ recognises the need to find standard indicators which capture non-material impacts and which are sensitive to social difference. This work also stresses the importance of supplementing standard indicators with narrative that can capture those dimensions of poverty that are harder to measure. This paper contributes to the ongoing debate on ‘more and better’ impact evaluations by highlighting experience of combining qualitative and quantitative methods for impact evaluation to ensure that we:

1. measure the differential impact of donor interventions on different groups of people; and

2. measure the different dimensions of poverty, particularly those that are not readily quantified but which poor people themselves identify as important, such as dignity, respect, security and power.

A third framing question was added during the discussions with DFID staff on the use of the research process itself as a way of increasing accountability and empowerment of the poor.

This paper does not intend to provide a detailed account of different approaches to impact evaluation nor an overview of proposed solutions to specific impact evaluation challenges. Instead it defines and reviews the case for combining qualitative and quantitative approaches to impact evaluation. An important principle that emerges in this discussion is that of equity, or what McGee (2003, 135) calls ‘equality of difference’. By promoting various forms of mixing we are moving methodological discussion away from a norm in development research in which qualitative research plays ‘second fiddle’ to conventional empiricist investigation. This means, for example, that contextual studies should not be used simply to confirm or ‘window dress’ the findings of non-contextual surveys. Instead they should play a more rigorous role of observing and evaluating impacts, even replacing, when appropriate, large-scale and lengthy surveys that can ‘overgenerate’ information in an untimely fashion for policy audiences.

The remainder of the paper is structured as follows. Section 2 briefly sets the scene by summarising the policy context. Section 3 clarifies the terminology surrounding qualitative and quantitative approaches, including participatory research. Section 4 reviews options for combining and sequencing qualitative and quantitative methods and data and looks at recent methodological innovations in measuring and analysing qualitative impacts. Section 5 addresses the operational issues to consider when combining methods in impact evaluation. Section 6 briefly concludes.

A CAN OF WORMS? IMPLICATIONS OF RIGOROUS IMPACT EVALUATIONS FOR DEVELOPMENT AGENCIES

Eric Roetman, International Child Support, Email: eric.roetman@ics.nl

3ie Working Paper 11, March 2011. Found courtesy of @txtpablo

Abstract
“Development agencies are under great pressure to show results and evaluate the impact of projects and programmes. This paper highlights the practical and ethical dilemmas of conducting impact evaluations for NGOs (Non Governmental Organizations). Specifically the paper presents the case of the development organization, International Child Support (ICS). For almost a decade, all of ICS’ projects in West Kenya were evaluated through rigorous, statistically sound, impact evaluations. However, as a result of logistical and ethical dilemmas ICS decided to put less emphasis on these evaluations. This particular case shows that rigorous impact evaluations are more than an additional step in the project cycle; impact evaluations influence every step of the programme and project design. These programmatic changes, which are needed to make rigorous impact evaluations possible, may go against the strategy and principles of many development agencies. Therefore, impact evaluations not only require additional resources but also present organizations with a dilemma if they are willing to change their approach and programmes.”

[RD comment: I think this abstract is somewhat misleading. My reading of the story in this paper is that ICS’s management made some questionable decisions, not that there was something intrinsically questionable about rigorous impact evaluations per se. In the first half of the story the ICS management allowed researchers, and their methodological needs, to drive ICS programming decisions, rather than to serve and inform programming decisions. In the second half of the story the evidence from some studies of the efficacy of particular forms of participatory development seems to have been overridden by the sheer strength of ICS’s beliefs in the primacy of participatory approaches. Of course this would not be the first time that evidence has been sidelined when an organisation’s core values and beliefs are threatened.]

Randomised controlled trials, mixed methods and policy influence in international development – Symposium

Thinking out of the black box. A 3ie-LIDC Symposium
Date: 17:30 to 19:30 Monday, May 23rd 2011
Venue: John Snow Lecture Theatre, London School of Hygiene and Tropical Medicine (LSHTM) Keppel Street, London, WC1E 7HT

Professor Nancy Cartwright, Professor of Philosophy, London School of Economics
Professor Howard White, Executive Director, 3ie
Chair: Professor Jeff Waage, Director, LIDC

Randomised Controlled Trials (RCTs) have moved to the forefront of the development agenda to assess development results and the impact of development programs. In the words of Esther Duflo, one of the strongest advocates of RCTs, RCTs allow us to know which development efforts help and which cause harm.

But RCTs are not without their critics, with questions raised about their usefulness, both in providing more substantive lessons about the program being evaluated and in whether the findings can be generalized to other settings.

This symposium brings perspectives from the philosophy of science, and from a mixed-method approach to impact analysis, to this debate.

ALL WELCOME
For more information contact: 3ieuk@3ieimpact.org

PS1: Nancy Cartwright wrote “Are RCTs the Gold Standard?” in 2007

PS2: The presentation by Howard White is now available here – http://tinyurl.com/3dwlqwn – but without audio