Impact evaluation of natural resource management research programs: a broader view


by  John Mayne and Elliot Stern
ACIAR IMPACT ASSESSMENT SERIES 84, 2013
Available as pdf

Foreword

Natural resource management research (NRMR) has a key role in improving food security and reducing poverty and malnutrition. NRMR programs seek to modify natural systems in a sustainable way in order to benefit the lives of those who live and work within these natural systems—especially in rural communities in the developing world.

Evaluating the effectiveness of NRMR through the usual avenues of impact evaluation has posed distinct challenges. Many impact assessments focus on estimating net economic benefits from a project or program, and often are aimed at providing evidence to investors that their funds have been well spent. They have tended to focus on a specific causal evaluation issue: to what extent can a specific (net) impact be attributed to the intervention?

While many evaluations of NRMR programs and their projects will continue to use an impact assessment perspective, this report lays out a complementary approach to NRMR program evaluation. The approach focuses more on helping NRMR managers and stakeholders to learn about their interventions and to understand why and how outcomes and impacts have been realised (or, in some cases, have not). Thus, a key aim here is to position NRMR impact evaluation as a learning process undertaken to improve the delivery and effectiveness of NRMR programs by developing a new framework for thinking about and designing useful and practical evaluations.

The emphasis on learning follows from the view of NRMR as operating under dynamic, emergent, complex and often unpredictable human and ecological conditions. In such a setting, adaptive management informed by careful responses to new information and understanding is essential for building and managing more-effective programs and interventions. This is highlighted by examining some specific examples: the CGIAR Research Program on Aquatic Agricultural Systems (led by WorldFish), CGIAR’s Ganges Basin Development Challenge, and CSIRO–AusAID’s African Food Security Initiative.

The alternative approach presented here is another tool to use in the search for understanding of how and why impacts occur in a research, development and extension environment. We hope that the learning-orientated evaluation described will help elucidate more soundly based explanations that will guide researchers in replicating, scaling up and improving future programs.

Impact Evaluation Toolkit: Measuring the Impact of Results Based Financing on Maternal and Child Health

Christel Vermeersch, Elisa Rothenbühler, Jennifer Renee Sturdy, for the World Bank
Version 1.0. June 2012

Download full document: English [PDF, 3.83MB] / Español [PDF, 3.47MB] / Français [PDF, 3.97MB]

View online: http://www.worldbank.org/health/impactevaluationtoolkit

“The Toolkit was developed with funding from the Health Results Innovation Trust Fund (HRITF). The objective of the HRITF is to design, implement and evaluate sustainable results-based financing (RBF) pilot programs that improve maternal and child health outcomes for accelerating progress towards reaching MDGs 1c, 4 & 5. A key element of this program is to ensure a rigorous and well designed impact evaluation is embedded in each country’s RBF project in order to document the extent to which RBF programs are effective, operationally feasible, and under what circumstances. The evaluations are essential for generating new evidence that can inform and improve RBF, not only in the HRITF pilot countries, but also elsewhere. The HRITF finances grants for countries implementing RBF pilots, knowledge and learning activities, impact evaluations, as well as analytical work.”

Livestreaming of the Impact, Innovation & Learning conference, 26-27 March 2013

(via Xceval)

Dear Friends
You may be interested in following next week’s Impact, Innovation and Learning conference, whose principal panel sessions are being live-streamed. Keynote speakers and panellists include:
  • Bob Picciotto (King’s College, UKES, EES), Elliot Stern (Editor of ‘Evaluation’), Bruno Marchal (Institute of Tropical Medicine, Antwerp), John Grove (Gates Foundation), Ben Ramalingam (ODI), Aaron Zazueta (GEF), Peter Loewe (UNIDO), Martin Reynolds (Open University), Bob Williams, Richard Hummelbrunner (OAR), Patricia Rogers (Royal Melbourne Institute of Technology), Barbara Befani (IDS, EES), Laura Camfield and Richard Palmer-Jones (University of East Anglia), Chris Barnett (ITAD/IDS), Giel Ton (Wageningen University), John Mayne, Jos Vaessen (UNESCO), Oscar Garcia (UNDP), Lina Payne (DFID), Marie Gaarder (World Bank), Colin Kirk (UNICEF), Ole Winckler Andersen (DANIDA)

Impact, Innovation and Learning – live-streamed event, 26-27 March 2013

Current approaches to the evaluation of development impact represent only a fraction of the research methods used in political science, sociology, psychology and other social sciences. For example, systems thinking and complexity science, causal inference models not limited to counterfactual analysis, and mixed approaches with blurred ‘quali-quanti’ boundaries, have all shown potential for application in development settings. Alongside this, evaluation research could be more explicit about its values and its learning potential for a wider range of stakeholders. Consequently, a key challenge in evaluating development impact is mastering a broad range of approaches, models and methods that produce evidence of performance in a variety of interventions in a range of different settings.
The aim of this event, which will see the launch of the new Centre for Development Impact (www.ids.ac.uk/cdi), is to shape a future agenda for research and practice in the evaluation of development impact. While this is an invitation-only event, we will be live-streaming the main presentations from the plenary sessions and panel discussions. If you would like to register to watch any of these sessions online, please contact Tamlyn Munslow in the first instance at t.munslow@ids.ac.uk.
More information at:
http://www.ids.ac.uk/events/impact-innovation-and-learning-towards-a-research-and-practice-agenda-for-the-future
If you are unable to watch the live-streamed sessions, a Watch Again option will be available after the conference.
With best wishes,
Emilie Wilson
Communications Officer
Institute of Development Studies

Rick Davies comment 28 March 2013: Videos of 9 presentations and panels are now available online at http://www.ustream.tv/recorded/30426381

Impact Evaluation: A Discussion Paper for AusAID Practitioners

“There are diverse views about what impact evaluations are and how they should be conducted. It is not always easy to identify and understand good approaches to impact evaluation for various development situations. This may limit the value that AusAID can obtain from impact evaluation.

This discussion paper aims to support appropriate and effective use of impact evaluations in AusAID by providing AusAID staff with information on impact evaluation. It provides staff who commission impact evaluations with a definition, guidance and minimum standards.

This paper, while authored by ODE, is an initiative of AusAID’s Impact Evaluation Working Group. The working group was formed by a sub-group of the Performance and Quality Network in 2011 to provide better coordination and oversight of impact evaluation in AusAID.”

ODE welcomes feedback on this discussion paper at ODE@ausaid.gov.au

DFID’s Approach to Impact Evaluation – Part I

[From Development Impact: News, views, methods, and insights from the world of impact evaluation. Click here https://blogs.worldbank.org/impactevaluations/node/838 to view the full story.]
As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head, Research and Evidence Division.
Development Impact (DI): There has been an increasing interest in impact evaluation (defined as experimental/quasi-experimental analysis of program effects) in DFID. Going forward, what do you see as impact evaluation’s role in how DFID evaluates what it does? How do you see the use of impact evaluation relative to other methods?  
Nick York (NY): The UK has been at the forefront among European countries in promoting the use of impact evaluation in international development, and it is now a very significant part of what we do – driven by the need to make sure our decisions and those of our partners are based on rigorous evidence. We are building prospective evaluation into many of our larger and more innovative operational programmes – we have quite a number of impact evaluations underway or planned, commissioned from our country and operational teams. We also support international initiatives, including 3ie, where the UK was a founder member and a major funder; the Strategic Impact Evaluation Fund with the World Bank, on human development interventions; and NONIE, the network which brings together developing-country experts on evaluation to share experiences on impact evaluation with professionals in the UN, bilateral and multilateral donors.
DI: Given the cost of impact evaluation, how do you choose which projects are (impact) evaluated?
NY: We focus on those which are most innovative – where the evidence base is considered to be weak and needs to be improved – and those which are large or particularly risky. Personally, I think the costs of impact evaluation are relatively low compared to the benefits they can generate, or compared to the costs of running programmes using interventions which are untested or don’t work. I also believe that rigorous impact evaluations generate an output – high-quality evidence – which is a public good, so although the costs to the commissioning organisation can be high, they represent excellent value for money for the international community. This is why 3ie, which shares those costs among several organisations, is a powerful concept.

LINKING MONITORING AND EVALUATION TO IMPACT EVALUATION

Burt Perrin, Impact Evaluation Notes No. 2, April 2012. Rockefeller Foundation and InterAction. Available as pdf

Summary

This is the second guidance note in a four-part series on impact evaluation developed by InterAction with financial support from the Rockefeller Foundation. This note, Linking Monitoring and Evaluation to Impact Evaluation, illustrates the relationship between routine M&E and impact evaluation – in particular, how both monitoring and evaluation activities can support meaningful and valid impact evaluation. M&E has a critical role to play in impact evaluation, such as: identifying when and under what circumstances it would be possible and appropriate to undertake an impact evaluation; contributing essential data for conducting an impact evaluation, such as baseline data of various forms and information about the nature of the intervention; and contributing the information needed to interpret and apply findings from an impact evaluation.
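
The note’s emphasis on baseline data is easiest to appreciate through a difference-in-differences calculation, where the “before” terms come straight from routine monitoring: without them, the estimate below cannot be computed at all. A minimal sketch in Python, using entirely hypothetical numbers (the indicator and values are illustrative, not taken from the guidance note):

```python
# Difference-in-differences: the impact estimate is the change in the treated
# group minus the change in the comparison group over the same period.
# All numbers below are hypothetical.

baseline = {"treated": 42.0, "comparison": 40.0}  # indicator before the intervention, from routine M&E
endline = {"treated": 61.0, "comparison": 48.0}   # same indicator after the intervention

change_treated = endline["treated"] - baseline["treated"]           # 19.0
change_comparison = endline["comparison"] - baseline["comparison"]  # 8.0

impact_estimate = change_treated - change_comparison                # 11.0
print(f"Difference-in-differences impact estimate: {impact_estimate:.1f} points")
```

Subtracting the comparison group’s change strips out change that would have happened anyway; the two baseline terms are exactly the kind of data that routine monitoring is positioned to supply.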

Contents
Introduction
1. How can monitoring and other forms of evaluation support impact evaluation?
1.1. Main characteristics of monitoring, evaluation, and impact evaluation
1.2. How M&E can contribute to impact evaluation
2. How to build impact evaluation into M&E thinking and practices
2.1. Articulate the theory of change
2.2. Identify priorities for undertaking impact evaluation
2.3. Identify information/data needs
2.4. Start with what you have
2.5. Design and implement the impact evaluation, analyze and interpret the findings
2.6. Use the findings
2.7. Review, reflect, and update
3. Engaging all parts of the organization
3.1. M&E: A core management function requiring senior management leadership and support
3.2. An active role for program staff is required
Summary
References and Other Useful Resources
Annex 1 – Contribution analysis


The Impact of Economics Blogs

David McKenzie (World Bank, BREAD, CEPR and IZA) and Berk Özler (World Bank). Policy Research Working Paper 5783. August 2011. Available as pdf. See also the authors’ blog about this paper.

Introduction: Practically nonexistent a decade ago, blogs by economic scholars have become commonplace. Economics blogs, such as Freakonomics, Marginal Revolution, and those of Paul Krugman and Greg Mankiw, have built large followings – whether measured by subscriptions in Google Reader or by average daily page views (1). Cowen (2008) argues that blogs are the main way that the general public consumes economics in a given day and guesstimates that “…about 400,000 people are reading economics blogs and digesting them” on a daily basis.

These blogs not only give their creators an outlet to disseminate their ideas and work immediately in a format that is more accessible, but also enable instant feedback, are easy to share on the open web, and allow the bloggers a personal style rather than the inaccessible format of academic journals (Glenn, 2003; Dunleavy and Gilson 2011).

Our motivation in examining the impact of economics blogs stems from two observations about blogs and questions that arise from these. First, it seems fair to state that “…informing is the core business of blogging.” (McKenna and Pole 2008, p. 102) This leads to the question of whether blogs improve the dissemination of research findings and whether their readers are indeed more informed (2). On the one hand, coupling the large readership of blogs with the argument of Cowen (2008) that the best ones are written at a level far higher than that of any major newspaper offers the promise that economics blogs may have sizeable effects on the dissemination of economic research and on the knowledge and attitudes of their readers.

Impact Evaluation for Development: Principles for Action

IE4D Group, January 2011. Available as pdf

“The authors of this paper come from a variety of perspectives. As scholars, practitioners, and commissioners of evaluation in development, research and philanthropy, our thematic interests, disciplines, geographic locale, and experiences may differ but we share a fundamental belief that evaluative knowledge has the potential to contribute to positive social change.

We know that the full potential of evaluation is not always (or even often) realized in international development and philanthropy. There are many reasons for this – some to do with a lack of capacity, some methodological, some due to power imbalances, and some the result of prevailing incentive structures. Evaluation, like development, needs to be an open and dynamic enterprise. Some of the current trends in evaluation, especially in impact evaluation in international development, limit unnecessarily the range of approaches to assessing the impact of development initiatives.

We believe that impact evaluation needs to draw from a diverse range of approaches if it is to be useful in a wide range of development contexts, rigorous, feasible, credible, and ethical.

Developed with support from the Rockefeller Foundation, this article is a contribution to ongoing global and regional discussions about ways of realizing the potential of impact evaluation to improve development and strengthening our commitment to work towards it.”

Patricia Rogers is Professor of Public Sector Evaluation at the Royal Melbourne Institute of Technology, Australia. Her work focuses on credible and useful evaluation methods, approaches and systems for complicated and complex programs and policies.
Sanjeev Khagram is a professor of public affairs and international studies at the University of Washington as well as the Lead Steward of Innovations for Scaling Impact (iScale).
David Bonbright is founder and Chief Executive of Keystone (U.K., U.S. and South Africa), which helps organizations develop new ways of planning, measuring and reporting social change. He has also worked for the Aga Khan Foundation, Ford Foundation and Ashoka.
Sarah Earl is Senior Program Specialist in the Evaluation Unit at the International Development Research Centre (Canada). Her interest is ensuring that evaluation and research realize their full potential to contribute to positive social change.
Fred Carden is Director of Evaluation at the International Development Research Centre (Canada). His particular expertise is in the development and adaptation of evaluation methodology for the evaluation of development research.
Zenda Ofir is an international evaluation specialist, past President of the African Evaluation Association (AfrEA), former board member of the American Evaluation Association and the NONIE Steering Committee, and evaluation advisor to a variety of international organizations.
Nancy MacPherson is the Managing Director for Evaluation at the Rockefeller Foundation based in New York. The Foundation’s Evaluation Office aims to strengthen evaluative practice in philanthropy and development by supporting rigorous, innovative and context appropriate approaches to evaluation and learning.

Measuring Impact on the Immeasurable? Methodological Challenges in Evaluating Democracy and Governance Aid

by Jennifer Gauck, University of Kent, Canterbury – Department of Politics, 2011. APSA 2011 Annual Meeting Paper. Available as pdf

Abstract:

“Recent debates over the quality, quantity and purpose of development aid have led to a renewed emphasis on whether, and in what circumstances, aid is effective in achieving development outcomes. A central component of determining aid effectiveness is the conduct of impact evaluations, which assess the changes that can be attributed to a particular project or program. While many impact evaluations use a mixed-methods design, there is a perception that randomized control trials (RCTs) are promoted as the “gold standard” in impact evaluation. This is because the randomization process minimizes selection bias, allowing for the key causal variables leading to the outcome to be more clearly identified. However, many development interventions cannot be evaluated via RCTs because the nature of the intervention does not allow for randomization with a control group or groups.”

“This paper will analyze the methodological challenges posed by aid projects whose impacts cannot be evaluated using randomized control trials, such as certain democracy and governance (D&G) interventions. It will begin with a discussion of the merits and drawbacks of cross-sectoral methods and techniques commonly used to assess impact across a variety of aid interventions, including RCTs, and how these methods typically combine in an evaluation to tell a persuasive causal story. This paper will then survey the methods different aid donors are using to evaluate the impact of projects that cannot be randomized, such as governance-strengthening programs aimed at a centralized public-sector institution. Case studies will be drawn from examples in Peru and Indonesia, among others. This paper will conclude by analyzing how current methodological emphases in political science can be applied to impact evaluation processes generally, and to D&G evaluations specifically.”
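
The abstract’s point that randomization minimizes selection bias can be made concrete with a small simulation (entirely hypothetical numbers, not drawn from the paper): when units opt into a programme based on a characteristic that also drives outcomes, a naive comparison of treated and untreated means overstates the effect, while random assignment recovers the true effect on average.

```python
import random

random.seed(1)
TRUE_EFFECT = 5.0
N = 10_000

# Each unit has an underlying "capacity" that raises its outcome and, under
# self-selection, also makes programme take-up more likely.
capacities = [random.gauss(0, 10) for _ in range(N)]

def outcome(capacity, treated):
    return 50 + capacity + (TRUE_EFFECT if treated else 0) + random.gauss(0, 5)

# Self-selection: higher-capacity units opt in.
self_selected = [(c, c > 0) for c in capacities]
# Randomization: assignment by coin flip, independent of capacity.
randomized = [(c, random.random() < 0.5) for c in capacities]

def difference_in_means(assignments):
    treated = [outcome(c, True) for c, t in assignments if t]
    control = [outcome(c, False) for c, t in assignments if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"True effect:              {TRUE_EFFECT:.1f}")
print(f"Naive, self-selected:     {difference_in_means(self_selected):.1f}")  # inflated by selection bias
print(f"Randomized difference:    {difference_in_means(randomized):.1f}")     # close to the true effect
```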

RD Comment: See also the 3ie webpage on Useful resources for impact evaluations in governance, which includes a list of relevant books, reports, papers, impact evaluations, systematic reviews, survey modules/tools and websites.

Micro-Methods in Evaluating Governance Interventions

This paper is available as a pdf.  It should be cited as follows: Garcia, M. (2011): Micro-Methods in Evaluating Governance Interventions. Evaluation Working Papers. Bonn: Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung.

The aim of this paper is to present a guide to impact evaluation methodologies currently used in the field of governance. It provides an overview of a range of evaluation techniques – focusing specifically on experimental and quasi-experimental designs. It also discusses some of the difficulties associated with the evaluation of governance programmes and makes suggestions with the aid of examples from other sectors. Although it is far from being a review of the literature on all governance interventions where rigorous impact evaluation has been applied, it nevertheless seeks to illustrate the potential for conducting such analyses.
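
For a flavour of the quasi-experimental family the paper surveys, the sketch below implements nearest-neighbour matching on a single observed covariate (the data and covariate are hypothetical, not taken from the paper): each treated unit is paired with the most similar untreated unit, and the averaged outcome gaps estimate the effect on the treated.

```python
# Nearest-neighbour matching on one observed covariate (illustrative only).
# Each record: (covariate, treated_flag, outcome)
data = [
    (0.2, True, 8.0), (0.7, True, 12.5), (0.5, True, 10.1),
    (0.1, False, 6.8), (0.6, False, 9.9), (0.8, False, 11.0), (0.4, False, 8.5),
]

treated = [d for d in data if d[1]]
control = [d for d in data if not d[1]]

effects = []
for x_t, _, y_t in treated:
    # Match each treated unit to the control unit with the closest covariate value.
    x_c, _, y_c = min(control, key=lambda c: abs(c[0] - x_t))
    effects.append(y_t - y_c)

att = sum(effects) / len(effects)  # average treatment effect on the treated
print(f"Matched ATT estimate: {att:.2f}")
```

Such an estimate is only as credible as the assumption that selection into treatment is fully captured by the observed covariates – exactly the kind of assumption the paper weighs when discussing governance interventions.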

This paper has been produced by Melody Garcia, economist at the German Development Institute (Deutsches Institut für Entwicklungspolitik, DIE). It is a part of a two-year research project on methodological issues related to evaluating budget support funded by the BMZ’s evaluation division. The larger aim of the project is to contribute to the academic debate on methods of policy evaluation and to the development of a sound and theoretically grounded approach to evaluation. Further studies are envisaged.
