UK Independent Commission for Aid Impact – Work Plan

Independent Commission for Aid Impact – Work Plan, and the associated Press Release (12 May 2011)

1. This document introduces the Independent Commission for Aid Impact’s first work plan, setting out the reports we envisage initiating over the next three years, from May 2011 to May 2014.

2. Our mandate permits us to examine all UK Government programmes funded by Official Development Assistance expenditure. In 2009, this represented £7.4bn, which was spent through bilateral, joint and multilateral processes by the Department for International Development (DFID) and at least eight other branches of Government. Under the Government’s current plans and guided by its recent reviews of bilateral, multilateral and humanitarian work, this expenditure is due to rise significantly and will change in focus. This range of projects and programmes gives us significant discretion in choosing where to focus the attention of our reports.” ..continues..

See also: FRAMEWORK AGREEMENT BETWEEN THE DEPARTMENT FOR INTERNATIONAL DEVELOPMENT (DFID) AND THE INDEPENDENT COMMISSION FOR AID IMPACT (ICAI). This document sets out the broad framework within which the ICAI will operate as a permanent body (12 May 2011 – 11 May 2015). The Agreement is signed by the Chief Commissioner of the ICAI and DFID. This document, and any future revisions, will be made public on the ICAI website.

[RD Comments: The workplan has three strands of work:

  • Evaluations: are likely to focus on the sustainable development impact achieved by programmes against initial or updated objectives
  • Value for money reviews: will consider whether objectives have been achieved with the optimal use of resources
  • Investigations: could range from general fact-finding in response to external requests, to assessments of compliance with legal and policy responsibilities and examinations of alleged corruption cases.

Regarding the first strand, the OECD DAC definition of impact is “Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.” One practical way of defining long term would be as any change observed after the completion of a project (typically 3 years). This would seem to be an appropriate focus for the ICAI because in the past DFID has undertaken very few ex-post evaluations. There is a gap here that needs to be addressed, as there is with quite a few other bilateral agencies.

A further justification lies in the useful connection with value for money reviews. Some organisations, like the Global Environment Facility, define impact as “A fundamental and durable change in the condition of people and their environment brought about by the project”: in other words, a sustained change. The longer a change is sustained (all other things being equal), the more value for money would seem to have been realised. Assessing impact in the short term (i.e. during the project implementation period) risks understating impact and the associated value for money.
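
To make the arithmetic behind this point explicit (an illustrative sketch only; the cost and benefit notation is mine, not taken from either document): suppose a project costs C and produces benefits worth roughly b per year for T years after it ends. A crude value-for-money ratio is then VfM ≈ (b × T) / C. The longer the change is sustained, the larger T and hence the larger the ratio; an evaluation carried out during implementation effectively observes a T close to zero and will therefore tend to understate both impact and value for money.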

The question that then arises in my mind is to what extent the ICAI programme of evaluations will be focused on projects that have been completed, versus those which are still being implemented. I will be asking the ICAI if they could provide an answer, for example in the form of the percentage of completed versus incomplete projects to be examined in each of the 8 evaluations to be undertaken in year 1]

PS 24 May 2011: See also Howard White’s related question about the ICAI’s use of ex-post and ex-ante evaluations. There he seems to be arguing against ex-post evaluations: “There is a question as to whether the commission restricts itself to ex-post evaluations, done once the intervention is being implemented or completed. Or can it engage in ex-ante designs before the intervention has started? Designing the evaluation prior to the launch of a programme, and collecting baseline data, generally delivers more robust findings.”

This seems like the method tail wagging the programme development dog. Or, looking for a lost wallet under a lamppost. The potential for rigour should not determine what gets evaluated. What gets evaluated should be decided by more strategic considerations, like the fact that we know very little about the long term effects of most development projects (where long term = after the project intervention ceases).

WRITING TERMS OF REFERENCE FOR AN EVALUATION: A HOW-TO GUIDE

Independent Evaluation Group, World Bank 2011. Available as pdf.

“The terms of reference (ToR) document defines all aspects of how a consultant or a team will conduct an evaluation. It defines the objectives and the scope of the evaluation, outlines the responsibilities of the consultant or team, and provides a clear description of the resources available to conduct the study. Developing an accurate and well-specified ToR is a critical step in managing a high-quality evaluation. The evaluation ToR document serves as the basis for a contractual arrangement with one or more evaluators and sets the parameters against which the success of the assignment can be measured.

The specific content and format for a ToR will vary to some degree based on organizational requirements, local practices, and the type of assignment. However, a few basic principles and guidelines inform the development of any evaluation ToR. This publication provides user-friendly guidance for writing ToRs by covering the following areas:

1. Definition and function. What is a ToR? When is one needed? What are its objectives? This section also highlights how an evaluation ToR is different from other ToRs.
2. Content. What should be included in a ToR? What role(s) will each of the sections of the document serve in supporting and facilitating the completion of a high-quality evaluation?
3. Preparation. What needs to be in place for a practitioner or team to develop the ToR for an evaluation or review?
4. Process. What steps should be taken to develop an effective ToR? Who should be involved for each of these steps?

A quality checklist and some Internet resources are included in this publication to foster good practice in writing ToRs for evaluations and reviews of projects and programs. The publication also provides references and resources for further information.”

[RD Comment: See also: Guidance on Terms of Reference for an Evaluation: A List, listing ToR guidance documents produced by 9 different organisations]

Towards a Plurality of Methods in Project Evaluation: A Contextualised Approach to Understanding Impact Trajectories and Efficacy

Michael Woolcock, January 2009, BWPI Working Paper 73

Abstract
“Understanding the efficacy of development projects requires not only a plausible counterfactual, but an appropriate match between the shape of impact trajectory over time and the deployment of a corresponding array of research tools capable of empirically discerning such a trajectory. At present, however, the development community knows very little, other than by implicit assumption, about the expected shape of the impact trajectory from any given sector or project type, and as such is prone to routinely making attribution errors. Randomisation per se does not solve this problem. The sources and manifestations of these problems are considered, along with some constructive suggestions for responding to them.”

Michael Woolcock is Professor of Social Science and Development Policy, and Research Director of the Brooks World Poverty Institute, at the University of Manchester.

[RD Comment: Well worth reading, more than once]

PS: See also the more recent “Guest Post: Michael Woolcock on The Importance of Time and Trajectories in Understanding Project Effectiveness” on the Development Impact blog, 5th May 2011

Promoting Voice and Choice: Exploring Innovations in Australian NGO Accountability for Development Effectiveness


by Chris Roche, ACFID research paper, 2010

From the Preface

“This research paper represents the latest chapter in a body of work, led by ACFID’s Development Practice Committee (DPC), focused on Australian NGO program quality and effectiveness. Over the past 10 years DPC has engaged the sector in a series of consultations and discrete research phases to define our effectiveness and identify the principles, program strategies, standards of engagement and organisational management practices which underpin it.

The objective of the current research was to capture and share cutting edge practice in demonstrating Australian NGO effectiveness through innovative forms of accountability and social learning, in which the views of those who are ultimately meant to benefit were central. ACFID member agencies participated through submitting examples of their attempts to improve downward accountability.

The findings presented in this report will contribute to ACFID member agencies’ journey of continual improvement of our collective effectiveness. It will do this through engaging with senior NGO managers and AusAID in the analysis of the findings, as well as contributing to the international work on CSO Development Effectiveness. The next research phase will be in partnership with an academic institution to undertake a more rigorous examination of a sample of the case studies and the organisational enablers and obstacles to improving our effectiveness.”

See also Chris Roche’s new guest posting on the (Australian based) Development Policy Centre’s Development Policy Blog, titled “Changing the rules of the game?” In this blog he follows up on issues raised in the above paper.

Learning how to learn: eight lessons for impact evaluations that make a difference

ODI Background Notes, April 2011. Author: Ben Ramalingam

“This Background Note outlines key lessons on impact evaluations, utilisation-focused evaluations and evidence-based policy. While methodological pluralism is seen as the key to effective impact evaluation in development, the emphasis here is not methods per se. Instead, the focus is on the range of factors and issues that need to be considered for impact evaluations to be used in policy and practice – regardless of the method employed. This Note synthesises research by ODI, ALNAP, 3ie and others to outline eight key lessons for consideration by all of those with an interest in impact evaluation and aid effectiveness”. 8 pages.

The 8 lessons:
Lesson 1:  Understand the key stakeholders
Lesson 2:  Adapt the incentives
Lesson 3:  Invest in capacities and skills
Lesson 4:  Define impact in ways that relate to the specific context
Lesson 5:  Develop the right blend of methodologies
Lesson 6:  Involve those who matter in the decisions that matter
Lesson 7:  Communicate effectively
Lesson 8:  Be persistent and flexible

See also Ben’s Thursday, April 14, 2011 blog posting: When will we learn how to learn?

[RD comments on this paper]

1. The case for equal respect for different methodologies can be overstated. I feel this is the case when Ben argues that “First, it has been shown that the knowledge that results from any type of particular impact evaluation methodology is no more rigorous or widely applicable than the results from any other kind of methodology.” While it is important that evaluation results affect subsequent policy and practice, their adoption and use is not the only outcome measure for evaluations. We also want those evaluation results to have some reliability and validity, so that they will stand the test of time and be generalisable to other settings with some confidence. An evaluation could affect policy and practice without necessarily being good quality, defined in terms of reliability and validity.

  • Nevertheless, I like Ben’s caution about focusing too much on evaluations as outputs and the need to focus more on outcomes, the use and uptake of evaluations.

2. The section of Ben’s paper that most attracted my interest was the story about the Joint Evaluation of Emergency Assistance to Rwanda, and how the evaluation team managed to ensure it became “one of the most influential evaluations in the aid sector”. We need more case studies of these kinds of events, and then a systematic review of those case studies.

3. When I read various statements like this: “As well as a supply of credible evidence, effort needs to be made to understand the demand for evidence” I have an image in my mind of evaluators as humble supplicants, at the doorsteps of the high and mighty. Isn’t it about time that evaluators turned around and started demanding that policy makers disclose the evidence base of their existing policies? As I am sure has been said by others before, when you look around there does not seem to be much evidence of evidence-based policy making. Norms and expectations need to be built up, and then there may be more interest in what evaluations have to say. A more assertive and questioning posture is needed.

NAO report: DFID Financial Management Report

NAO Press Release 6 April 2011…

“Sound financial management will be essential at the Department for International Development as its spending increases by a third over the next four years, according to the National Audit Office.

The Department has improved its core financial management and has an ambitious programme underway to improve its focus on value for money. It has put important building blocks in place; however its financial management is not yet mature. The Department cannot yet assess important aspects of the value for money of the aid it has delivered, at an aggregated level.

The Department’s programme budget will grow by £3.3 billion from 2010-11 to 2014-15 (34 per cent in real terms). At the same time, its administration budget is going to reduce by a third. The Department will face significant financial and operational challenges, making sound financial management essential.

The Department has increased the number of finance professionals it employs, but this expertise needs to be used more effectively across the business. In addition, new financial information systems do not yet provide the data needed to support well-founded decisions and forecasts are still an area of weakness.

Having conducted a thorough review, the Department now has a high level plan allocating its resources on the basis of the results it aims to achieve. Along with actions to strengthen measurement of aid projects, this has the potential to help strengthen the focus on aid results and value for money. But key risks need to be managed and the Department should now develop a coherent, single strategy for doing so.

With greater spending in higher risk locations and more fragile states, the Department must do more to assure itself that it minimises fraud and corruption risks. Although the level of reported fraud is low, it is likely to be under-reported. The NAO has found that the investigation of fraud is reactive and the Department does not attempt to quantify its estimated likely fraud losses.

Amyas Morse, head of the National Audit Office, said today:

“The Department knows its increase in funding, and new approach to aiding developing countries, brings challenges. This report shows considerable progress is being made, but a better information environment is needed to deal with the heightened levels of assurance required in targeting future aid at higher risk locations.”

[RD comment] The Executive Summary ends with a section titled: Conclusion on value for money, which says:

  • We recognise that the Department has been improving its core financial management and has also been strengthening its focus on value for money at all levels of the organisation, including through a step change in its approach to the strategic allocation of resources based on expected results. Important building blocks have been put in place, but key gaps in financial management maturity remain. The changes the Department has introduced to-date are positive, and provide a platform to address the challenges that will come with its increased spending.
  • At present, however, the Department’s financial management is not mature. The Department’s forecasting remains inaccurate and its risk management is not yet fully embedded. Weaknesses in the measurement of value for money at project level, variability in the quality and coverage of data, and lack of integration in core systems, mean that the Department cannot assess important aspects of value for money of the aid it has delivered, at an aggregated level. The Department now needs to develop a coherent single strategy to address the weaknesses identified and the key risks to meeting its objectives.

Sound expectations: from impact evaluations to policy change

3ie Working Paper #12, 2011, by the Center for the Implementation of Public Policies Promoting Equity and Growth (CIPPEC). Emails: vweyrauch@cippec.org, gdiazlangou@cippec.org

Abstract

“This paper outlines a comprehensive and flexible analytical conceptual framework to be used in the production of a case study series. The cases are expected to identify factors that help or hinder rigorous impact evaluations (IEs) from influencing policy and improving policy effectiveness. This framework has been developed to be adaptable to the reality of developing countries. It is aimed as an analytical-methodological tool which should enable researchers in producing case studies which identify factors that affect and explain impact evaluations’ policy influence potential. The approach should also enable comparison between cases and regions to draw lessons that are relevant beyond the cases themselves.

There are two different, though interconnected, issues that must be dealt with while discussing the policy influence of impact evaluations. The first issue has to do with the type of policy influence pursued and, aligned with this, the determination of the accomplishment (or not) of the intended influence. In this paper, we first introduce the discussion regarding the different types of policy influence objectives that impact evaluations usually pursue, which will ultimately help determine whether policy influence was indeed achieved. This discussion is mainly centered around whether an impact evaluation has had impact on policy. The second issue is related to the identification of the factors and forces that mediate the policy influence efforts and is focused on why the influence was achieved or not. We have identified and systematized the mediating factors and forces, and we approach them in this paper from the demand and supply perspective, considering as well, the intersection between these two.

The paper concludes that, ultimately, the fulfillment of policy change based on the results of impact evaluations is determined by the interplay of the policy influence objectives with the factors that affect the supply and demand of research in the policymaking process.

The paper is divided in four sections. A brief introduction is followed by an analysis of policy influence as an objective of research, specifically, impact evaluations. The third section identifies factors and forces that enhance or undermine influence in public policy decision making. The research ends up pointing out the importance of measuring policy influence and enumerates a series of challenges that have to be further assessed.”

IMPACT AND AID EFFECTIVENESS: Mapping the Issues and their Consequences

[from the IDS Virtual Bulletin, March 2011]

Introduction
In this virtual Bulletin we bring together ten articles dating from across three decades. They all address Impact. From the outset, we note that there are a number of common threads and ideas that stretch across all the articles:

  • The implicit emphasis of all the articles on complexity
  • The breadth and depth of impact analysis, from the national level to the individual
  • The importance of knowing the audience for any evaluation or impact assessment
  • The virtuous cycle that can be created by using insights into impact to adjust interventions
  • The dependency of that virtuous cycle on participation and engagement of programme staff and clients.

What we notice, however, is how the articles framing these issues vary according to discipline and research site. We also see how some ongoing preoccupations have been shaped by their proximity to other debates or policy concerns. Our hope is that hindsight will provide some perspective for practice and policy going forward.
View Full Introduction

Articles
A Revolution Whose Time Has Come? The Win-Win of Quantitative Participatory Approaches and Methods
IDS Bulletin Volume 41, Issue 6, November 2010
Robert Chambers

Impact of Microfinance on Rural Households in the Philippines
IDS Bulletin Volume 39, Issue 1, March 2008
Toshio Kondo, Aniceto Orbeta, Clarence Dingcong and Christine Infantado

‘You Can Get It If You Really Want’: Impact Evaluation Experience of the Office of Evaluation and Oversight of the Inter-American Development Bank
IDS Bulletin Volume 39, Issue 1, March 2008
Inder Jit Ruprah

The Role of Evaluation in Accountability in Donor-Funded Projects
IDS Bulletin Volume 31, Issue 1, January 2000
Adebiyi Edun

Micro-Credit Programme Evaluation: A Critical Review
IDS Bulletin Volume 29, Issue 4, October 1998
Shahidur R. Khandker

Macroeconomic Evaluation of Programme Aid: A Conceptual Framework
IDS Bulletin Volume 27, Issue 4, October 1996
Howard White

Measurement of Poverty and Poverty of Measurement
IDS Bulletin Volume 25, Issue 2, April 1994
Martin Greeley

Developing Effective Study Programmes for Public Administrators
IDS Bulletin Volume 8, Issue 4, May 2009
Ron Goslin

Improving the Effectiveness of Evaluation in Rural Development Projects
IDS Bulletin Volume 8, Issue 1, July 1976
B. H. Kinsey

Managing Rural Development
IDS Bulletin, Volume 6, Issue 1, September 1974
Robert Chambers

Behavioral economics and randomized trials: trumpeted, attacked and parried

This is the title of a blog posting by Chris Blattman, which points to and comments on a debate in the Boston Review, March/April 2011.

The focus of the debate is an article by Rachel Glennerster and Michael Kremer, titled Small Changes, Big Results: Behavioral Economics at Work in Poor Countries

“Behavioral economics has changed the way we implement public policy in the developed world. It is time we harness its approaches to alleviate poverty in developing countries as well.”

This article is part of Small Changes, Big Results, a forum on applying behavioral economics to global development. This includes the following 7 responses to Glennerster and Kremer, and their response.

Diane Coyle: There’s nothing irrational about rising prices and falling demand. (March 14)

Eran Bendavid: Randomized trials are not infallible—just look at medicine. (March 15)

Pranab Bardhan: As the experimental program becomes its own kind of fad, other issues in development are being ignored. (March 16)

José Gómez-Márquez: We want to empower locals to invent, so they can be collaborators, not just clients. (March 17)

Chloe O’Gara: You can’t teach a child to read with an immunization schedule. (March 17)

Jishnu Das, Shantayanan Devarajan, and Jeffrey S. Hammer: Even if experiments show us what to do, can we rely on government action? (March 18)

Daniel N. Posner: We cannot hope to understand individual behavior apart from the community itself. (March 21)

Rachel Glennerster and Michael Kremer reply: Context is important, and meticulous experimentation can improve our understanding of it. (March 22)

PS (26th March 2011): See also Ben Goldacre’s Bad Science column in today’s Guardian: Unlikely boost for clinical trials / When ethics committees kill

“At present there is a bizarre paradox in medicine. When there is no evidence on which treatment is best, out of two available options, then you can choose one randomly at will, on a whim, in clinic, and be subject to no special safeguards. If, however, you decide to formally randomise in the same situation, and so generate new knowledge to improve treatments now and in the future, then suddenly a world of administrative obstruction opens up before you.

This is not an abstract problem. Here is one example. For years in A&E, patients with serious head injury were often treated with steroids, in the reasonable belief that this would reduce swelling, and so reduce crushing damage to the brain, inside the fixed-volume box of your skull.

Researchers wanted to randomise unconscious patients to receive steroids, or no steroids, instantly in A&E, to find out which was best. This was called the CRASH trial, and it was a famously hard fought battle with ethics committees, even though both treatments – steroids, or no steroids – were in widespread, routine use. Finally, when approval was granted, it turned out that steroids were killing patients.”

Common Needs Assessments and humanitarian action

by Richard Garfield, with Courtney Blake, Patrice Chatainger and Sandie Walton-Ellery. HPN Network Paper No. 69, January 2011

“Five years ago, the field of needs assessments resembled a tower of Babel. Each agency had its own unproven survey forms and made their own assessments based on little field-based information. At times there was little discussion between agencies about what constituted the major needs and the best response monitoring approach in a particular emergency.

Funds for emergency humanitarian action have doubled each decade during the last 30 years. Meanwhile, the Good Humanitarian Donorship initiative and humanitarian reform call for greater accountability and effectiveness on the basis of evidence. Without assessing the needs of those affected more accurately, accountability and effectiveness will not be possible. But assessments are often completed far too late, and provide far too little useful information, to guide funding decisions or provide a comparative base for monitoring during recovery. At its best, a common inter-agency, inter-sectoral needs assessment helps to develop a better joint understanding of needs, capabilities, and appropriate response.

Network Paper 69 summarises the basic characteristics of a Common Needs Assessment (CNA), reviews experience in using assessments in recent years and highlights the problems encountered. This paper demonstrates what CNAs can achieve, details their limitations and provides an overview of steps to address common problems. It hopes to produce better, more useful and more timely assessments, contributing to improved humanitarian response.”
