ALNAP 8th Review of Humanitarian Action

The ALNAP Review of Humanitarian Action series aims to advance analysis and understanding of key trends and issues relating to humanitarian learning and accountability as a means of supporting improvement in sector-wide performance. The 8th Review contains three in-depth studies:

Chapter 1: Counting what counts: performance and effectiveness in the humanitarian sector [http://www.alnap.org/pool/files/8rhach1.pdf]

Chapter 2: Improving humanitarian impact assessment: bridging theory and practice [http://www.alnap.org/pool/files/8rhach2.pdf]

Chapter 3: Innovations in international humanitarian action [http://www.alnap.org/pool/files/8rhach3.pdf]

The first study is on humanitarian performance and provides a wide-ranging overview of the performance agenda – at the heart of ALNAP’s work – drawing on experiences from the private, public and development sectors. The second study focuses on improving humanitarian impact assessment, and provides a comprehensive framework to help bridge theory and practice in operational settings. The third study is a systematic review of innovations in international humanitarian response, which presents ways to think about and strengthen innovations across the sector.

Key Messages from ALNAP’s Eighth Review of Humanitarian Action: [http://www.alnap.org/pool/files/8rhakm-eng.pdf]
Key messages in French and Spanish will be available shortly.

Pan African Monitoring and Evaluation Conference

Date: 27 – 31 July 2009
Venue: Premier Hotel, Pretoria, Gauteng, South Africa

Africa’s leaders continue to grapple with service delivery and are looking for ways to improve their capabilities and achieve tangible, sustainable results.

“Now, more than ever, governments are being held accountable to their constituents for their expenditure,” explains Hennie Oosthuizen, CEO of the African Information Institute. “It is prudent for Africa’s leaders to embrace monitoring and evaluation in order for them to accurately assess the quality and impact of their work against their strategic plan.”

Monitoring and Evaluation (M&E) is a public management tool used to improve the way that government and other organizations achieve results. South African President, Jacob Zuma, has prioritised M&E through the establishment of an evaluation, monitoring and planning commission within the presidency, as well as in all government departments from national down to local level.
Continue reading “Pan African Monitoring and Evaluation Conference”

Assessing aid impact: a review of Norwegian evaluation practice

Authors: Espen Villanger and Alf Morten Jerve
Published in: Journal of Development Effectiveness, Volume 1, Issue 2, June 2009, pages 171–194

Note: unfortunately, the full text of this article is behind a paywall.

Abstract

This article reviews recent Norwegian aid evaluations with a mandate to study impact, and assesses how the evaluators establish causal effects. The analytical challenges encountered in the seven studies reviewed are: (1) the Terms of Reference ask for evidence of impact where this is not possible to identify; (2) the distinction between the impacts of the aid element and those of other components is often blurred; and (3) the methodological approaches to identifying impact are either poorly developed or applied superficially. A main conclusion is that most of the evaluators did not have the necessary time or budget to conduct a proper impact evaluation, given the large number of questions raised by the commissioning agency.


NAO Review – DFID: Progress in improving performance management

Publication date: 12 May 2009. Full report (PDF – 366KB)

Executive Summary

1.  This brief review of the Department for International Development’s (DFID) performance management arrangements during 2008 is a follow-up to our 2002 VFM report on the same topic. It responds to a request from DFID’s Accounting Officer to re-visit the topic periodically, which the C&AG agreed would be valuable. It is based on a desk review of main documents, interviews with DFID staff, and a survey of staff and stakeholder views about evaluation (Appendix 3). We did not undertake a full audit of DFID systems, some of which are in the process of being revised, and we concentrated on those areas of DFID activity most directly related to its performance targets. We drew on recent DFID reviews of monitoring and evaluation, and our findings square well with the results of those reviews.
Continue reading “NAO Review – DFID: Progress in improving performance management”

Learning purposefully in capacity development

Why, what and when to measure?

An opinion paper prepared for IIEP by Alfredo Ortiz and Peter Taylor, Institute of Development Studies (IDS), 25 July 2008

>>Full text<<

Abstract

Many capacity development (CD) programs and processes aim at long-term sustainable change, which depends on seeing many smaller changes in at times almost invisible fields (rules, incentives, behaviours, power, coordination etc.). Yet most evaluation processes of CD tend to focus on short-term outputs focused on clearly visible changes. This opinion paper will offer some ideas on how to deal with this paradox, by examining how monitoring and evaluation (M&E) does, or could, make a difference to CD. It explores whether there is something different and unique about M&E of CD that isn’t addressed by predominant methods and ways of thinking about M&E, and which might be better addressed by experimenting with learning-based approaches to M&E of CD.

Contents
1. Introduction: what should monitoring & evaluation (M&E) tell us about capacity development (CD)?
2. Capacity development means and ends: “What are we measuring and when should we measure it?”
2.1. In search of performance and impact
2.2. Standing capacity
3. What can we learn from M&E of CD dilemmas?
3.1. Development being a process already in motion
3.2. Linear versus complex adaptive systems (CAS) thinking, programming and measurement
3.3. Attribution
3.4. Donor accounting focus versus open learning approaches
4. Concluding thoughts
4.1. Incorporation of organizational learning approaches to M&E of CD
4.2. Large-scale experimentation and action research
4.3. Use of theory of change (ToC) approaches for designing M&E of CD systems
4.3.1. What can a theory of change offer?
4.3.2. Hypothetical example of ToC use in EFA
4.4. Conclusion
5. Acronyms
6. Bibliography

Guidance on using the revised Logical Framework (DFID 2009)

Produced by the Value for Money Department, FCPD, February 2009.

>>Full text here<<

“The principal changes to the logframe from the earlier (2008) 4×4 matrix are:
•  The Objectively Verifiable Indicator (OVI) box has been separated into its component elements (Indicator, Baseline and Target), and Milestones added;
•  Means of Verification has been separated into ‘Source’;
•  Inputs are now quantified in terms of funds (expressed in Sterling for DFID and all partners) and in use of DFID staff time (expressed as Full-Time Equivalents (FTEs));
•  A Share box now indicates the financial value of DFID’s Inputs as a percentage of the whole;
•  Assumptions are shown for Goal and Purpose only;
•  Risks are shown at Output and Activities level only;
•  At the Output level, the Impact Weighting is now shown in the logframe together with a Risk Rating for individual Outputs;
•  Activities are now shown separately (so do not normally appear in the logframe sent for approval), although they can be added to the logframe template if this is more suitable for your purposes;
•  Renewed emphasis on the use of disaggregated beneficiary data within indicators, baselines and targets.”
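
As an illustration only – a hypothetical sketch, not DFID’s template or any DFID tool – the revised structure described above could be modelled as data roughly as follows (all class and field names are invented for the example):

```python
# Hypothetical Python sketch of the revised DFID logframe structure (2009).
# Names are illustrative only; they mirror the elements listed in the guidance.
from dataclasses import dataclass
from typing import List

@dataclass
class Indicator:
    description: str        # the indicator itself (formerly part of the OVI box)
    baseline: str
    milestones: List[str]   # new in the revised format
    target: str
    source: str             # replaces 'Means of Verification'

@dataclass
class Output:
    statement: str
    indicators: List[Indicator]
    impact_weighting_percent: float  # Impact Weighting shown at Output level
    risk_rating: str                 # e.g. "Low", "Medium", "High"
    risks: List[str]                 # risks appear at Output/Activities level only

@dataclass
class Logframe:
    goal: str
    goal_assumptions: List[str]      # assumptions shown for Goal and Purpose only
    purpose: str
    purpose_assumptions: List[str]
    outputs: List[Output]
    inputs_funds_gbp: float          # funds in Sterling, for DFID and all partners
    inputs_dfid_staff_fte: float     # DFID staff time as Full-Time Equivalents
    dfid_share_percent: float        # the Share box: DFID Inputs as % of the whole
```

Consistent with the note above, Activities sit outside this approval-level structure, although they could be added to the template if that suits your purposes.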

BOND Quality Group – Debate on logframes

Date: 2–5.30pm, 11 June 2009
Venue: NCVO offices, N1 9RL, London

For more information contact: Alex Jacobs <alex@keystoneaccountability.org>

Motion: this meeting believes that the logframe is the right tool for managing most NGO work

Logframes (Logical Framework Analysis) are very widely used in NGOs. But they split opinion sharply throughout the sector: some people love them, some hate them.

To their supporters, logframes provide a simple, short way of summarising a project’s aims and activities. They force staff to map out the intermediary steps that link activities and overall goals. They can be applied at any level, from an entire organisation to one specific project. They help managers and donors alike by providing a guide to action and a set of indicators to monitor progress, which can be conveniently communicated to other people. Many different approaches can be used to create logframes, including participatory methods.

To their detractors, logframes force staff to think in an inappropriate way. They assume that complex social systems can be predicted in advance and that social problems can be reduced to a single problem statement. They do not take account of different people’s views and priorities (e.g. within communities), and they are based on an inappropriate linear logic (if A happens, then B will happen, then C). In practice, they are inflexible, creating a strait-jacket for relationships with partners and communities, which undermines outsiders’ ability to respond effectively to changing realities on the ground. They create bureaucratic paperwork, and are most useful for donors and senior managers.

What are the arguments and evidence for each side of the debate? Come along, listen to some expert opinion, debate the issues with your peers.

Speakers:

  • Proposing: Peter Kerby (DFID) & Claire Thomas (Minority Rights International)
  • Opposing: Robert Chambers (IDS) & Rick Davies (independent)

Presentations made by:

Voting Results (before and after debate)

Table 1: Votes before the debate

By gender    For        Against    Abstain   Total
Women        9 (38%)    14 (58%)   1 (4%)    24
Men          3 (33%)    5 (56%)    1 (11%)   9
Total        12 (36%)   19 (58%)   2 (6%)    33

By org size  For        Against    Abstain   Total
Large org    6 (60%)    4 (40%)    0         10
Small org    1 (7%)     13 (93%)   0         14
Total        7 (29%)    17 (71%)   0         24

Table 2: Votes after the debate

By gender    For        Against    Abstain   Total
Women        6 (30%)    13 (65%)   1 (5%)    20
Men          2 (29%)    4 (57%)    1 (14%)   7
Total        8 (30%)    17 (63%)   2 (7%)    27

By org size  For        Against    Abstain   Total
Large org    2 (29%)    5 (71%)    0         7
Small org    2 (15%)    11 (85%)   0         13
Total        4 (20%)    16 (80%)   0         20
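
For anyone who wants to check the arithmetic, the short sketch below (not part of the BOND materials; written here purely for illustration) recomputes the percentages from the raw vote counts. Rounding to whole percentages reproduces the figures in Table 1, and the same function applies to the other rows.

```python
# Recompute vote shares from the raw counts in Table 1 (before the debate, by gender).

def shares(votes):
    """Return (For, Against, Abstain) as whole-number percentages, plus the total."""
    total = sum(votes)
    return [round(100 * v / total) for v in votes], total

before_by_gender = {
    "Women": (9, 14, 1),   # For, Against, Abstain
    "Men":   (3, 5, 1),
    "Total": (12, 19, 2),
}

for group, votes in before_by_gender.items():
    pct, total = shares(votes)
    print(f"{group}: {pct[0]}% for, {pct[1]}% against, {pct[2]}% abstain (n={total})")

# Expected output:
# Women: 38% for, 58% against, 4% abstain (n=24)
# Men: 33% for, 56% against, 11% abstain (n=9)
# Total: 36% for, 58% against, 6% abstain (n=33)
```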

See also the summary of the BOND logframe debate, available at the BOND website

Beyond Success Stories: Monitoring & Evaluation for Foreign Assistance Results

Evaluator views of current practice and recommendations for change

This paper was produced independently by Richard Blue, Cynthia Clapp-Wincek, and Holly Benner, May 2009.

See Full-Report or Policy_Brief (draft for comment)

Findings derive from a literature review, interviews with senior USG officials and, primarily, interviews with and survey responses from ‘external evaluators’: individuals who conduct evaluations of U.S. foreign assistance programs, either as part of consulting firms or non-governmental organizations (NGOs), or as individual consultants. External evaluators were chosen because: 1) the authors are external evaluators themselves, with prior USAID and State experience; 2) in recent years, the majority of evaluations of USG foreign assistance programs have been contracted out to external evaluation experts; and 3) evaluators are hired to investigate whether foreign assistance efforts worked, or didn’t work, and to ask why results were, or were not, achieved. This gives them a unique perspective.

Key Findings – Monitoring

The role of monitoring is to determine the extent to which the expected outputs or outcomes of a program or activity are being achieved. When done well, monitoring can be invaluable in helping project implementers and managers make mid-course corrections to maximize project impact. While monitoring requirements and practice vary across U.S. agencies and departments, the following broad themes emerged from our research:

•  The role of monitoring in the USG foreign assistance community has changed dramatically in the last 15 years: USG staff now primarily monitor contractors and grantees. Because this distances USG staff from the implementation of programs, it has resulted in a loss of dialogue, debate and learning within agencies.

•  The myriad of foreign assistance objectives requires a multiplicity of indicators. This has led to onerous reporting requirements that try to cover all bases.

•  There is an over-reliance on quantitative indicators and outputs of deliverables over which the project implementers have control (such as the number of people trained), rather than qualitative indicators and outcomes, such as expected changes in attitudes, knowledge and behaviors.

•  There is no standard guidance for monitoring foreign assistance programs – the requirements at MCC are very different from those at DOS and USAID. Some implementing agencies and offices have no guidance or standard procedures.

Key Findings – Evaluation

There is also great diversity in the evaluation policies and practices across USG agencies administering foreign assistance. MCC has designed a very robust impact evaluation system for its country compacts, but these evaluations have yet to be completed. The Education and Cultural Affairs Bureau at the State Department has well-respected evaluation efforts, but there is limited evaluation work in other bureaus and offices in the Department. USAID has a long and rich evaluation history, but neglect and lack of investment, as well as recent foreign assistance reform efforts, have stymied those functions. The following themes emerged in our study:

The decision to evaluate: when, why and funding:

•  The requirements governing the decision to evaluate vary across U.S. agencies. There is no policy or systematic guidance for what should be evaluated and why. More than three-quarters of survey respondents emphasized the need to make evaluation a required and routine part of the foreign assistance programming cycle.

•  Evaluators rarely have the benefit of good baseline data for U.S. foreign assistance projects, which makes it difficult to conduct rigorous outcome and impact evaluations that can attribute changes to the project’s investments.

•  While agencies require monitoring and evaluation plans as part of grantee contracts, insufficient funds are set aside for M&E, as partners are pressured to spend limited money on “non-programmatic” costs.

Executing an evaluation:

•  Scopes of work for evaluations often reflect a mismatch between the evaluation questions that must be answered and the methodology, budget and timeframe given for the evaluation.

•  Because of limited budgets and time, the majority of respondents felt that evaluations were not sufficiently rigorous to provide credible evidence of impact or sustainability.

Impact and utilization of evaluation:

•  Training on M&E is limited across USG agencies.  Program planning, monitoring and evaluation are not included in standard training for State Department Foreign Service Officers or senior managers, a particular challenge when FSOs and Ambassadors become the in-country decision makers on foreign assistance programs.

•  Evaluations do not contribute to agency-wide or interagency knowledge. If “learning” takes place, it is largely confined to the immediate operational unit that commissioned the evaluation rather than contributing to a larger body of knowledge on effective policies and programs.

•  Two thirds of external evaluators polled agreed or strongly agreed that USAID cares more about success stories than careful evaluation.

•  Bureaucratic incentives do not support rigorous evaluation or the use of findings – with the possible exception of MCC, which supports evaluation but does not yet have a track record on the use of findings.

•  Evaluation reports are often too long or technical to be accessible to policymakers and agency leaders with limited time.

Create a Center for Monitoring and Evaluation

“A more robust M&E and learning culture for foreign assistance results will not occur without the commitment of USG interagency leadership and authoritative guidance.  Whether or not calls to consolidate agencies and offices disbursing foreign assistance are heeded, the most efficient and effective way to accomplish this learning transformation would be to establish an independent Center for Monitoring and Evaluation (CME), reporting to the Office of the Secretary of State or the Deputy Secretary of State for Management and Resources.  The Center would be placed within the Secretary or Deputy Secretary’s Office to ensure M&E efforts become a central feature of foreign assistance decision-making…”

See the remaining text in the Policy_Brief



A brief summary of and links to the IDEAS Global Assembly held in Johannesburg, March 2009

provided by Denis Jobin (IDEAS VP 2006-2009), in a posting on the MandE NEWS email list…

Getting to Results: Evaluation Capacity Building and Development
The IDEAS Global Assembly, Birchwood Hotel, Johannesburg, South Africa, March 17-20, 2009.

The International Development Evaluation Association’s Global Assembly focused on the issues involved in evaluation capacity building, and how such efforts can strengthen the evidence available to countries to inform their own development. Capacity building has been recognized for a decade or more as crucial to development. The measurement (and management) issues embedded in generating and disseminating evaluative information are now understood to be critical to informing decision making. The Global Assembly explored these topics with the objective of clarifying present knowledge on evaluation capacity building, learning lessons from development evaluation experience, and understanding the challenges faced by development evaluators in taking these efforts forward.

The theme of the global assembly underscores the role that evaluative knowledge can play in development in general, and the importance of building and sustaining the capacity to bring evaluative knowledge into the decision making process so as to enhance the achievement of results.

The papers presented at the global assembly may be grouped according to several “themes”. We have provided below links to those papers that we have been able to make available.

Building Evaluation Capacity in Response to the Paris Declaration and the Accra Agenda for Action

This strand focuses on the commitments made in the Paris Declaration and the Accra Agenda for Action to strengthen monitoring and evaluation systems in order to track development performance. Documents

Institutional capacity building

When the focus is on institutional capacity building, issues of supply versus demand in the public, private, and NGO/CSO sectors of society are immediately apparent. The role of evaluation associations, standards for evaluation performance, the role of credentials, national evaluation policies, and incentives for quality evaluations are all significant issues. Documents

Regional Responses/Regional Strategies for building Evaluation Capacity

This topic examines regional efforts and strategies being deployed to strengthen evaluation capacity. Documents

Country / Sector Specific Responses for Building Evaluation Capacity

Case studies from a range of countries and organizations provide insight into different ways of building capacity. Documents

Evaluation Capacity Building — Tools, Techniques, and Strategies

Capacity building involves multiple tools, techniques, and strategies. This topic examines the success (or not) of these different components of capacity building.  Documents

The Measurement and Assessment of Evaluation Capacity Building

This topic examines the experiences and the efforts made to actually evaluate capacity building. The assessments describe and analyze the methodological choices and their implications. Qualitative, quantitative methods and mixed methods are analysed, and any unintended consequences are examined.  Documents

Country-Led Evaluation

A series of case studies analyse and evaluate Monitoring and Evaluation efforts in a range of countries. Documents

Metaevaluation revisited, by Michael Scriven

An Editorial in Journal of MultiDisciplinary Evaluation, Volume 5, Number 11, January 2009

In this short and readable paper Michael Scriven addresses “three categories of issues that arise about meta-evaluation: (i) exactly what is it; (ii) how is it justified; (iii) when and how should it be used? In the following, I say something about all three—definition, justification, and application.” He then makes seven main points, each of which he elaborates on in some detail:

  1. Meta-evaluation is the consultant’s version of peer review.
  2. Meta-evaluation is the proof that evaluators believe what they say.
  3. In meta-evaluation, as in all evaluation, check the pulse before trimming the nails.
  4. A partial meta-evaluation is better than none.
  5. Make the most of meta-evaluation.
  6. Any systematic approach to evaluation—in other words, almost any kind of professional evaluation—automatically provides a systematic basis for meta-evaluation.
  7. Fundamentally, meta-evaluation, like evaluation, is simply an extension of common sense—and that’s the first defense to use against the suggestion that it’s some kind of fancy academic embellishment.