DFID Draft Structural Reform Plan July 2010

Available on the DFID website and as a PDF.

“Structural Reform Plans are the key tool of the Coalition Government for making departments accountable for the implementation of the reforms set out in the Coalition Agreement. They replace the old, top-down systems of targets and central micromanagement.

The reforms set out in each department’s SRP are designed to turn government on its head, taking power away from Whitehall and putting it into the hands of people and communities. Once these reforms are in place, people themselves will have the power to improve our country and our public services, through the mechanisms of local democratic accountability, competition, choice, and social action.

The reform plans set out in this document are consistent with and form part of the Department’s contribution to the Spending Review. All departmental spending is subject to the Spending Review.

We have adopted a cautious view of the timescales for delivering all legislative measures due to the unpredictability of pressures on Parliamentary time.”

EVALUATING DEVELOPMENT CO-OPERATION: SUMMARY OF KEY NORMS AND STANDARDS. SECOND EDITION

OECD DAC Network on Development Evaluation, February 2010. Download a PDF copy

“The DAC Network on Development Evaluation is a unique international forum that brings together evaluation managers and specialists from development co-operation agencies in OECD member countries and multilateral development institutions. Its goal is to increase the effectiveness of international development programmes by supporting robust, informed and independent evaluation.

A key component of the Network’s mission is to develop internationally agreed norms and standards to strengthen evaluation policy and practice. Shared standards contribute to harmonised approaches in line with the commitments of the Paris Declaration on Aid Effectiveness. The body of norms and standards is based on experience, and evolves over time to fit the changing aid environment. These principles serve as an international reference point, guiding efforts to improve development results through high quality evaluation.

The norms and standards summarised here should be applied discerningly and adapted carefully to fit the purpose, object and context of each evaluation. This summary document is not an exhaustive evaluation manual. Readers are encouraged to refer to the complete texts available on the DAC Network on Development Evaluation’s website: www.oecd.org/dac/evaluationnetwork. Several of the texts are also available in other languages.”

DEVELOPMENT EVALUATION RESOURCES AND SYSTEMS – A STUDY OF NETWORK MEMBERS

The DAC Network on Development Evaluation, OECD, 2010. Download a PDF copy

“Introduction

In June 2009, the Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC) Network on Development Evaluation agreed to undertake a study of its members’ evaluation systems and resources. The study aims to take stock of how the evaluation function is managed and resourced in development agencies and to identify major trends and current challenges in development evaluation. The purpose is to inform efforts to strengthen evaluation systems in order to contribute to improved accountability and better development results. It will be of interest to DAC members and evaluation experts, as well as to development actors in emerging donor and partner countries.

To capture a broad view of how evaluation works in development agencies, core elements of the evaluation function are covered, including: the mandate for central evaluation units, the institutional position of evaluation, evaluation funding and human resources, independence of the evaluation process, quality assurance mechanisms, co-ordination with other donors and partner countries, systems to facilitate the use of evaluation findings and support to partner country capacity development.

This report covers the member agencies of the OECD DAC Network on Development Evaluation. See Box 1 for a full list of member agencies and abbreviations. Covering all major bilateral providers of development assistance and seven important multilateral development banks, the present analysis therefore provides a comprehensive view of current policy and practice in the evaluation of development assistance.

The study is split into two sections: section I contains an analysis of overall trends and general practices, drawing on past work of the DAC and its normative work on development evaluation. Section II provides an individual factual profile for each member agency, highlighting its institutional set-up and resources.”

“Full transparency and new independent watchdog will give UK taxpayers value for money in aid”

Copied from the DFID website, 3rd June 2010:

[Please post your comments below and/or on the Guardian Katine website]

“British taxpayers will see exactly how and where overseas aid money is being spent and a new independent watchdog will help ensure this aid is good value for money, International Development Secretary Andrew Mitchell has announced.

In his first major speech as Development Secretary, Mr Mitchell said he had taken the key steps towards creating an independent aid watchdog to ensure value for money. He also announced a new UKaid Transparency Guarantee to ensure that full information on all DFID’s spending is published on the departmental website.

The information will also be made available to the people who benefit from aid funding: communities and families living in the world’s poorest countries.

These moves come as part of a wider drive to refocus DFID’s work so British taxpayers’ money is spent transparently and on key priority issues such as maternal mortality and disease prevention.”

In Mr Mitchell’s speech, delivered at the Royal Society with Oxfam and Policy Exchange, he argued that overseas aid is both morally right and in Britain’s national interest, but that taxpayers need to see more evidence that their money is being spent well.

NZAID 2008 Evaluations and Reviews: Annual Report on Quality, 2009

Prepared by Miranda Cahn, Evaluation Advisor, Strategy, Advisory and Evaluation Group, NZAID, Wellington, August 2009. Available online

Executive Summary

Introduction

The New Zealand Agency for International Development (NZAID) is committed to improving evaluative activity, including evaluations and reviews. Since 2005 NZAID has undertaken annual desk studies of the evaluations and reviews completed by NZAID during the previous calendar year. This 2009 study assesses the quality of 29 NZAID-commissioned evaluations and reviews that were submitted to the NZAID Evaluation and Review Committee (ERC) during 2008, and their associated Terms of Reference (TOR). The study identifies areas where quality is of a high standard, and areas where improvement is needed. Recommendations are made on how improvements to NZAID-commissioned evaluations and reviews could be facilitated.

The objectives of the study are to:

• assess the quality of the TOR with reference to the NZAID Guidelines on Developing TOR for Reviews and Evaluations

• assess the quality of the NZAID 2008 evaluations and reviews with reference to the NZAID Evaluation Policy, relevant NZAID Guidelines and the Development Assistance Committee of the Organisation for Economic Co-operation and Development (DAC) Evaluation Quality Standards

• identify, describe and discuss key quality aspects of the TOR and evaluation and review reports that were of a high standard and those that should be improved in future.

Assessing aid impact: a review of Norwegian evaluation practice

Authors: Espen Villanger and Alf Morten Jerve
Published in: Journal of Development Effectiveness, Volume 1, Issue 2

June 2009, pages 171–194

Warning: unfortunately, you have to pay to access the full text of this article.

Abstract

This article reviews recent Norwegian aid evaluations with a mandate to study impact, and assesses how the evaluators establish causal effects. The analytical challenges encountered in the seven studies reviewed are: (1) the Terms of Reference ask for evidence of impact where this is not possible to identify, (2) the distinction between impacts of the aid element versus other components is often blurred, and (3) the methodological approaches to identify impact are either poorly developed or applied superficially. A main conclusion is that most of the evaluators did not have the necessary time or budget to conduct a proper impact evaluation given the large number of questions raised by the commissioning agency.


NAO Review – DFID: Progress in improving performance management

Publication date: 12 May 2009. Full report (PDF – 366KB)

Executive Summary

1.  This brief review of the Department for International Development’s (DFID) performance management arrangements during 2008 is a follow-up to our 2002 VFM report on the same topic. It responds to a request from DFID’s Accounting Officer to re-visit the topic periodically, which the C&AG agreed would be valuable. It is based on a desk review of main documents, interviews with DFID staff, and a survey of staff and stakeholder views about evaluation (Appendix 3). We did not undertake a full audit of DFID systems, some of which are in the process of being revised, and we concentrated on those areas of DFID activity most directly related to its performance targets. We drew on recent DFID reviews of monitoring and evaluation, and our findings square well with the results of those reviews.

Guidance on using the revised Logical Framework (DFID 2009)

Produced by the Value for Money Department, FCPD, February 2009.

>>Full text here<<

“The principal changes to the logframe from the earlier (2008) 4×4 matrix are:
• The Objectively Verifiable Indicator (OVI) box has been separated into its component elements (Indicator, Baseline and Target), and Milestones added.
• Means of Verification has been separated into ‘Source’.
• Inputs are now quantified in terms of funds (expressed in Sterling for DFID and all partners) and in use of DFID staff time (expressed as Full-Time Equivalents (FTEs)).
• A Share box now indicates the financial value of DFID’s Inputs as a percentage of the whole.
• Assumptions are shown for Goal and Purpose only.
• Risks are shown at Output and Activities level only.
• At the Output level, the Impact Weighting is now shown in the logframe together with a Risk Rating for individual Outputs.
• Activities are now shown separately (so do not normally appear in the logframe sent for approval), although they can be added to the logframe template if this is more suitable for your purposes.
• Renewed emphasis on the use of disaggregated beneficiary data within indicators, baselines and targets.”
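
Purely as an aid to reading, the bullet points above can be pictured as a simple data structure. The sketch below is illustrative only: the class and field names are assumptions drawn from the bullets, not DFID’s own template or terminology, and Activities (which do not normally appear in the logframe sent for approval) are omitted.

```python
# Illustrative sketch of the revised DFID logframe structure described above.
# Class and field names are assumptions, not DFID's official terminology.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    description: str
    baseline: str
    milestones: List[str]   # interim values between baseline and target
    target: str
    source: str             # corresponds to the former Means of Verification

@dataclass
class Output:
    statement: str
    indicators: List[Indicator]
    impact_weighting_percent: float                  # Impact Weighting shown per Output
    risk_rating: str                                 # Risk Rating for the individual Output
    risks: List[str] = field(default_factory=list)   # risks shown at Output level

@dataclass
class Logframe:
    goal: str
    goal_assumptions: List[str]        # Assumptions shown for Goal...
    purpose: str
    purpose_assumptions: List[str]     # ...and Purpose only
    outputs: List[Output]
    inputs_funds_sterling: float       # funds for DFID and all partners, in Sterling
    inputs_dfid_staff_fte: float       # DFID staff time as Full-Time Equivalents
    dfid_share_percent: float          # Share box: DFID's Inputs as % of the whole
```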

BEYOND SUCCESS STORIES: MONITORING & EVALUATION FOR FOREIGN ASSISTANCE RESULTS

EVALUATOR VIEWS OF CURRENT PRACTICE AND RECOMMENDATIONS FOR CHANGE.

This paper was produced independently by Richard Blue, Cynthia Clapp-Wincek, and Holly Benner, May 2009.

See Full-Report or Policy_Brief (draft for comment)

“Findings derive from a literature review, interviews with senior USG officials and, primarily, interviews with and survey responses from ‘external evaluators’ – individuals who conduct evaluations of U.S. foreign assistance programs, either as part of consulting firms or non-governmental organizations (NGOs), or as individual consultants. External evaluators were chosen because: 1) the authors are external evaluators themselves with prior USAID and State experience; 2) in recent years, the majority of evaluations of USG foreign assistance programs have been contracted out to external evaluation experts; and 3) evaluators are hired to investigate whether foreign assistance efforts worked, or didn’t work, and to ask why results were, or were not, achieved. This gives them a unique perspective.”

Key Findings – Monitoring

The role of monitoring is to determine the extent to which the expected outputs or outcomes of a program or activity are being achieved. When done well, monitoring can be invaluable in helping project implementers and managers make mid-course corrections to maximize project impact. While monitoring requirements and practice vary across U.S. agencies and departments, the following broad themes emerged from our research:

•  The role of monitoring in the USG foreign assistance community has changed dramatically in the last 15 years.  The role of USG staff has shifted to primarily monitoring contractors and grantees.  Because this distances USG staff from implementation of programs, it has resulted in the loss of dialogue, debate and learning within agencies.

•  The myriad of foreign assistance objectives requires a multiplicity of indicators. This has led to onerous reporting requirements that try to cover all bases.

•  There is an over-reliance on quantitative indicators and outputs of deliverables over which the project implementers have control (such as the number of people trained), rather than qualitative indicators and outcomes: the expected changes in attitudes, knowledge, and behaviors.

•  There is no standard guidance for monitoring foreign assistance programs—the requirements at MCC are very different from those at DOS and USAID. Some implementing agencies and offices have no guidance or standard procedures.

Key Findings – Evaluation

There is also great diversity in the evaluation policies and practices across USG agencies administering foreign assistance. MCC has designed a very robust impact evaluation system for its country compacts, but these evaluations have yet to be completed. The Education and Cultural Affairs Bureau at the State Department has well-respected evaluation efforts, but there is limited evaluation work in other bureaus and offices in the Department. USAID has a long and rich evaluation history, but neglect and lack of investment, as well as recent foreign assistance reform efforts, have stymied those functions. The following themes emerged in our study:

The decision to evaluate – when, why and funding:

•  The requirements on the decision to evaluate vary across U.S. agencies. There is no policy or systematic guidance for what should be evaluated and why. More than three-quarters of survey respondents emphasized the need to make evaluation a required and routine part of the foreign assistance programming cycle.

•  Evaluators rarely have the benefit of good baseline data for U.S. foreign assistance projects, which makes it difficult to conduct rigorous outcome and impact evaluations that can attribute changes to the project’s investments.

•  While agencies require monitoring and evaluation plans as part of grantee contracts, insufficient funds are set aside for M&E, as partners face pressure to minimize spending of limited funds on “non-programmatic” costs.

Executing an evaluation:

•  Scopes of work for evaluations often reflect a mismatch between the evaluation questions that must be answered and the methodology, budget and timeframe given for an evaluation.

•  Because of limited budget and time, the majority of respondents felt  that evaluations were not sufficiently rigorous to provide credible evidence for impact or sustainability.

Impact and utilization of evaluation:

•  Training on M&E is limited across USG agencies.  Program planning, monitoring and evaluation are not included in standard training for State Department Foreign Service Officers or senior managers, a particular challenge when FSOs and Ambassadors become the in-country decision makers on foreign assistance programs.

•  Evaluations do not contribute to agency-wide or interagency knowledge. If “learning” takes place, it is largely confined to the immediate operational unit that commissioned the evaluation rather than contributing to a larger body of knowledge on effective policies and programs.

•  Two thirds of external evaluators polled agreed or strongly agreed that USAID cares more about success stories than careful evaluation.

•  Bureaucratic incentives do not support rigorous evaluation or use of findings – with the possible exception of MCC, which supports evaluation but does not yet have a track record on use of findings.

•  Evaluation reports are often too long or technical to be accessible to policymakers and agency leaders with limited time.

Create a Center for Monitoring and Evaluation

A more robust M&E and learning culture for foreign assistance results will not occur without the commitment of USG interagency leadership and authoritative guidance.  Whether or not calls to consolidate agencies and offices disbursing foreign assistance are heeded, the most efficient and effective way to accomplish this learning transformation would be to establish an independent Center for Monitoring and Evaluation (CME), reporting to the Office of the Secretary of State or the Deputy Secretary of State for Management and Resources.  The Center would be placed within the Secretary or Deputy Secretary’s Office to ensure M&E efforts become a central feature of foreign assistance decision-making…”

See the remaining text in the Policy_Brief



Training in Evaluation of Humanitarian Action

Date: 21st-24th June 2009
Venue: Belgium

Channel Research and the Active Learning Network for Accountability and Performance (ALNAP) are inviting participants for Training in Evaluation of Humanitarian Action, Belgium, 21st-24th June 2009 (actual training dates 22nd-24th June 2009).

This is an introductory-to-intermediate level course with the overall aim of assisting participants to design monitoring systems and to commission, manage, carry out and use small-scale evaluations in humanitarian action. The 3-day training course will use the OECD-DAC evaluation criteria, but also introduces new evaluation material, specifically on joint evaluations and innovative learning processes as part of an evaluation process.

