“The DAC Network on Development Evaluation is a unique international forum that brings together evaluation managers and specialists from development co-operation agencies in OECD member countries and multilateral development institutions. Its goal is to increase the effectiveness of international development programmes by supporting robust, informed and independent evaluation.

A key component of the Network’s mission is to develop internationally agreed norms and standards to strengthen evaluation policy and practice. Shared standards contribute to harmonised approaches in line with the commitments of the Paris Declaration on Aid Effectiveness. The body of norms and standards is based on experience, and evolves over time to fit the changing aid environment. These principles serve as an international reference point, guiding efforts to improve development results through high quality evaluation.

The norms and standards summarised here should be applied discerningly and adapted carefully to fit the purpose, object and context of each evaluation. This summary document is not an exhaustive evaluation manual. Readers are encouraged to refer to the complete texts available on the DAC Network on Development Evaluation’s website: www.oecd.org/dac/evaluationnetwork. Several of the texts are also available in other languages.”


The DAC Network on Development Evaluation, OECD, 2010. Download a pdf copy


In June 2009, the Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC) Network on Development Evaluation agreed to undertake a study of its members’ evaluation systems and resources. The study aims to take stock of how the evaluation function is managed and resourced in development agencies and to identify major trends and current challenges in development evaluation. The purpose is to inform efforts to strengthen evaluation systems in order to contribute to improved accountability and better development results. It will be of interest to DAC members and evaluation experts, as well as to development actors in emerging donor and partner countries.

To capture a broad view of how evaluation works in development agencies, core elements of the evaluation function are covered, including: the mandate for central evaluation units, the institutional position of evaluation, evaluation funding and human resources, independence of the evaluation process, quality assurance mechanisms, co-ordination with other donors and partner countries, systems to facilitate the use of evaluation findings and support to partner country capacity development.

This report covers the member agencies of the OECD DAC Network on Development Evaluation. See Box 1 for a full list of member agencies and abbreviations. Because it covers all major bilateral providers of development assistance and seven important multilateral development banks, the analysis provides a comprehensive view of current policy and practice in the evaluation of development assistance.

The study is split into two sections: Section I contains an analysis of overall trends and general practices, drawing on past work of the DAC and its normative work on development evaluation. Section II provides an individual factual profile for each member agency, highlighting its institutional set-up and resources.

Measuring Empowerment? Ask Them

Quantifying qualitative outcomes from people’s own analysis: insights for results-based management from the experience of a social movement in Bangladesh. Dee Jupp and Sohel Ibn Ali, with a contribution from Carlos Barahona. 2010: Sida Studies in Evaluation. Download pdf


Participation has been widely taken up as an essential element of development, but participation for what purpose? Many feel that its acceptance, which has extended to even the most conventional of institutions such as the international development banks, has resulted in it losing its teeth in terms of the original ideology of being able to empower those living in poverty and to challenge power relations.

The more recent emergence of the rights-based approach discourse has the potential to restore the ‘bite’ to participation and to re-politicise development. Enshrined in universal declarations and conventions, it offers a palatable route to accommodating radicalism and creating conditions for emancipatory and transformational change, particularly for people living in poverty. But an internet search on how to measure the impact of these approaches yields a disappointing harvest of experience. There is a proliferation of debate on the origins and processes, the motivations and pitfalls of rights-based programming but little on how to know when or if it works. The discourse is messy and confusing and leads many to hold up their hands in despair and declare that outcomes are intangible, contextual, individual, behavioural, relational and fundamentally un-quantifiable!

As a consequence, results-based management pundits are resorting to substantive measurement of products, services and goods to demonstrate outputs, and relying on perception studies to measure outcomes.

However, there is another way. Quantitative analyses of qualitative assessments of outcomes and impacts can be undertaken with relative ease and at low cost. It is possible to measure what many regard as unmeasurable.

This publication suggests that steps in the process of attainment of rights and the process of empowerment are easy to identify and measure for those active in the struggle to achieve them. It is our etic perspectives that make the whole thing difficult. When we apply normative frames of reference, we inevitably impose our values and our notions of democracy and citizen engagement rather than embracing people’s own context-based experience of empowerment.

This paper presents the experience of one social movement in Bangladesh, which managed to find a way to measure empowerment by letting the members themselves explain what benefits they acquired from the Movement and by developing a means to measure change over time. These measures, which are primarily of use to the members, have then been subjected to numerical analysis outside of the village environment to provide convincing quantitative data, which satisfies the demands of results-based management.

The paper is aimed primarily at those who are excited by the possibilities of rights-based approaches but who are concerned about proving that their investment results in measurable and attributable change. The experience described here should build confidence that transparency, rigour and reliability can be assured in community-led approaches to monitoring and evaluation without distorting the original purpose, which is a system of reflection for the community members themselves. Hopefully, the reader will feel empowered to challenge the sceptics.

Dee Jupp and Sohel Ibn Ali
Continue reading “Measuring Empowerment? Ask Them”

Next Generation Network Evaluation

Paper published June 2010. Produced by Innovations for Scaling Impact and Keystone Accountability. Funded by the International Development Research Center and the Packard Foundation. (Download pdf version here)

“Purpose: This paper reviews the current field of network monitoring and evaluation with the goal of identifying where progress has been made and where further work is still needed. It proposes a framework for network impacts planning, assessment, reporting and learning that can help to close some of the current gaps in network evaluation while building on the advances that have been made. This document is written for practitioners undertaking network evaluation and foundation program staff working to support networks.” Continue reading “Next Generation Network Evaluation”

Beyond Logframe: Using Systems Concepts in Evaluation

March 2010. Nobuko Fujita (Ed.), Foundation for Advanced Studies on International Development (FASID). Available as pdf

“Editor’s Note: The 2010 Issues and Prospects of Evaluations for International Development employs systems concepts as clues to re-assess the conventional ways of conducting evaluations and to explore how development evaluation can potentially be made more useful.

In Japan, development evaluation relies predominantly on the Logical Framework (logframe), and logframe-based evaluations often face difficulties. One such difficulty arises from the futile attempt to build an evaluation framework on a logframe that, in many cases, was prepared as part of the early-stage planning of the project and does not necessarily reflect the project’s real situation at the time of evaluation. Although a logframe can be useful initially as a tentative project plan, logframes are rarely revised even when the situation has changed. By the end of the project, the original logframe may no longer be an accurate embodiment of what the project is about, and therefore logframes are of little help in terminal or ex-post evaluations.

Still, having been institutionalized by clients, logframe-based evaluations are common practice and in extreme cases, evaluators face the danger of evaluating the logframe instead of the actual project. Although widely used for its simplicity, logframes can end up becoming a cumbersome tool, or even a hindrance to evaluation.

Various attempts have been made to overcome the limitations of the logframe, and some aid organizations such as USAID, UNDP, CIDA and the World Bank have shifted from the logframe to Results-Based Management (RBM). GTZ is now in the process of shifting to a new project management approach based on RBM and systems ideas.

In the first article, “Beyond Logframe: Critique, Variations and Alternatives,” Richard Hummelbrunner, an evaluator/consultant from Austria, sums up the critique of the logframe and the Logical Framework Approach (LFA), and explores some variations employed to overcome specific shortcomings of LFA. He then outlines a systemic alternative to the logframe and introduces the new GTZ management model for sustainable development called “Capacity WORKS.” Richard has dealt with LFA and possible alternatives to it at various points in his career, and he is currently involved in GTZ’s rollout of Capacity WORKS as it becomes the standard management model for all BMZ projects and programmes.

What does he mean by “systemic alternative”? In the second article, “Systems Thinking and Capacity Development in the International Arena,” Bob Williams, a consultant and an expert in systems concepts, explains what “thinking systemically” is about and how it might help evaluation. He boils down systems ideas into three core concepts (inter-relationships, perspectives, and boundaries), and relates these concepts to various systems methods.

In December 2009, FASID offered a training course and a seminar on this topic in Tokyo. Through the exchange of numerous e-mails with the instructors prior to the seminar, it occurred to me that the concepts might be more easily understood if presented as a conversation. That is what we tried to do in the third article, “Using Systems Concepts in Evaluation – A Dialogue with Patricia Rogers and Bob Williams.” These two instructors of the FASID training course and workshop explain in simple conversational style where and how we can start applying systems concepts in development evaluation.

This issue also carries a report of two collaborative evaluations of Japanese Official Development Assistance (ODA) projects. The first case presents an innovative joint evaluation conducted collaboratively with Vietnamese stakeholders. The evaluation took place in 2009–2010 as the last year of a three-year evaluation capacity development project coordinated by the Japan International Cooperation Agency. The second case covers a joint evaluation study of another Japanese ODA project in Lao PDR with a local Lao administration, for which neither the logframe nor the OECD DAC five criteria were used. Instead, an evaluation framework was developed from scratch, based entirely on the beneficiaries’ interests and perspectives. In both cases, a partner country’s participation in the evaluation necessitated considerable changes in perspectives of evaluation practice. I hope they provide examples of how boundaries and perspectives, as discussed theoretically in the first three articles, relate to development evaluation in practice.”

The Global Evaluation Conclave: Making Evaluation Matter

Date: 25-28 October 2010
Venue: The Lalit Hotel, New Delhi, India

“Making Evaluation Matter” is the theme of the conclave. The theme embodies an idea of evaluation that starts with relevance and context, with a strong understanding of who evaluation should serve.

The event will attract global thinkers engaged in cutting edge evaluation research, theorizing, or practice who seek opportunities to push their thinking in new directions and are interested in applying ideas in a South Asian context. It will include leading development theorists, activists and policy makers from South Asia to embed discussions in current development issues and contexts that evaluation must respond to.

The event will be a space to engage with, and test knowledge. The conclave will include around 200 leaders from the global development and evaluation community.

50+ speakers, 24+ hours of knowledge building and networking, 4 power-packed days, 1 great venue, countless new evaluation opportunities.

Programme schedule | Read workshop description
Click here to download registration form

“Full transparency and new independent watchdog will give UK taxpayers value for money in aid”

Copied from the DFID website, 3rd June 2010:

[Please post your comments below and/or on the Guardian Katine website]

“British taxpayers will see exactly how and where overseas aid money is being spent and a new independent watchdog will help ensure this aid is good value for money, International Development Secretary Andrew Mitchell has announced.

In his first major speech as Development Secretary, Mr Mitchell said he had taken the key steps towards creating an independent aid watchdog to ensure value for money. He also announced a new UKaid Transparency Guarantee to ensure that full information on all DFID’s spending is published on the departmental website.

The information will also be made available to the people who benefit from aid funding: communities and families living in the world’s poorest countries.

These moves come as part of a wider drive to refocus DFID’s work so British taxpayers’ money is spent transparently and on key priority issues such as maternal mortality and disease prevention.”

In Mr Mitchell’s speech, delivered at the Royal Society with Oxfam and Policy Exchange, he argued that overseas aid is both morally right and in Britain’s national interest but that taxpayers need to see more evidence their money is being spent well. Continue reading ““Full transparency and new independent watchdog will give UK taxpayers value for money in aid””

“Impact 2.0: Collaborative technologies connecting research and policy”

“Impact 2.0: Collaborative technologies connecting research and policy” is a two-year research project that seeks to develop a body of knowledge about the use of Web 2.0 in policy-oriented research and design in Latin America and to identify, document and promote good practices and emerging opportunities related to the use of collaborative technologies for linking research to policy.

In order to achieve this goal, Impact 2.0 has two components. The first involves three pilot projects that seek to combine current theory on the relationship between research, policy and advocacy with advances in Web 2.0/social networking technologies and practices. The second is a fund to support research into the use of Web 2.0 tools and behaviours to link research and policy.

The research fund will consider two types of proposals: Type 1 projects will involve both implementing a specific intervention using Web 2.0 to link policy and analyzing, documenting and evaluating the intervention, while Type 2 projects will document and evaluate one or more current or recent projects making use of Web 2.0 to link policy and research. A maximum of US$15,000 is available for Type 1 projects and US$7,500 for Type 2.

Independent researchers and organisations (universities, government agencies, NGOs, research centres) are invited to apply. Applicants must reside in Latin America and the projects to be performed and analyzed must also be located in and relevant to the region.

Full details are available in the attached CFPs or in Spanish at http://impacto2.comunica.org and in English at http://impacto2.comunica.org/?page_id=23

Impacto 2.0 is a project of Fundación Comunica, with the financial support of the International Development Research Centre (IDRC) and the participation of APC, DIRSI, PRODIC of the University of the Republic of Uruguay and CIESPAL.

| Bruce Girard | www.comunica.org |
| tel: +598 2 410.2979 | mobile: +598 99 189.652 |
| Dr. Pablo de María 1036 | Montevideo, Uruguay |

Trainings in Evaluation of Humanitarian Action

Date: 14-16 June, 2010 and 17-18 June, 2010
Venue: near Brussels

Evaluation of Humanitarian Action with ALNAP (Active Learning Network for Accountability and Performance in Humanitarian Action)

14-16 June, 2010

This course is an introductory-to-intermediate level course with the overall aim of making evaluations of humanitarian action more effective in contributing to the improved performance of interventions, and of improving the quality of the evaluation process. This 3-day training course is based on an update of the ALNAP training modules. The course will also introduce some new material, specifically:

  • on joint evaluations: the rationale, experience and learning to date, interwoven throughout the training programme
  • on evaluating policy as well as projects and programmes
  • on innovative learning processes as part of the evaluation process.

Continue reading “Trainings in Evaluation of Humanitarian Action”

The Implementation of the Paris Declaration on Aid Effectiveness: Where do We Stand and How to Move Forward?

Date: 15th -16th April 2010
Venue: Martin’s Central Park Hotel, Boulevard Charlemagne 80, Brussels

The Paris Declaration (PD), adopted in 2005, lays down principles and procedures for enhancing the effectiveness of aid and specifies them in twelve targets with monitorable indicators, to be achieved by 2010. The OECD’s quantitative assessment of progress towards 2010 reveals considerable delays by both donors and partner countries. With the global economic crisis and growing concern about climate change, meeting the PD targets has become a great challenge.

EADI, in cooperation with the Institute of Development Policy and Management (IOB), will hold a two-day intensive training workshop in Brussels for development professionals and practitioners. This training workshop is part of the EADI Masterclasses. Continue reading “The Implementation of the Paris Declaration on Aid Effectiveness: Where do We Stand and How to Move Forward?”