Narrative Research

David Snowden, 2010. 21 pages. Available as a pdf from the Cognitive Edge site

“Narrative Research, … lays the foundation for the use of narrative research and inquiry methods not only in the project but broadly in the field of research and consultancy…. Elements of it together with general material on Complexity Theory will be published as a chapter in a book on Naturalising Decision Making in the Fall of 2010.”

Listen First: a pilot system for managing downward accountability in NGOs

Alex Jacobs and Robyn Wilford. Development in Practice, Volume 20, Number 7, September 2010. Available as a pdf

“Abstract: This article reports on a research project intended to develop systematic ways of managing downward accountability in an international NGO. Innovative tools were developed and trialled in six countries. The tools comprised a framework, defining downward accountability in practical terms, and three management processes. They were successfully used to
(a) encourage staff to improve downward accountability in ways relevant to their context;
(b) hear beneficiaries’ assessments of the level of accountability achieved and the value of the NGO’s work; and
(c) generate quantified performance summaries for managers.
Taken together, they form a coherent draft management system. Areas for further research are identified.”

There is more related material at www.listenfirst.org.

The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance

Alnoor Ebrahim and V. Kasturi Rangan, Social Enterprise Initiative, Harvard Business School (2010). Working Paper 10-099. Available as a pdf

ABSTRACT

“Leaders of organizations in the social sector are under growing pressure to demonstrate their impacts on pressing societal problems such as global poverty. We review the debates around performance and impact, drawing on three literatures: strategic philanthropy, nonprofit management, and international development. We then develop a contingency framework for measuring results, suggesting that some organizations should measure long-term impacts, while others should focus on shorter-term outputs and outcomes. In closing, we discuss the implications of our analysis for future research on performance management.”

Smart Tools: For evaluating information projects, products and services

Produced by CTA, KIT and IICD. 2nd edition (2009)

PDF version available online

“About the Toolkit

The Smart Toolkit focuses on the evaluation of information projects, products and services from a learning perspective. It looks at evaluation within the context of the overall project cycle, from project planning and implementation to monitoring, evaluation and impact assessment, and then at the evaluation process itself, the tools involved and examples of their application. The theme running throughout the toolkit is:

Participatory evaluation for learning and impact.”

Evaluating Development Co-operation: Summary of Key Norms and Standards. Second Edition

OECD DAC Network on Development Evaluation, February 2010. Download a pdf copy

“The DAC Network on Development Evaluation is a unique international forum that brings together evaluation managers and specialists from development co-operation agencies in OECD member countries and multilateral development institutions. Its goal is to increase the effectiveness of international development programmes by supporting robust, informed and independent evaluation.

A key component of the Network’s mission is to develop internationally agreed norms and standards to strengthen evaluation policy and practice. Shared standards contribute to harmonised approaches in line with the commitments of the Paris Declaration on Aid Effectiveness. The body of norms and standards is based on experience, and evolves over time to fit the changing aid environment. These principles serve as an international reference point, guiding efforts to improve development results through high quality evaluation.

The norms and standards summarised here should be applied discerningly and adapted carefully to fit the purpose, object and context of each evaluation. This summary document is not an exhaustive evaluation manual. Readers are encouraged to refer to the complete texts available on the DAC Network on Development Evaluation’s website: www.oecd.org/dac/evaluationnetwork. Several of the texts are also available in other languages.”

Development Evaluation Resources and Systems – A Study of Network Members

The DAC Network on Development Evaluation, OECD, 2010. Download a pdf copy

“Introduction

In June 2009, the Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC) Network on Development Evaluation agreed to undertake a study of its members’ evaluation systems and resources. The study aims to take stock of how the evaluation function is managed and resourced in development agencies and to identify major trends and current challenges in development evaluation. The purpose is to inform efforts to strengthen evaluation systems in order to contribute to improved accountability and better development results. It will be of interest to DAC members and evaluation experts, as well as to development actors in emerging donor and partner countries.

To capture a broad view of how evaluation works in development agencies, core elements of the evaluation function are covered, including: the mandate for central evaluation units, the institutional position of evaluation, evaluation funding and human resources, independence of the evaluation process, quality assurance mechanisms, co-ordination with other donors and partner countries, systems to facilitate the use of evaluation findings and support to partner country capacity development.

This report covers the member agencies of the OECD DAC Network on Development Evaluation. See Box 1 for a full list of member agencies and abbreviations. Covering all major bilateral providers of development assistance and seven important multilateral development banks, the present analysis therefore provides a comprehensive view of current policy and practice in the evaluation of development assistance.

The study is split into two sections: section I contains an analysis of overall trends and general practices, drawing on past work of the DAC and its normative work on development evaluation. Section II provides an individual factual profile for each member agency, highlighting its institutional set-up and resources.”

Measuring Empowerment? Ask Them

Quantifying qualitative outcomes from people’s own analysis. Insights for results-based management from the experience of a social movement in Bangladesh. Dee Jupp and Sohel Ibn Ali, with a contribution from Carlos Barahona. 2010: Sida Studies in Evaluation. Download pdf

Preamble

Participation has been widely taken up as an essential element of development, but participation for what purpose? Many feel that its acceptance, which has extended to even the most conventional of institutions such as the international development banks, has resulted in it losing its teeth in terms of the original ideology of being able to empower those living in poverty and to challenge power relations.

The more recent emergence of the rights-based approach discourse has the potential to restore the ‘bite’ to participation and to re-politicise development. Enshrined in universal declarations and conventions, it offers a palatable route to accommodating radicalism and creating conditions for emancipatory and transformational change, particularly for people living in poverty. But an internet search on how to measure the impact of these approaches yields a disappointing harvest of experience. There is a proliferation of debate on the origins and processes, the motivations and pitfalls of rights-based programming but little on how to know when or if it works. The discourse is messy and confusing and leads many to hold up their hands in despair and declare that outcomes are intangible, contextual, individual, behavioural, relational and fundamentally un-quantifiable!

As a consequence, results-based management pundits are resorting to substantive measurement of products, services and goods which demonstrate outputs, and relying on perception studies to measure outcomes.

However, there is another way. Quantitative analyses of qualitative assessments of outcomes and impacts can be undertaken with relative ease and at low cost. It is possible to measure what many regard as unmeasurable.

This publication suggests that steps in the process of attainment of rights and the process of empowerment are easy to identify and measure for those active in the struggle to achieve them. It is our etic perspectives that make the whole thing difficult. When we apply normative frames of reference, we inevitably impose our values and our notions of democracy and citizen engagement rather than embracing people’s own context-based experience of empowerment.

This paper presents the experience of one social movement in Bangladesh, which managed to find a way to measure empowerment by letting the members themselves explain what benefits they acquired from the Movement and by developing a means to measure change over time. These measures, which are primarily of use to the members, have then been subjected to numerical analysis outside of the village environment to provide convincing quantitative data, which satisfies the demands of results-based management.

The paper is aimed primarily at those who are excited by the possibilities of rights-based approaches but who are concerned about proving that their investment results in measurable and attributable change. The experience described here should build confidence that transparency, rigour and reliability can be assured in community-led approaches to monitoring and evaluation without distorting the original purpose, which is a system of reflection for the community members themselves. Hopefully, the reader will feel empowered to challenge the sceptics.

Dee Jupp and Sohel Ibn Ali

Guidance on Terms of Reference for an Evaluation: A List

This is the beginning of a new page that will list various sources of guidance on the development of Terms of Reference for an evaluation.

If you have suggestions for any additions (or edits) to this list please use the Comment function below.

Please also see the hundreds of examples of actual ToRs (and related docs) in the MandE NEWS Jobs Forum.

PS: Jim Rugh has advised me (5 June 2010) that “two colleagues at the Evaluation Center at Western Michigan University are undertaking an extensive review of RFPs / ToRs they’ve seen posted on various listservs; they intend to publish a synthesis, critique and recommendations for criteria to make them more realistic and appropriate.”

Next Generation Network Evaluation

Paper published June 2010. Produced by Innovations for Scaling Impact and Keystone Accountability. Funded by the International Development Research Centre and the Packard Foundation. (Download pdf version here)

“Purpose: This paper reviews the current field of network monitoring and evaluation with the goal of identifying where progress has been made and where further work is still needed. It proposes a framework for network impacts planning, assessment, reporting and learning that can help to close some of the current gaps in network evaluation while building on the advances that have been made. This document is written for practitioners undertaking network evaluation and foundation program staff working to support networks.”

Beyond Logframe: Using Systems Concepts in Evaluation

March 2010. Nobuko Fujita (Ed.), Foundation for Advanced Studies on International Development (FASID). Available as pdf

“Editor’s Note: The 2010 Issues and Prospects of Evaluations for International Development employs systems concepts as clues to re-assess the conventional ways of conducting evaluations and to explore how development evaluation can potentially be made more useful.

In Japan, development evaluation predominantly relies on the Logical Framework (logframe) when conducting evaluations. Evaluations based on a logframe often face difficulties. One such difficulty arises from the futile attempt to develop an evaluation framework based on a logframe, which, in many cases, was prepared as part of the early-stage planning of the project and which then does not necessarily reflect a project’s real situation at the time of evaluation. Although a logframe can be utilised initially as a tentative project plan, logframes are rarely revised even when the situation has changed. By the end of the project, the original logframe may not be an accurate embodiment of what the project is about and therefore logframes do not particularly help in terminal or ex-post evaluations.

Still, having been institutionalized by clients, logframe-based evaluations are common practice and in extreme cases, evaluators face the danger of evaluating the logframe instead of the actual project. Although widely used for its simplicity, logframes can end up becoming a cumbersome tool, or even a hindrance to evaluation.

Various attempts have been made to overcome the limitations of the logframe, and some aid organizations such as USAID, UNDP, CIDA and the World Bank have shifted from the logframe to Results-Based Management (RBM). Now GTZ is in the process of shifting to a new project management approach based on RBM and systems ideas.

In the first article, “Beyond logframe: Critique, Variations and Alternatives,” Richard Hummelbrunner, an evaluator/consultant from Austria, sums up the critique of the logframe and the Logical Framework Approach (LFA), and explores some variations employed to overcome specific shortcomings of LFA. He then outlines a systemic alternative to the logframe and introduces the new GTZ management model for sustainable development called “Capacity WORKS.” Richard has dealt with LFA and possible alternatives to LFA at various points in his career, and he is currently involved in GTZ’s rollout of Capacity WORKS as it becomes the standard management model for all BMZ projects and programmes.

What does he mean by “systemic alternative”? In the second article, “Systems Thinking and Capacity Development in the International Arena,” Bob Williams, a consultant and an expert in systems concepts, explains what “thinking systemically” is about and how it might help evaluation. He boils down systems ideas into three core concepts (inter-relationships, perspectives, and boundaries), and relates these concepts to various systems methods.

In December 2009, FASID offered a training course and a seminar on this topic in Tokyo. Through the exchange of numerous e-mails with the instructors prior to the seminar, it occurred to me that the concepts might be more easily understood if presented as a conversation. That is what we tried to do in the third article, “Using Systems Concepts in Evaluation – A Dialogue with Patricia Rogers and Bob Williams.” These two instructors of the FASID training course and workshop explain in simple conversational style where and how we can start applying systems concepts in development evaluation.

This issue also carries a report of two collaborative evaluations of Japanese Official Development Assistance (ODA) projects. The first case presents an innovative joint evaluation conducted collaboratively with Vietnamese stakeholders. The evaluation took place in 2009–2010 as the last year of a three-year evaluation capacity development project coordinated by the Japan International Cooperation Agency. The second case covers a joint evaluation study of another Japanese ODA project in Lao PDR with a local Lao administration, for which neither the logframe nor the OECD DAC five criteria were used. Instead, an evaluation framework was developed from scratch, based entirely on the beneficiaries’ interests and perspectives. In both cases, a partner country’s participation in the evaluation necessitated considerable changes in perspectives of evaluation practice. I hope they provide examples of how boundaries and perspectives, as discussed theoretically in the first three articles, relate to development evaluation in practice.”
