RESOURCE PACK ON SYSTEMATIZATION OF EXPERIENCES

ActionAid International, 2009, 104 pages. Available as pdf (3.39 MB)

See also the associated AAI website on systematization

Systematization is a methodology that allows us to:

  • Organise and document what we have learnt through our work
  • Better understand the impact of our work and the ways in which change happens
  • Develop deeper understanding about our work and the challenges we face to inform new ways of working
  • Capture and communicate the complexity and richness of our work
Systematization “helps people involved in different kinds of practice to organize and communicate what they have learned. We are talking about … so-called … lessons learned, about which everybody talks nowadays, but are not so easy to produce.” (AAI systematization resource pack, p. 1, 2009)

Critique of Governance Assessment Applications

GSDRC Helpdesk Research Report by Sumedh Rao, Governance and Social Development Resource Centre, July 2010. 16 pages. Available as pdf

Query: Identify the key literature that critiques the use and application of governance assessments. Enquirer: DFID

Contents
1. Overview
2. General critiques
3. Critiques of measurement
4. Worldwide Governance Indicators (WGI)
5. African Peer Review Mechanism (APRM)
6. Other assessments
7. Donor Guidance
8. Initiatives for improving assessments

Including a bibliography of 39 annotated references.

Narrative Research

David Snowden, 2010. 21 pages. Available as pdf, from the Cognitive Edge site

“Narrative Research, … lays the foundation for the use of narrative research and inquiry methods not only in the project but broadly in the field of research and consultancy…. Elements of it together with general material on Complexity Theory will be published as a chapter in a book on Naturalising Decision Making in the Fall of 2010.”

Listen First: a pilot system for managing downward accountability in NGOs

Alex Jacobs and Robyn Wilford. Development in Practice, Volume 20, Number 7, September 2010. Available as pdf

“Abstract: This article reports on a research project intended to develop systematic ways of managing downward accountability in an international NGO. Innovative tools were developed and trialled in six countries. The tools comprised a framework, defining downward accountability in practical terms, and three management processes. They were successfully used to (a) encourage staff to improve downward accountability in ways relevant to their context; (b) hear beneficiaries’ assessments of the level of accountability achieved and the value of the NGO’s work; and (c) generate quantified performance summaries for managers. Taken together, they form a coherent draft management system. Areas for further research are identified.”
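
As a rough illustration of the step from (b) to (c), the sketch below aggregates beneficiaries’ ratings into per-dimension summary scores for managers. It is a minimal sketch only: the dimension names, the 1-5 scale and the mean-based aggregation are assumptions made for illustration, not the actual Listen First framework or tools.

    from statistics import mean

    # Hypothetical accountability dimensions; the real Listen First
    # framework defines downward accountability in its own practical terms.
    DIMENSIONS = ["information_sharing", "participation", "responsiveness"]

    def summarise(ratings):
        """Mean beneficiary rating (1-5) per dimension, as a manager's summary."""
        return {d: round(mean(r[d] for r in ratings), 2) for d in DIMENSIONS}

    # Three beneficiaries rate one project (invented data).
    ratings = [
        {"information_sharing": 4, "participation": 3, "responsiveness": 5},
        {"information_sharing": 2, "participation": 4, "responsiveness": 3},
        {"information_sharing": 3, "participation": 3, "responsiveness": 4},
    ]
    print(summarise(ratings))
    # {'information_sharing': 3.0, 'participation': 3.33, 'responsiveness': 4.0}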

There’s more related material at www.listenfirst.org.

The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance

Alnoor Ebrahim and V. Kasturi Rangan, Social Enterprise Initiative, Harvard Business School (2010). Working Paper 10-099. Available as pdf

ABSTRACT

“Leaders of organizations in the social sector are under growing pressure to demonstrate their impacts on pressing societal problems such as global poverty. We review the debates around performance and impact, drawing on three literatures: strategic philanthropy, nonprofit management, and international development. We then develop a contingency framework for measuring results, suggesting that some organizations should measure long-term impacts, while others should focus on shorter-term outputs and outcomes. In closing, we discuss the implications of our analysis for future research on performance management.”

Smart Tools: For evaluating information projects, products and services

Produced by CTA, KIT and IICD. 2nd edition (2009)

PDF version available online

“About the Toolkit

The Smart Toolkit focuses on the evaluation of information projects, products and services from a learning perspective. It looks at evaluation within the context of the overall project cycle, from project planning and implementation to monitoring, evaluation and impact assessment, and then at the evaluation process itself, the tools involved and examples of their application. The theme running throughout the toolkit is:

Participatory evaluation for learning and impact.”

EVALUATING DEVELOPMENT CO-OPERATION: SUMMARY OF KEY NORMS AND STANDARDS. SECOND EDITION

OECD DAC NETWORK ON DEVELOPMENT EVALUATION, February 2010. Download a pdf copy

“The DAC Network on Development Evaluation is a unique international forum that brings together evaluation managers and specialists from development co-operation agencies in OECD member countries and multilateral development institutions. Its goal is to increase the effectiveness of international development programmes by supporting robust, informed and independent evaluation.

A key component of the Network’s mission is to develop internationally agreed norms and standards to strengthen evaluation policy and practice. Shared standards contribute to harmonised approaches in line with the commitments of the Paris Declaration on Aid Effectiveness. The body of norms and standards is based on experience, and evolves over time to fit the changing aid environment. These principles serve as an international reference point, guiding efforts to improve development results through high quality evaluation.

The norms and standards summarised here should be applied discerningly and adapted carefully to fit the purpose, object and context of each evaluation. This summary document is not an exhaustive evaluation manual. Readers are encouraged to refer to the complete texts available on the DAC Network on Development Evaluation’s website: www.oecd.org/dac/evaluationnetwork. Several of the texts are also available in other languages.”

DEVELOPMENT EVALUATION RESOURCES AND SYSTEMS – A STUDY OF NETWORK MEMBERS

The DAC Network on Development Evaluation, OECD, 2010. Download a pdf copy

“Introduction

In June 2009, the Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC) Network on Development Evaluation agreed to undertake a study of its members’ evaluation systems and resources. The study aims to take stock of how the evaluation function is managed and resourced in development agencies and to identify major trends and current challenges in development evaluation. The purpose is to inform efforts to strengthen evaluation systems in order to contribute to improved accountability and better development results. It will be of interest to DAC members and evaluation experts, as well as to development actors in emerging donor and partner countries.

To capture a broad view of how evaluation works in development agencies, core elements of the evaluation function are covered, including: the mandate for central evaluation units, the institutional position of evaluation, evaluation funding and human resources, independence of the evaluation process, quality assurance mechanisms, co-ordination with other donors and partner countries, systems to facilitate the use of evaluation findings and support to partner country capacity development.

This report covers the member agencies of the OECD DAC Network on Development Evaluation. See Box 1 for a full list of member agencies and abbreviations. Covering all major bilateral providers of development assistance and seven important multilateral development banks, the present analysis therefore provides a comprehensive view of current policy and practice in the evaluation of development assistance.

The study is split into two sections: Section I contains an analysis of overall trends and general practices, drawing on past work of the DAC and its normative work on development evaluation. Section II provides an individual factual profile for each member agency, highlighting its institutional set-up and resources.”

Measuring Empowerment? Ask Them

Quantifying qualitative outcomes from people’s own analysis: insights for results-based management from the experience of a social movement in Bangladesh. Dee Jupp and Sohel Ibn Ali, with a contribution from Carlos Barahona. Sida Studies in Evaluation, 2010. Download pdf

Preamble

Participation has been widely taken up as an essential element of development, but participation for what purpose? Many feel that its acceptance, which has extended to even the most conventional of institutions such as the international development banks, has resulted in it losing its teeth in terms of the original ideology of being able to empower those living in poverty and to challenge power relations.

The more recent emergence of the rights-based approach discourse has the potential to restore the ‘bite’ to participation and to re-politicise development. Enshrined in universal declarations and conventions, it offers a palatable route to accommodating radicalism and creating conditions for emancipatory and transformational change, particularly for people living in poverty. But an internet search on how to measure the impact of these approaches yields a disappointing harvest of experience. There is a proliferation of debate on the origins and processes, the motivations and pitfalls of rights-based programming but little on how to know when or if it works. The discourse is messy and confusing and leads many to hold up their hands in despair and declare that outcomes are intangible, contextual, individual, behavioural, relational and fundamentally un-quantifiable!

As a consequence, results-based management pundits are resorting to substantive measurement of products, services and goods which demonstrate outputs, and relying on perception studies to measure outcomes.

However, there is another way. Quantitative analyses of qualitative assessments of outcomes and impacts can be undertaken with relative ease and at low cost. It is possible to measure what many regard as unmeasurable.

This publication suggests that steps in the process of attainment of rights and the process of empowerment are easy to identify and measure for those active in the struggle to achieve them. It is our etic perspectives that make the whole thing difficult. When we apply normative frames of reference, we inevitably impose our values and our notions of democracy and citizen engagement rather than embracing people’s own context-based experience of empowerment.

This paper presents the experience of one social movement in Bangladesh, which managed to find a way to measure empowerment by letting the members themselves explain what benefits they acquired from the Movement and by developing a means to measure change over time. These measures, which are primarily of use to the members, have then been subjected to numerical analysis outside of the village environment to provide convincing quantitative data, which satisfies the demands of results-based management.
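
To make the kind of numerical analysis described above concrete, here is a minimal sketch, assuming members score their own indicator statements as achieved (1) or not yet achieved (0) at two points in time. The statements and the percentage-based summary below are invented for illustration; they are not the Movement’s actual instrument, which is described in the publication itself.

    # Invented example of members' own indicator statements, scored by the
    # group as true (1) or not yet true (0) at two points in time.
    baseline = {"we speak at village meetings": 0,
                "we can access local services": 0,
                "we resolve disputes ourselves": 1}
    follow_up = {"we speak at village meetings": 1,
                 "we can access local services": 1,
                 "we resolve disputes ourselves": 1}

    def percent_achieved(scores):
        """Share of the group's own indicators currently achieved."""
        return 100 * sum(scores.values()) / len(scores)

    change = percent_achieved(follow_up) - percent_achieved(baseline)
    print(f"baseline {percent_achieved(baseline):.0f}%, "
          f"follow-up {percent_achieved(follow_up):.0f}%, "
          f"change +{change:.0f} points")
    # baseline 33%, follow-up 100%, change +67 points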

The paper is aimed primarily at those who are excited by the possibilities of rights-based approaches but who are concerned about proving that their investment results in measurable and attributable change. The experience described here should build confidence that transparency, rigour and reliability can be assured in community-led approaches to monitoring and evaluation without distorting the original purpose, which is a system of reflection for the community members themselves. Hopefully, the reader will feel empowered to challenge the sceptics.

Dee Jupp and Sohel Ibn Ali

Guidance on Terms of Reference for an Evaluation: A List

This is the beginning of a new page that will list various sources of guidance on the development of Terms of Reference for an evaluation.

If you have suggestions for any additions (or edits) to this list please use the Comment function below.

Please also see the hundreds of examples of actual ToRs (and related docs) in the MandE NEWS Jobs Forum.

PS: Jim Rugh has advised me (5 June 2010) that “two colleagues at the Evaluation Center at Western Michigan University are undertaking an extensive review of RFPs / ToRs they’ve seen posted on various listservs; they intend to publish a synthesis, critique and recommendations for criteria to make them more realistic and appropriate.”