Evaluation Essentials: Methods For Conducting Sound Research

ISBN: 978-0-7879-8439-7. Paperback. 192 pages. July 2008, Jossey-Bass.
Also available via Amazon

“The book introduces students and practitioners to all necessary concepts and tools used to evaluate programs and policies. It focuses on issues that arise when evaluating programs, using those offered by non-profit and governmental organizations serving different sectors (social services, health, education, social work) as case studies. The author gives the reader a solid background in the core concepts, theories, and methods of program evaluation. Readers will learn to form evaluation questions, describe programs using program theory and program logic models, understand causation as it relates to evaluation, and work with quasi-experimental designs, grant writing, outcome measures, survey design, and sampling.”

See Table of Contents.

M&E training to be provided by Development Alternatives, India

Date: June 05-07, 2008
Venue:

Monitoring, Evaluation and Learning System (MEAL)

Introduction

The principle of “Monitor what you want to manage” has implications at two critical levels: first, each monitoring system should be customized to the needs of the project; and second, the monitoring system should be robust and flexible enough to adapt to the changing capacities of the project over time. The Monitoring, Evaluation and Learning System (MEAL) seeks to retain the basic principles of project management while allowing projects to identify, develop and evolve their own monitoring and review systems.

Continue reading “M&E training to be provided by Development Alternatives, India”

Basic Necessities Survey versus Schreiner’s Simple Poverty Scorecard

This page compares two simple methods of measuring poverty: the Basic Necessities Survey and Schreiner’s Simple Poverty Scorecard.

The comparison was prompted by an email to the MandE NEWS email list by Atta Ullah in May 2008 asking for a comparison.
Continue reading “Basic Necessities Survey versus Schreiner’s Simple Poverty Scorecard”

Planning, Monitoring and Evaluation in Development Organisations: Sharing training and facilitation experiences

Book by J. De Coninck, K. Chaturvedi, B. Haagsma, H. Griffioen, M. van der Glas

Paperback: 220 pages. Publisher: SAGE India (31 May 2008). Language: English. ISBN-10: 8178298570. ISBN-13: 978-8178298573. Also available on Amazon.

Effective planning, monitoring and evaluation (PME) is essential for organisational survival and for sustainable development, but remains a challenge for many development organisations, in spite of countless PME workshops, manuals and interventions of experts. This book presents a rich variety of real-life experiences of 20 PME trainers and facilitators from Africa, Asia, Latin America and Europe and offers suggestions to support PME processes, with a focus on civil society organisations. The authors seek to embrace a ‘total organisation approach’ to PME, one that looks at an organisation in its entirety, including its financial dimension, its environment, its collaborators and competitors, in a context informed by local and national cultures.

It looks at the implications of organisations’ specificities for their PME practice and systems: the role they play in society (as agents of transformation or as providers of basic services), whether they are young, small pioneers or large, long-established organisations, the quality of their leadership and learning processes, and the sector they work in. The central message is the need to customise PME support to these specificities. The book also includes a section on the facilitators’ approach and attitudes.

The book is meant for PME facilitators and practitioners, whether they work in NGOs, in other development organisations, or as desk officers in donor agencies. It does not pretend to prescribe solutions and pathways, but offers itself as a source of inspiration. A further-reading section guides the reader to sources the facilitators have found useful.

Authors’ weblinks:

John de Coninck: www.crossculturalfoundation.or.ug

Ben Haagsma: IC Consult: www.icconsult.nl

Khilesh Chaturvedi: www.askindia.org

CDRN: www.cdrn.or.ug (this is the organisation that rendered much support in an initial phase)

ICCO: www.icco.nl

Cordaid: www.cordaid.nl

As to the other two authors: Mariecke van der Glas is still an employee of ICCO, though working in Nicaragua; Hans Griffioen, a former colleague at IC Consult, is retired and has no website.

Systematic synthesis of community-based rehabilitation (CBR) project evaluation reports for evidence-based policy: a proof-of-concept study

Pim Kuipers, Sheila Wirz and Sally Hartley
Published: 6 March 2008
BMC International Health and Human Rights 2008, 8:3
Full text via this page > 1472-698X-8-3.pdf

Abstract
Background: This paper presents the methodology and findings from a proof-of-concept study
undertaken to explore the viability of conducting a systematic, largely qualitative synthesis of
evaluation reports emanating from Community Based Rehabilitation (CBR) projects in developing
countries.

Methods: Computer assisted thematic qualitative analysis was conducted on recommendation
sections from 37 evaluation reports, arising from 36 disability and development projects in 22
countries. Quantitative overviews and qualitative summaries of the data were developed.

Results: The methodology was found to be feasible and productive. Fifty-one themes were identified, and the most important of these are presented to illustrate the significance of the method. The relative priorities of these themes indicated that “management” issues were the
primary areas in which recommendations were made. Further analysis of themes reflected the
emphasis evaluators placed on the need for enhanced management, organisational, personnel and
administrative infrastructure in CBR projects. Evaluators consistently recommended that CBR
projects should be more connected and collaborative at governmental, organisational, political and
community levels. The synthesis also noted that evaluators questioned the emphasis in CBR on
project expansion and income generation.

Conclusion: The application of the synthesis methodology utilised in this proof-of-concept study
was found to be potentially very beneficial for future research in CBR, and indeed in any area within
health services or international development in which evaluation reports, rather than formal
“research evidence”, are the primary source material. The proof-of-concept study identified a number
of limitations which are outlined. Based on the conclusions of 37 evaluation reports, future policy
frameworks and implementation strategies in CBR should include a stronger emphasis on technical,
organisational, administrative and personnel aspects of management and strategic leadership.
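The “quantitative overview” step described in the abstract — tallying how often each coded theme occurs across the evaluation reports — can be sketched roughly as follows. This is an illustrative sketch only: the theme labels and coded data below are hypothetical, not taken from the study.

```python
from collections import Counter

# Hypothetical coded data: each report's recommendation section has been
# tagged with one or more themes during the qualitative analysis stage.
coded_reports = [
    ["management", "collaboration"],
    ["management", "training"],
    ["collaboration", "income generation"],
]

# Quantitative overview: count how many reports mention each theme,
# then rank themes by frequency to surface relative priorities.
theme_counts = Counter(theme for themes in coded_reports for theme in themes)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Ranking theme frequencies in this way is what allows a synthesis to report, for instance, that “management” issues were the most common area of recommendation.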

Round Table on Impact Evaluation: meeting and online

The two-day gathering entitled Mapping the Measures of Success: An Expert Round Table on Impact Evaluation for Strengthening Governance of WASH Services has been officially opened. The gathering and online discussion will run until Wednesday 14 May 2008. IRC will send a hard copy of the Thematic Overview Paper produced from the inputs to this event to the active participants of the online discussion.

The purpose of the Round Table is to map the existing knowledge, practices, experiences and challenges related to evaluating the impact of development interventions in the context of governance of WASH services.

Topics

This event will bring together around 30 experts (by invitation) from different organisations and sectors (universities, (I)NGOs, water sector implementing agencies, resource centres as well as representatives of government and financial agencies) to share knowledge, experience and ideas through a round table platform. Through this facilitated learning and sharing event, the organisers seek to give voice to different perspectives on the following topics:

1. Why do we do impact evaluation? Who is the consumer of impact evaluation results and for what purpose do they need the information?
2. What are the main challenges when trying to identify and measure impact of an intervention focusing on development or institutional change?
3. What can be defined as meaningful ‘measures of success’ or indicators of a project that seeks to strengthen local governance?
4. What can be defined as suitable methods of a project that seeks to strengthen local governance?
5. What limitations and gaps exist in the current impact evaluation discourse?

Online discussion

In order to enable the participation of interested parties not in attendance, a facilitated discussion will also be conducted online through a dedicated blog. Through this it will be possible to comment on the ongoing discussion and access the materials and proceedings related to this event. For more information: Deirdre Casella, e-mail, or Sandra Segura, e-mail.

Advancing the Standards, Practice, and Use of Evaluation

Evaluation News, Operations Evaluation Department, Asian Development Bank, 7 May 2008

MANILA, PHILIPPINES – On 21–24 April 2008, the Evaluation Cooperation Group (ECG) met in Tunis. Its working groups discussed the standards, practice, and use of evaluation vis-à-vis the private sector, country strategies and programs, technical assistance, and the public sector. They examined rating scales and criteria, the framework for the review of evaluation functions, and communications. A plenary session deliberated on these topics, and a workshop looked at public-private partnerships.

The ECG … includes the African Development Bank, ADB, the European Bank for Reconstruction and Development, the European Investment Bank, the Inter-American Development Bank, the International Monetary Fund, and the World Bank Group. The United Nations Evaluation Group and the Development Assistance Committee Working Group on Evaluation became permanent observer members in 2001 and were joined at the meeting by three new observers and prospective members: the Council of Europe Development Bank, the International Fund for Agricultural Development, and the Islamic Development Bank.

Miradi – adaptive management software for conservation projects

(Referred to by Richard Margoluis, on Mande NEWS email list, 8 May 2008)

Miradi – a Swahili word meaning “project” or “goal” – is a user-friendly program that allows nature conservation practitioners to design, manage, monitor, and learn from their projects to more effectively meet their conservation goals. The program guides users through a series of step-by-step interview wizards, based on the Open Standards for the Practice of Conservation. As practitioners go through these steps, Miradi helps them to define their project scope, and design conceptual models and spatial maps of their project site. The software helps teams to prioritize threats, develop objectives and actions, and select monitoring indicators to assess the effectiveness of their strategies. Miradi also supports the development of workplans, budgets, and other tools to help practitioners implement and manage their project. Users can export Miradi project data to donor reports or, in the future, to a central database to share their information with other practitioners.

2008 Reader on Measuring and Reporting Results

(From the Donor Committee for Enterprise Development website)

Description

In the absence of much discussion, the subject of measuring and reporting results remains rather sensitive, and one that people therefore try to avoid. Meanwhile, external pressures for more information are growing; they come from donors (e.g. through the Paris Declaration and the MDG deadline), new players and aid models (e.g. social investors) and increased visibility (e.g. Live8). This Reader argues that practitioners need to seize the initiative and develop answers before someone else does it for them. In the absence of good data, critics will always be able to say: ‘if you cannot measure it, maybe it is not there’.

A brief overview is therefore given of current understanding in the field, including particularly the terms, indicators and methodologies in use. It is argued that multi-agency agreement in these areas would yield very important benefits, in addition to an approximate comparison of performance; for example:

– agencies could add up impacts achieved across all of their country programmes, enabling them to report results for the agency as a whole;
– agencies would also be able to make informed choices about which intervention strategies to fund.

Examples are given of impacts measured in a standard format, including for example cost per job created; since the resulting numbers are very different in magnitude, they make a rational conversation about strategy choice possible – even if they are only correct to within +/- 50%. Agreement now needs to be built around the key parameters for formulating these numbers, including for example the multipliers to use for indirect impacts.
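As a rough illustration of the kind of standard-format indicator the Reader discusses, cost per job created can be computed from programme cost and jobs counted, with an agreed multiplier applied for indirect impacts. The figures and multiplier below are hypothetical, not taken from the Reader; as the text notes, the key parameters (such as the multiplier) are exactly what agencies would need to agree on.

```python
def cost_per_job(programme_cost, direct_jobs, indirect_multiplier=1.5):
    """Cost per job created, counting indirect jobs via a multiplier.

    The multiplier value (1.5 here) is an assumption for illustration;
    agreeing such parameters across agencies is the open question.
    """
    total_jobs = direct_jobs * indirect_multiplier
    return programme_cost / total_jobs

# Hypothetical figures: a $2m programme creating 400 direct jobs.
print(f"${cost_per_job(2_000_000, 400):,.2f} per job")
```

Even if such numbers are only correct to within +/- 50%, differences of an order of magnitude between intervention strategies would still be visible, which is what makes a rational conversation about strategy choice possible.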

Approximate measures do not replace the need for rigorous impact assessments. But agreement between agencies on a small number of indicators, and their application across a wide range of interventions, would win recognition for the achievements of the PSD community. Affordable mechanisms are needed, to ensure that the numbers produced are credible – for example through certification of the methodologies used.

Associated documents
» 2008 PSD Reader on Measuring and Reporting Results, by Jim Tanburn (650 kB)
» Spanish: Documento de trabajo 2008 sobre el desarrollo del sector privado: Medición e informe de resultados (1 Mb)
» French: Document de base 2008 sur le développement du secteur privé : Quantifier et rapporter les résultats (902 kB)

Jim Tanburn is Coordinator of the Donor Committee for Enterprise Development (www.enterprise-development.org) and also runs a couple of inter-agency databases (www.Value-Chains.org and www.BusinessEnvironment.org).

Quality COMPAS and Dynamic COMPAS course organized by Groupe URD at its head office

(from Alnap email list)

Training Course (in English): The Dynamic COMPAS and the Quality COMPAS

(quality assurance method and software for humanitarian projects)

23 – 27 June 2008 in Plaisians (Provence)

The Quality COMPAS method and Dynamic COMPAS software are project and information management tools for humanitarian projects. They will be the subject of a short training course to be held in Plaisians (Groupe URD’s head office) from 23rd to 27th June 2008.

The course will be conducted in English.

Project and information management is essential to ensure the quality of humanitarian projects. Many of the weaknesses which have been identified in projects over the last decade do not come from a lack of technical knowledge on the part of humanitarian actors but rather because qualitative factors have not been properly taken into account.

Drawing its content from the COMPAS method, the course will cover subjects such as (1) conducting a situation analysis which goes further than a simple needs analysis, (2) designing a project beyond the logical framework, (3) defining objectives and indicators in keeping with all the quality criteria, (4) developing and implementing a monitoring system, (5) understanding the difference between monitoring and evaluation, etc.

The course is organised around a case study which gives participants practical experience of quality management and of using the COMPAS method and the Dynamic COMPAS software (to download it and/or to learn more about the COMPAS, see http://www.compasqualite.org/en/index/index.php).

This course has been designed for national and international aid workers involved in project management activities like needs assessment, design, monitoring, self-evaluation and evaluation.

Do not hesitate to contact us for further information.

Pierre Brunet

Training Unit Coordinator
