A brief summary of, and links to papers from, the IDEAS Global Assembly held in Johannesburg, March 2009

provided by Denis Jobin (IDEAS VP 2006-2009), in a posting on the MandE NEWS email list…

Getting to Results: Evaluation Capacity Building and Development
The IDEAS Global Assembly, Birchwood Hotel, Johannesburg, South Africa, March 17-20, 2009.

The International Development Evaluation Association’s Global Assembly focused on the issues involved in evaluation capacity building, and how such efforts can strengthen the evidence available to countries to inform their own development. Capacity building has been recognized for a decade or more as crucial to development. The measurement (and management) issues embedded in generating and disseminating evaluative information are now understood to be critical to informing decision making. The Global Assembly explored these topics with the objective of clarifying present knowledge on evaluation capacity building, learning lessons from development evaluation experience, and understanding the challenges faced by development evaluators in taking these efforts forward.

The theme of the Global Assembly underscores the role that evaluative knowledge can play in development in general, and the importance of building and sustaining the capacity to bring evaluative knowledge into the decision-making process so as to enhance the achievement of results.

The papers presented at the Global Assembly may be grouped under several themes. Links are provided below to those papers which we have been able to make available.

Building Evaluation Capacity in Response to the Paris Declaration and the Accra Agenda for Action

This strand focuses on the commitments made in the Paris Declaration and the Accra Agenda for Action to strengthen monitoring and evaluation systems in order to track development performance. Documents

Institutional Capacity Building

When the focus is on institutional capacity building, issues of supply versus demand in the public, private, and NGO/CSO sectors of society are immediately apparent. The role of evaluation associations, standards for evaluation performance, the role of credentials, national evaluation policies, and incentives for quality evaluations are all significant issues. Documents

Regional Responses/Regional Strategies for Building Evaluation Capacity

This topic examines regional efforts and strategies being deployed to strengthen evaluation capacity. Documents

Country/Sector-Specific Responses for Building Evaluation Capacity

Case studies from a range of countries and organizations provide insight into different ways of building capacity. Documents

Evaluation Capacity Building — Tools, Techniques, and Strategies

Capacity building involves multiple tools, techniques, and strategies. This topic examines the success (or not) of these different components of capacity building. Documents

The Measurement and Assessment of Evaluation Capacity Building

This topic examines experiences of, and efforts at, actually evaluating capacity building. The assessments describe and analyse the methodological choices and their implications. Qualitative, quantitative, and mixed methods are analysed, and any unintended consequences are examined. Documents

Country-Led Evaluation

A series of case studies analyses and evaluates monitoring and evaluation efforts in a range of countries. Documents

Meta-Evaluation Revisited, by Michael Scriven

An editorial in the Journal of MultiDisciplinary Evaluation, Volume 5, Number 11, January 2009

In this short and readable paper Michael Scriven addresses “three categories of issues that arise about meta-evaluation: (i) exactly what is it; (ii) how is it justified; (iii) when and how should it be used? In the following, I say something about all three—definition, justification, and application.” He then makes seven main points, each of which he elaborates on in some detail:

  1. Meta-evaluation is the consultant’s version of peer review.
  2. Meta-evaluation is the proof that evaluators believe what they say.
  3. In meta-evaluation, as in all evaluation, check the pulse before trimming the nails.
  4. A partial meta-evaluation is better than none.
  5. Make the most of meta-evaluation.
  6. Any systematic approach to evaluation—in other words, almost any kind of professional evaluation—automatically provides a systematic basis for meta-evaluation.
  7. Fundamentally, meta-evaluation, like evaluation, is simply an extension of common sense—and that’s the first defense to use against the suggestion that it’s some kind of fancy academic embellishment.

Impact Evaluation of Population, Health and Nutrition Programs

Date: October 5 – 16, 2009
Venue:  Public Health Foundation of India, New Delhi, India

USAID’s MEASURE Evaluation Project is pleased to announce the regional workshop on “Impact Evaluation of Population, Health and Nutrition Programs” for English-speaking professionals. The workshop is sponsored by the Public Health Foundation of India (PHFI), New Delhi, India, in collaboration with MEASURE Evaluation. The two-week course will be held October 5 – 16, 2009, in New Delhi.

A Regional Workshop on Monitoring and Evaluation of HIV/AIDS Programs

Date: August 3-14, 2009
Venue: Pretoria, South Africa

USAID’s MEASURE Evaluation Project is pleased to announce a training opportunity for the Anglophone Africa region. The School of Health Systems and Public Health at the University of Pretoria, South Africa, is offering a regional workshop on Monitoring and Evaluation of HIV/AIDS Programs. This two-week course will take place August 3 – 14, 2009, and will be taught in English.

IFAD Evaluation Manual: Methodology and Processes

produced by the Office of Evaluation, April 2009

“The evaluation methodology in use at the IFAD Office of Evaluation is captured in the new OE Evaluation Manual, which is based on the principles set out in IFAD’s evaluation policy, approved by IFAD’s Executive Board in April 2003.

The main purpose of the manual is to ensure consistency, rigour and transparency across independent evaluations, and to enhance OE’s effectiveness and quality of work.”

Quantification of qualitative data in the water sector: The challenges

by Christine Sijbesma and Leonie Postma

Published in Water International, Volume 33, Issue 2, June 2008, pp. 150-161 (full text here)

Abstract

Participatory methods are increasingly used in water-related development and management. Most information gathered with such methods is qualitative. The general view is that such information cannot be aggregated and is therefore less interesting for managers. This paper shows that the opposite can be the case. It describes a participatory methodology that quantifies qualitative information for management at all levels. The methodology was developed to assess the sustainability of community-managed improved water supplies, sanitation and hygiene. It allows outcomes to be correlated with processes of participation and gender and social equity, and so assesses where changes are needed. The paper also describes how elements of positivistic research, such as sampling, were included. Application in over 15 countries showed that such quantified qualitative methods are an important supplement or alternative to social surveys. While the new approach makes statistical analysis possible, it also increases the risk that participatory methods are used extractively when the demand for data on outcomes dominates over the quality of the process. As a result, the collection of qualitative information and the use of the data for community action and adaptive management get lost. However, when properly applied, quantification offers interesting new opportunities. It makes participatory methods more attractive to large programmes and gives practitioners and donors a new chance to adopt standards of rigour and ethics, and so combine quantification with quality protection and developmental use.
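To make the quantification step concrete, here is a minimal sketch. It is our illustration, not the authors’ instrument: the rating ladder and scorecard data are hypothetical. Ordinal ratings from community scorecards are mapped onto numeric scores, and a rank correlation (which suits ordinal data better than a Pearson correlation) relates sustainability outcomes to participation processes, roughly as the abstract describes.

```python
# Illustrative sketch only: quantifying ordinal participatory ratings and
# correlating outcomes with processes. Not the authors' actual methodology.
from scipy.stats import spearmanr

# Hypothetical ordinal ladder agreed with communities, mapped to numbers.
LADDER = {"none": 0, "poor": 25, "fair": 50, "good": 75, "excellent": 100}

# Hypothetical community scorecards: (sustainability, participation) ratings.
scorecards = [
    ("poor", "none"), ("fair", "poor"), ("good", "fair"),
    ("good", "good"), ("excellent", "good"), ("fair", "fair"),
]

sustainability = [LADDER[s] for s, _ in scorecards]
participation = [LADDER[p] for _, p in scorecards]

# Spearman rank correlation: does sustainability co-vary with participation?
rho, p_value = spearmanr(sustainability, participation)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

In practice the methodology described in the paper works with many more indicators and with sampling designs; the point here is only the ordinal-to-numeric mapping and the choice of a rank-based statistic.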

A one-day event designed to explore impact assessment…

Date: Thursday 2nd July 2009
Venue: London

EGO – Empowering Grassroots Organisations would like to invite you to… a one-day event designed to explore impact assessment with leading speakers from the Third Sector.

On Thursday 2nd July 2009 in London, empACT will take on three key areas; we will:

· explore the outcomes of impact assessment through sector-wide funding and grassroots perspectives

· explore methods of assessment

· host workshops around proven methods and tools.

empACT will bring together speakers with wide-ranging expertise in the field to interrogate the real advantages and disadvantages of some of the methods explored.

Confirmed speakers include…

Sarah Alderson, Head of Projects – TimeBank

Isabel Ros Lopez, Inclusion Manager – United Response

Tanya Murphy, Independent Evaluation Practitioner

Samantha Beinhacker, Producer and Developer – Germination

For tickets: www.amiando.com/empACTconference; for further information, please email maor@egonetwork.org.uk

Claremont Graduate University’s Evaluation Workshops Go On-Line

Date: August 21-22, 2009
Venue: Claremont Graduate University, Southern California

Professional Development Workshop Series

Claremont Graduate University is proud to offer our annual line-up of acclaimed workshops for professionals, academics, and students who seek to hone their research and evaluation skills.  Presenters from across North America will gather in beautiful Claremont, California, to teach participants from academia, research institutes and think tanks, for-profit and not-for-profit organizations, and the public sector.  As always, this year’s series offers several new workshops as well as long-standing favorites.

Each workshop lasts one full day.  Workshops in this series have consistently sold out in previous years, so save your seat in the workshops of your choice by registering online today.

Registration: On-Site Workshops

Registration: Online Workshops

The Claremont Evaluation Debates

Full List of Workshops

Undergraduate Fellowship Program

About the Online Offerings

Daily Schedule

Location, Directions, and Lodging

Workshops Photo Album

For the First Time, Summer 2009

We will be offering one workshop each day via live webcast in a state-of-the-art virtual classroom environment. The virtual classroom offerings will include interaction with the live presenter and participants. Anyone with a high-speed internet connection, computer speakers and a microphone is able to participate. Use of a webcam is encouraged for maximum interaction.

Follow this link to register for the online versions of our workshops!

The live webcast offerings will include:

Social Network Analysis and the Evaluation of Leadership Networks

Bruce Hoppe, Ph.D., Connective Associates LLC
bruce@connectiveassociates.com
Claire Reinelt, Ph.D., Leadership Learning Community
claire@leadershiplearning.org
January 19, 2009

Abstract
Leadership development practitioners have become increasingly interested in networks as a way to strengthen relationships among leaders in fields, communities, and organizations. This paper offers a framework for conceptualizing different types of leadership networks and uses case examples to identify outcomes typically associated with each type of network. One challenge for the field of leadership development has been how to evaluate leadership networks. Social Network Analysis (SNA) is a promising evaluation approach that uses mathematics and visualization to represent the structure of relationships between people, organizations, goals, interests, and other entities within a larger system. Core social network concepts are introduced and explained to illuminate the value of SNA as an evaluation and capacity-building tool.

Full text here
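For readers unfamiliar with the core concepts the abstract mentions, the sketch below (our illustration, not the paper’s code) computes three of them, density, degree centrality, and betweenness centrality, on a small hypothetical leadership network using the networkx library.

```python
# Illustrative sketch only: core SNA measures on a hypothetical network of
# leaders (edges might represent who seeks advice from whom in a program).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cho"), ("Ben", "Cho"),
    ("Cho", "Dev"), ("Dev", "Eve"), ("Eve", "Fay"),
])

# Density: how connected the network is overall (0 = no ties, 1 = all ties).
print("density:", round(nx.density(G), 2))

# Degree centrality: who has the most direct relationships.
print("degree:", nx.degree_centrality(G))

# Betweenness centrality: who bridges otherwise separate parts of the
# network; in leadership networks, such brokers often drive information flow.
print("betweenness:", nx.betweenness_centrality(G))
```

Measures like these give evaluators a structural baseline: re-running the same analysis after a leadership program can show whether new ties have formed and whether brokerage has become less concentrated.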

Vancouver Outcome Mapping Training Workshop – September 21-24

Date: September 21-24
Venue: Vancouver

Outcome Mapping is a practical, flexible and participatory approach to planning, monitoring and evaluation. First introduced by the International Development Research Centre (IDRC) in 2000, Outcome Mapping (OM) has been used in projects, programs and organizations around the world. A growing number of donor agencies, NGOs and monitoring and evaluation professionals are adopting OM because it helps them address complexity issues that other, more traditional methods do not consider; a simplified sketch of the kind of information OM tracks follows below.
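As a rough illustration, OM monitors graduated “progress markers” for each “boundary partner” (the actors a program seeks to influence). The sketch below is our own simplification for illustration, not an official OM tool or schema; the partner, challenge, and markers are hypothetical.

```python
# Illustrative sketch only: a simplified data structure for Outcome Mapping
# style monitoring. Names and fields are our simplification, not OM canon.
from dataclasses import dataclass, field

@dataclass
class ProgressMarker:
    description: str
    level: str          # "expect to see" | "like to see" | "love to see"
    observed: bool = False

@dataclass
class BoundaryPartner:
    name: str
    outcome_challenge: str
    markers: list[ProgressMarker] = field(default_factory=list)

    def progress(self) -> float:
        """Share of progress markers observed so far."""
        if not self.markers:
            return 0.0
        return sum(m.observed for m in self.markers) / len(self.markers)

# Hypothetical example: a district water committee a program works with.
committee = BoundaryPartner(
    name="District water committee",
    outcome_challenge="Committee plans and monitors its own water points",
    markers=[
        ProgressMarker("Attends planning meetings", "expect to see", True),
        ProgressMarker("Collects user fees regularly", "like to see", True),
        ProgressMarker("Funds repairs without external help", "love to see"),
    ],
)
print(f"{committee.name}: {committee.progress():.0%} of markers observed")
```

The graduated levels are what let OM capture gradual behavioural change in partners, rather than only end-of-project results.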