Next Generation Network Evaluation

Paper published June 2010. Produced by Innovations for Scaling Impact and Keystone Accountability. Funded by the International Development Research Centre and the Packard Foundation. (Download pdf version here)

“Purpose: This paper reviews the current field of network monitoring and evaluation with the goal of identifying where progress has been made and where further work is still needed. It proposes a framework for network impacts planning, assessment, reporting and learning that can help to close some of the current gaps in network evaluation while building on the advances that have been made. This document is written for practitioners undertaking network evaluation and foundation program staff working to support networks.”

Beyond Logframe: Using Systems Concepts in Evaluation

March 2010. Nobuko Fujita (Ed.), Foundation for Advanced Studies on International Development (FASID). Available as pdf

“Editor’s Note: The 2010 Issues and Prospects of Evaluations for International Development employs systems concepts as clues to re-assess the conventional ways of conducting evaluations and to explore how development evaluation can potentially be made more useful.

In Japan, development evaluation predominantly relies on the Logical Framework (logframe) when conducting evaluations. Evaluations based on a logframe often face difficulties. One such difficulty arises from the futile attempt to develop an evaluation framework based on a logframe, which, in many cases, was prepared as part of the early-stage planning of the project and which then does not necessarily reflect a project’s real situation at the time of evaluation. Although a logframe can be utilised initially as a tentative project plan, logframes are rarely revised even when the situation has changed. By the end of the project, the original logframe may not be an accurate embodiment of what the project is about and therefore logframes do not particularly help in terminal or ex-post evaluations.

Still, having been institutionalized by clients, logframe-based evaluations are common practice and, in extreme cases, evaluators face the danger of evaluating the logframe instead of the actual project. Although widely used for their simplicity, logframes can end up becoming a cumbersome tool, or even a hindrance to evaluation.

Various attempts have been made to overcome the limitations of the logframe, and some aid organizations such as USAID, UNDP, CIDA and the World Bank have shifted from the logframe to Results-Based Management (RBM). GTZ is now in the process of shifting to a new project management approach based on RBM and systems ideas.

In the first article, “Beyond logframe: Critique, Variations and Alternatives,” Richard Hummelbrunner, an evaluator/consultant from Austria, sums up the critique of the logframe and the Logical Framework Approach (LFA), and explores some variations employed to overcome specific shortcomings of LFA. He then outlines a systemic alternative to the logframe and introduces the new GTZ management model for sustainable development called “Capacity WORKS.” Richard has dealt with LFA and possible alternatives to it at various points in his career, and he is currently involved in GTZ’s rollout of Capacity WORKS as it becomes the standard management model for all BMZ projects and programmes.

What does he mean by “systemic alternative”? In the second article, “Systems Thinking and Capacity Development in the International Arena,” Bob Williams, a consultant and an expert in systems concepts, explains what “thinking systemically” is about and how it might help evaluation. He boils down systems ideas into three core concepts (inter-relationships, perspectives, and boundaries), and relates these concepts to various systems methods.

In December 2009, FASID offered a training course and a seminar on this topic in Tokyo. Through the exchange of numerous e-mails with the instructors prior to the seminar, it occurred to me that the concepts might be more easily understood if presented as a conversation. That is what we tried to do in the third article, “Using Systems Concepts in Evaluation – A Dialogue with Patricia Rogers and Bob Williams.” These two instructors of the FASID training course and workshop explain in simple conversational style where and how we can start applying systems concepts in development evaluation.

This issue also carries a report of two collaborative evaluations of Japanese Official Development Assistance (ODA) projects. The first case presents an innovative joint evaluation conducted collaboratively with Vietnamese stakeholders. The evaluation took place in 2009–2010, the last year of a three-year evaluation capacity development project coordinated by the Japan International Cooperation Agency. The second case covers a joint evaluation study of another Japanese ODA project in Lao PDR with a local Lao administration, for which neither the logframe nor the OECD DAC five criteria were used. Instead, an evaluation framework was developed from scratch, based entirely on the beneficiaries’ interests and perspectives. In both cases, a partner country’s participation in the evaluation necessitated considerable changes in the perspectives of evaluation practice. I hope they provide examples of how boundaries and perspectives, as discussed theoretically in the first three articles, relate to development evaluation in practice.”

The Global Evaluation Conclave: Making Evaluation Matter

Date: 25-28 October 2010
Venue: The Lalit Hotel, New Delhi, India

“Making Evaluation Matter” is the theme of the conclave. The theme embodies an idea of evaluation that starts with relevance and context, with a strong understanding of who evaluation should serve.

The event will attract global thinkers engaged in cutting-edge evaluation research, theorizing or practice who seek opportunities to push their thinking in new directions and are interested in applying ideas in a South Asian context. It will also include leading development theorists, activists and policy makers from South Asia, to embed discussions in the current development issues and contexts that evaluation must respond to.

The event will be a space to engage with and test knowledge. The conclave will include around 200 leaders from the global development and evaluation community.

50+ speakers, 24+ hours of knowledge building and networking, 4 power-packed days, 1 great venue, countless new evaluation opportunities.

Programme schedule | Read workshop description
Click here to download registration form

“Full transparency and new independent watchdog will give UK taxpayers value for money in aid”

Copied from the DFID website, 3rd June 2010:

[Please post your comments below and/or on the Guardian Katine website]

“British taxpayers will see exactly how and where overseas aid money is being spent and a new independent watchdog will help ensure this aid is good value for money, International Development Secretary Andrew Mitchell has announced.

In his first major speech as Development Secretary, Mr Mitchell said he had taken the key steps towards creating an independent aid watchdog to ensure value for money. He also announced a new UKaid Transparency Guarantee to ensure that full information on all DFID’s spending is published on the departmental website.

The information will also be made available to the people who benefit from aid funding: communities and families living in the world’s poorest countries.

These moves come as part of a wider drive to refocus DFID’s work so British taxpayers’ money is spent transparently and on key priority issues such as maternal mortality and disease prevention.”

In Mr Mitchell’s speech, delivered at the Royal Society with Oxfam and Policy Exchange, he argued that overseas aid is both morally right and in Britain’s national interest but that taxpayers need to see more evidence their money is being spent well.

“Impact 2.0: Collaborative technologies connecting research and policy”

“Impact 2.0: Collaborative technologies connecting research and policy” is a two-year research project that seeks to develop a body of knowledge about the use of Web 2.0 in policy-oriented research and design in Latin America and to identify, document and promote good practices and emerging opportunities related to the use of collaborative technologies for linking research to policy.

In order to achieve this goal, Impact 2.0 has two components. The first involves three pilot projects that seek to combine current theory on the relationship between research, policy and advocacy with advances in Web 2.0/social networking technologies and practices. The second is a fund to support research into the use of Web 2.0 tools and behaviours to link research and policy.

The research fund will consider two types of proposals: Type 1 projects will involve implementing a specific intervention that uses Web 2.0 to link research and policy, and analyzing, documenting and evaluating that intervention, while Type 2 projects will document and evaluate one or more current or recent projects making use of Web 2.0 to link policy and research. A maximum of US$15,000 is available for Type 1 projects and US$7,500 for Type 2.

Independent researchers and organisations (universities, government agencies, NGOs, research centres) are invited to apply. Applicants must reside in Latin America and the projects to be performed and analyzed must also be located in and relevant to the region.

Full details are available in the attached CFPs or in Spanish at http://impacto2.comunica.org and in English at http://impacto2.comunica.org/?page_id=23

Impacto 2.0 is a project of Fundación Comunica, with the financial support of the International Development Research Centre (IDRC) and the participation of APC, DIRSI, PRODIC of the University of the Republic of Uruguay and CIESPAL.


| Bruce Girard | www.comunica.org |
| tel: +598 2 410.2979 | mobile: +598 99 189.652 |
| Dr. Pablo de María 1036 | Montevideo, Uruguay |

How Wide are the Ripples?

Report of the March 2010 workshop, prepared by Louise Clark, Kate Newman and Hannah Beardon

“This report presents reflections from a workshop held in London on 18th and 19th March 2010. The workshop was part of a larger process of reflection and research, supported by IKM Emergent and called ‘How Wide Are the Ripples?’. The process explored how international development NGOs use and manage the information, knowledge and perspectives generated through the participatory processes they initiate or fund. The initial research and report built on a literature review and case studies from five international NGOs (ActionAid, Concern, Healthlink, Panos and Plan), identifying challenges and opportunities for good bottom-up information and learning flows. The workshop invited participants from the original research and others working on and around these issues to reflect further on the challenges and discuss practical solutions based on their own experiences.

Participants came from a mix of large international development NGOs and smaller organisations, and included independent consultants. The variety of organisations was evident not only in their size but also in their different structures and relationships with grassroots processes and organisations, a recurring theme throughout the discussions. The expectation, and commitment from the participants, was that these discussions and experiences would feed into a guest-edited edition of the IIED journal Participatory Learning and Action (PLA) in June 2011. The workshop was therefore organised around two main goals: improving the practice of international development NGOs in relation to information generated through participatory processes, through workshop discussions and by developing a network for support and sharing ideas; and promoting further reflection and learning around specific issues, in particular through developing articles for the edition of PLA.”

Research Integration Using Dialogue Methods

David McDonald, Gabriele Bammer and Peter Deane, 2009. Download pdf

Ed: Although about “research integration”, the book is also very relevant to the planning and evaluation of development projects.

“Research on real-world problems—like restoration of wetlands, the needs of the elderly, effective disaster response and the future of the airline industry—requires expert knowledge from a range of disciplines, as well as from stakeholders affected by the problem and those in a position to do something about it. This book charts new territory in taking a systematic approach to research integration using dialogue methods to bring together multiple perspectives. It links specific dialogue methods to particular research integration tasks.

Fourteen dialogue methods for research integration are classified into two groups:

1. Dialogue methods for understanding a problem broadly: integrating judgements

2. Dialogue methods for understanding particular aspects of a problem: integrating visions, world views, interests and values.

The methods are illustrated by case studies from four research areas: the environment, public health, security and technological innovation.”

Stories vs. Statistics: The Impact of Anecdotal Data on Accounting Decision Making

James Wainberg, Thomas Kida and James F. Smith
March 12, 2010. Download pdf copy

Abstract:
Prior research in psychology and communications suggests that decision makers are biased by anecdotal data, even in the presence of more informative statistical data. A bias for anecdotal data can have significant implications for accounting decision making since judgments are often made when both statistical and anecdotal data are present. We conduct experiments in two different accounting contexts (i.e., managerial accounting and auditing) to investigate whether accounting decision makers are unduly influenced by anecdotal data in the presence of superior, and contradictory, statistical data. Our results suggest that accounting decision makers ignored or underweighted statistical data in favor of anecdotal data, leading to suboptimal decisions. In addition, we investigate whether two decision aids, judgment orientation and counterargument, help to mitigate the effects of this anecdotal bias. The results indicate that both decision aids can reduce the influence of anecdotal data in accounting decision contexts. The implications of these results for decision making in accounting and auditing are discussed.

Using impact evaluation to improve development

Date: 04 May 2010 17:30-19:00 GMT+1 (BST)
Venue: Overseas Development Institute, London (directions)

Speakers:
Ariel Fiszbein – Chief Economist, Human Development Network, World Bank
Professor Costas Meghir – Co-Director, ESRC Research Centre, Institute for Fiscal Studies
Chair:
Alison Evans – Director, ODI

How do you improve development effectiveness through the better use of evidence? Impact evaluation can build a solid evidence base on what works in development and, in turn, improve development policy. Ariel Fiszbein, Chief Economist in the World Bank’s Human Development Network, will discuss how the Bank uses impact evaluation to inform policies on health, education and social protection. His presentation will be followed by a comment from Professor Costas Meghir, Co-Director of the ESRC Research Centre at the Institute for Fiscal Studies.

Register a place | Register to watch online

NZAID 2008 Evaluations and Reviews: Annual Report on Quality, 2009

Prepared by Miranda Cahn, Evaluation Advisor, Strategy, Advisory and Evaluation Group, NZAID, Wellington, August 2009. Available online

Executive Summary

Introduction

The New Zealand Agency for International Development (NZAID) is committed to improving evaluative activity, including evaluations and reviews. Since 2005 NZAID has undertaken annual desk studies of the evaluations and reviews completed by NZAID during the previous calendar year. This 2009 study assesses the quality of 29 NZAID-commissioned evaluations and reviews that were submitted to the NZAID Evaluation and Review Committee (ERC) during 2008, and their associated Terms of Reference (TOR). The study identifies areas where quality is of a high standard, and areas where improvement is needed. Recommendations are made on how improvements to NZAID-commissioned evaluations and reviews could be facilitated.

The objectives of the study are to:

• assess the quality of the TOR with reference to the NZAID Guidelines on Developing TOR for Reviews and Evaluations

• assess the quality of the NZAID 2008 evaluation and review reports with reference to the NZAID Evaluation Policy, relevant NZAID Guidelines and the Development Assistance Committee of the Organisation for Economic Co-operation and Development (DAC) Evaluation Quality Standards

• identify, describe and discuss key quality aspects of the TOR and evaluation and review reports that were of a high standard and those that should be improved in future.