Pan Africa-Asia Results-Based M&E Forum, Bangkok, Nov 2012 – Presentations now available

The 2012 Pan Africa-Asia Results-Based M&E Forum

Bangkok, 26–28 November 2012

Sixteen presentations over three days are listed and available online here.

Monday 26 November, 2012

Dr John Mayne, Independent Public Sector Performance Adviser. "Making Causal Claims" (9.15 – 10.15 am)

Jennifer Mullowney, Senior RBM&E Specialist, CIDA. "How to Apply Results-Based Management in Fragile States and Situations: Challenges, Constraints, and Way Forward." (10.15 – 10.45 am)

Shakeel Mahmood, Coordinator Strategic Planning & M&E, ICDDR. "Strategies for Capacity Building for Health Research in Bangladesh: Role of Core Funding and a Common Monitoring and Evaluation Framework." (11.30 – 12 noon)

Troy Stremler, CEO, Newdea Inc. "Social Sector Trends" (1.40 – 2.40 pm)

Dr Carroll Patterson, Co-founder, SoCha. "From M&E to Social Change: Implementation Imperatives." (2.40 – 3.10 pm)

Susan Davis (Executive Director, Improve International) and Marla Smith-Nilson (Executive Director, Water 1st International), "A Novel Way to Promote Accountability in WASH: Results from First Water & Sanitation Accountability Forum & Plans for the Future." (3.55 – 4.25 pm)

 

Tuesday 27 November, 2012

Sanjay Saxena, Director, TSCPL, M&E/MIS System Consultant. "Challenges in Implementing M&E Systems for Reform Programs." (9.15 – 10.15 am)

Chung Lai, Senior M&E Officer, International Relief and Development. "Using Data Flow Diagrams in Data Management Processes (demonstration)" (10.15 – 10.45 am)

Korapin Tohtubtiang, International Livestock Research Institute, Thailand. "Lessons Learned from Outcome Mapping in an IDRC Eco-Health Project." (11.30 – 12 noon)

Dr Paul Duignan of DoView, International Outcomes and Evaluation Specialist. "Anyone Else Think the Way We Do Our M&E Work is Too Cumbersome and Painful? Using DoView Visual Strategic Planning & Success Tracking M&E Software – Simplifying, Streamlining and Speeding up Planning, Monitoring and Evaluation" (1.40 – 2.40 pm)

Ahmed Ali, M&E Specialist, FATA Secretariat, Multi-Donor Trust Fund & the World Bank. "The Sub-national M&E Systems of the Government of Khyber Pakhtunkhwa and FATA – the Case Study of M&E Multi-donor Trust Fund Projects." (2.40 – 3.10 pm)

Global Health Access Program (GHAP) Backpack Health Worker Teams, Thailand. "Cross-border M&E of Health Programs Targeting Internally Displaced Persons (IDPs) in Conflict-affected Regions of Eastern Burma." (3.55 – 4.25 pm)

 

Wednesday 28 November, 2012

Dr V. Rengarajan (Independent M&E & Micro-financing Consultant, Author, & Researcher). "What is Needed is an Integrated Approach to M&E." (9.15 – 10.15 am)

Dr Lesley Williams, Independent M&E & Capacity-building Consultant, Learn MandE. "Value for Money (VfM): An Introduction." (10.15 – 10.45 am)

Eugenia Boutylkova (Program Officer, PSO, Holland) and Jan Van Ongevalle (Research Manager, HIVA/KULeuven, Belgium). "Thematic Learning Program (TLP): Dealing with Complexity through Planning, Monitoring & Evaluation (PME)." (11.30 – 12 noon)

Catharina Maria. "Does the Absence of Conflict Indicate a Successful Peace-building Project?" (1.40 – 2.40 pm)

 

Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results

DAC Guidelines and Reference Series

Publication Date: 8 Nov 2012
Pages: 88
ISBN: 9789264106802 (PDF); 9789264106796 (print)
DOI: 10.1787/9789264106802-en

Abstract

Recognising a need for better, tailored approaches to learning and accountability in conflict settings, the Development Assistance Committee (DAC) launched an initiative to develop guidance on evaluating conflict prevention and peacebuilding activities. The objective of this process has been to help improve evaluation practice and thereby support the broader community of experts and implementing organisations to enhance the quality of conflict prevention and peacebuilding interventions. It also seeks to guide policy makers, field and desk officers, and country partners towards a better understanding of the role and utility of evaluations. The guidance presented in this book provides background on key policy issues affecting donor engagement in settings of conflict and fragility and introduces some of the challenges to evaluation particular to these settings. It then provides step-by-step guidance on planning, carrying out and learning from evaluations, as well as some basic principles on programme design and management.

Table of Contents

Foreword
Acknowledgements

Executive summary

Glossary

Introduction: Why guidance on evaluating donor engagement in situations of conflict and fragility?

Chapter 1. Conceptual background and the need for improved approaches in situations of conflict and fragility

Chapter 2. Addressing challenges of evaluation in situations of conflict and fragility

Chapter 3. Preparing an evaluation in situations of conflict and fragility

Chapter 4. Conducting an evaluation in situations of conflict and fragility

Annex A. Conflict analysis and its use in evaluation

Annex B. Understanding and evaluating theories of change

Annex C. Sample terms of reference for a conflict evaluation

Bibliography

 

Measuring Results: A GSDRC Topic Guide

Available as linked pages on the Governance and Social Development Resource Centre (GSDRC) website as of August 2011.

The guide is designed to provide a quick and easy way for development professionals to keep in touch with key debates and critical issues in the field of monitoring and evaluation. It will be updated on a quarterly basis.

About this guide
“How can the impact of governance and social development programmes be assessed with a view to improving their efficiency and effectiveness? What particular challenges are involved in monitoring and evaluating development interventions, and how can these be addressed? How can the ‘value for money’ of a particular intervention be determined?

Monitoring and evaluation (M&E) is vital to ensuring that lessons are learned in terms of what works, what does not, and why. M&E serves two main functions: 1) it builds accountability by demonstrating good use of public funds; and 2) it supports learning by contributing to knowledge about how and why programmes lead to intended (or unintended) outcomes. There can sometimes be a tension between these functions.

This guide introduces some of the core debates and considerations for development practitioners involved in designing and managing M&E activities. It introduces key tools and approaches, provides case studies of applying different methodological approaches, and presents lessons learned from international experience of M&E in a range of developing country contexts. While the guide focuses on M&E for governance and social development programmes, it has relevance for all programmes.

The guide was originally prepared by Claire Mcloughlin, and a comprehensive update was undertaken by Oliver Walton in July 2011. The GSDRC appreciates the contributions of Claire Vallings and Lina Payne (DFID) and Hugh Waddington and colleagues at 3ie. Comments, questions or documents for consideration should be sent to enquiries@gsdrc.org.”

A results take-over of aid effectiveness? How to balance multiple or competing calls for more accountability

Date: 25 July 2011, 12:00–13:30 (GMT+01, BST)
Venue: British Academy, London

This debate will explore possible tensions – and opportunities – when donors seek to reassure domestic publics that aid is being spent well, while also endeavouring to support the needs and priorities of aid recipient countries and their citizens.

The language of results is not new – it is integral to the aid effectiveness agenda. But against the backdrop of growing financial constraints, it is receiving renewed emphasis in many donor countries. How can domestic accountability to both of these constituencies – donor publics and aid recipient countries – be supported more effectively? Are there tensions between these different stakeholders and forms of accountability, and how can they be addressed?

Speakers:
Sarah Cliffe – Special Representative and Director, World Development Report 2011: Conflict, Security, and Development
Sue Unsworth – The Policy Practice, and ODI Board Member
Alan Hudson – Senior Policy Manager, Governance (Transparency & Accountability), ONE
John Morlu – former Auditor General, Liberia
Chair: Alison Evans – Director, ODI

An ODI and BBC World Service Trust public event in the ‘Busan and beyond: aid effectiveness in a new era’ series.

Click for more details | Register to attend this event

Released: Australian Government’s response to the Independent Review of Aid Effectiveness

The ‘Independent Review of Aid Effectiveness’ and the Government’s response were released on 6 July 2011 by Foreign Minister Kevin Rudd, in an official launch at Parliament House, followed by a Ministerial Statement to Parliament. For an overview, see this page on the AusAID website.

Independent Review of Aid Effectiveness:

Commissioned in November 2010, this was the first independent review of the aid program in 15 years. It made 39 recommendations to improve the program.

Australian Government response:

The Government has agreed (or agreed in principle) to 38 of the recommendations, including that the agency develop a three-tiered results framework for reporting on agency-wide performance.

See also

RD Comment: The following section on Independent Evaluation is of particular interest [underlining added]:

ii) Independent Evaluations

“AusAID’s Independent Completion Reports and Independent Progress Reports are another key part of its Performance Management and Evaluation Policy.

Under current guidelines, a report must be completed for an activity every four years, either during its implementation (a progress report) or at completion (a completion report). Reports are required for projects above $3 million and are meant to be made public. They are independent in that they are done by individuals not involved in the project. Typically, but not always, they are written by non–AusAID staff.

By international standards, this policy is thorough. For example, at the World Bank, independent completion reports are done only for a sample of projects.

But a study of AusAID evaluation reports commissioned by the Review Panel found that implementation of AusAID’s evaluation policy is patchy:
• Of 547 projects that should have had a completion or progress report in 2006–10, only 170 were recorded as having been done.
• Of the 170, only 118 could be found.
• About 26 per cent of the completion and progress reports were assessed to be of too low quality to publish.
• Only about 20 have been published on the AusAID website.

Clearly, the policy is not being fully followed. Other problems were also evident. None of the 118 completion or progress reports reviewed provided an unsatisfactory rating. This raises questions of credibility. In comparison, 20 per cent of World Bank projects are rated unsatisfactory by its independent evaluation group.

There is also a structural issue with the policy: AusAID program managers must approve the publication of an independent report. This risks conflicts of interest and long delays in publication. The low rate of publication suggests these problems may be occurring.

Independent completion reports, when done and published, can be very useful. For example, the completion report on the first phase of the Indonesia Basic Education Project is in the public domain and helped to inform recent public debate about the second phase of the project (AusAID 2010b). In contrast, several useful completion reports have recently been done for the PNG program, but only one has been released.

Given the problems described above, it is not surprising that the Review Panel has seen little evidence that these reports inform and improve aid delivery.”

Measuring Results for Dutch Development Aid: Approaches and Future Directions

Date: October 4-7, 2010
Venue: Royal Tropical Institute, Amsterdam

The International Institute of Social Studies and the Amsterdam Institute for International Development invite applications/submissions for a training and conference event on ‘Measuring Results for Dutch Development Aid: Approaches and Future Directions’, with financial support from the Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation.

Participation is free of charge, but places are limited.
Deadline for applications: September 10, 2010
Click here to apply

Objectives: Share results from and experiences with impact evaluation in developing countries, and discuss their relevance for Dutch development cooperation.

Target Audiences: Researchers, NGOs, consulting companies and policy makers in the Netherlands conducting or using impact evaluation to study the effectiveness of development assistance.

Confirmed speakers: Dr. Howard White, Director of the International Initiative for Impact Evaluation (3ie).
Dr. Paul Gertler, Professor of Economics, University of California, Berkeley.
Dr. Sulley Gariba, Executive Director, Institute for Policy Alternatives, Ghana.
Prof. Ruerd Ruben, director of the Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation (starting Sept 1).

Submit a paper (optional): Contributed papers are sought in the areas of (1) completed impact evaluations, (2) reviews of impact evaluations in a particular sector, and (3) position papers on approaches to impact evaluation in relation to decision making.

Selection criteria: Quality of submission and/or professional link with result assessment for development assistance and/or participation in the impact evaluation training.

Maximum number of participants: 100

PROGRAM »
