The Evaluation of the Paris Declaration: Phase II Report

“After the landmark international Paris Declaration on Aid Effectiveness, endorsed in 2005, what have been the improvements in the quality of global aid and its effects on development? And how can a political declaration, implemented across widely varying national contexts, be robustly evaluated?

An independent global evaluation – a fully joint study involving countries and donor agencies – has assessed these efforts to improve the effectiveness of international aid, especially since 2005. The evaluation is the largest aid evaluation ever conducted. It has been a major international effort in itself, comprising more than 50 studies in 22 partner countries and across 18 donor agencies, as well as several studies on special themes, over a period of four years. It has broken new ground in the study of complex evaluation objects.

The study had the unusual object of a political declaration, implemented across very diverse national environments and over varied timescales. Its methodology rejected a traditional linear approach to cause and effect, emphasising instead the importance of context. It opted to draw out the programme theory of the Declaration itself (and that of the Accra Agenda for Action) and their statements of intent. Recognising the limits of aid in development, it applied contribution analysis to assess whether and how the commitments, actors and incentives brought together by the Declaration and the Accra Agenda have delivered on their intentions. The methodology traces the logic of how the Declaration is supposed to work and illustrates the complex pathways from development objectives to results. Recognising that development is a journey, it focuses on assessing the direction of travel on each key point, and the pace and distance travelled so far.

The study concludes that the global campaign to make international aid programmes more effective is showing results, giving the best hope in half a century that aid can be better used to help developing countries raise their economic and living standards. Improvements are slow and uneven in most developing countries, however, and even more so among donor countries and aid agencies. The Evaluation report, all the component studies and the Technical Annex – which describes the methodology and process – can be found at www.busanhlf4.org and www.oecd.org/dac/evaluationnetwork/pde. The second phase of the study was managed by UK-based development consultancy IOD PARC.”

For more information email julia@iodparc.com. IOD PARC, 16-26 Forth Street, Edinburgh EH1 3LH.

RD comment: Of additional interest: “Given the importance of the Evaluation of the Paris Declaration, the Management Group commissioned an independent assessment – a meta-evaluation – of the evaluation process and outcome to determine whether the evaluation meets generally accepted standards of quality and to identify strengths, weaknesses, and lessons.” The report, “Evaluation of the Phase 2 Evaluation of the Paris Declaration”, by Michael Quinn Patton and Jean Gornick, can be downloaded here.

 

Measuring Results: A GSDRC Topic Guide

Available as linked pages on the Governance and Social Development Resource Centre (GSDRC) website, as of August 2011.

The guide is designed to provide a quick and easy way for development professionals to keep in touch with key debates and critical issues in the field of monitoring and evaluation. It will be updated on a quarterly basis.

About this guide
“How can the impact of governance and social development programmes be assessed with a view to improving their efficiency and effectiveness? What particular challenges are involved in monitoring and evaluating development interventions, and how can these be addressed? How can the ‘value for money’ of a particular intervention be determined?

Monitoring and evaluation (M&E) is vital to ensuring that lessons are learned in terms of what works, what does not, and why. M&E serves two main functions: 1) it builds accountability by demonstrating good use of public funds; and 2) it supports learning by contributing to knowledge about how and why programmes lead to intended (or unintended) outcomes. There can sometimes be a tension between these functions.

This guide introduces some of the core debates and considerations for development practitioners involved in designing and managing M&E activities. It introduces key tools and approaches, provides case studies of applying different methodological approaches, and presents lessons learned from international experience of M&E in a range of developing country contexts. While the guide focuses on M&E for governance and social development programmes, it has relevance for all programmes.

The guide was originally prepared by Claire Mcloughlin, and a comprehensive update was undertaken by Oliver Walton in July 2011. The GSDRC appreciates the contributions of Claire Vallings and Lina Payne (DFID) and Hugh Waddington and colleagues at 3ie. Comments, questions or documents for consideration should be sent to enquiries@gsdrc.org.”

RCTs for empowerment and accountability programmes

A GSDRC Helpdesk Research Report, Date: 01.04.2011, 14 pages, available as pdf.

Query: To what extent have randomised control trials been used to successfully measure the results of empowerment and accountability processes or programmes?
Enquirer: DFID
Helpdesk response
Key findings: This report examines the extent to which RCTs have been used successfully to measure empowerment and accountability processes and programmes. Field experiments present immense opportunities, but the report cautions that they are more suited to measuring short-term results with short causal chains and less suitable for complex interventions. The studies have also demonstrated divergent results, possibly due to different programme designs. The literature highlights that issues of scale, context, complexity, timeframe, coordination and bias in the selection of programmes also determine the degree of success reported. It argues that researchers using RCTs should make more effort to understand contextual issues, consider how experiments can be scaled up to measure higher-order processes, and focus more on learning. The report suggests strategies such as using qualitative methods, replicating studies in different contexts and using randomised methods with field activities to overcome the limitations in the literature.
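As a minimal, hedged sketch of what an RCT measures in the “short causal chain” case described above (illustrative only, not drawn from the report; the outcome variable and effect size are assumed), random assignment allows the difference in mean outcomes between the two arms to be read as the programme’s average effect:

```python
# Illustrative sketch only (not from the report): a two-arm randomised trial
# with a difference-in-means estimate of the average treatment effect.
import random

random.seed(42)

participants = list(range(200))
random.shuffle(participants)                  # random assignment removes selection bias
treatment, control = participants[:100], participants[100:]

def observed_outcome(treated):
    # Hypothetical short-causal-chain outcome (e.g. attendance at a public meeting),
    # with an assumed true effect of +1.5 for treated participants.
    return random.gauss(10, 2) + (1.5 if treated else 0.0)

treat_outcomes = [observed_outcome(True) for _ in treatment]
control_outcomes = [observed_outcome(False) for _ in control]

ate = (sum(treat_outcomes) / len(treat_outcomes)
       - sum(control_outcomes) / len(control_outcomes))
print(f"Estimated average treatment effect: {ate:.2f}")
```

The report’s caution is that longer causal chains – empowerment leading to accountability leading to changed services – are much harder to capture with a single comparison of this kind.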
Contents
1. Overview
2. General Literature (annotated bibliography)
3. Accountability Studies (annotated bibliography)
4. Empowerment Studies (annotated bibliography)

 

Cultural cognition and the problem of science communication

[also titled “Cultural dissensus over scientific consensus”]

These are the titles of a very interesting 50-minute presentation by Dan Kahan of the Yale Law School, available here on YouTube. It is part of a wider body of work by The Cultural Cognition Project, also at the Yale Law School.

It is about how what might be described as core cultural values affect people’s attitudes towards evidence, both new evidence and perceptions of where the consensus lies in regard to existing evidence, in relation to a number of fields of scientific inquiry that have been the subject of public debate. It is very relevant to evaluation because it could be argued that at the heart of much evaluation work is a “rationalist” theory of change: that if people are presented with evidence about what works, where, when and how, then they will adjust their policies and practices in the light of those findings. The findings presented by Dan Kahan suggest otherwise, quite dramatically. Fortunately, he also touches on some ways forward, about how to deal with the problems his work has raised.

“It’s a deliberative climate that needs environmental protection, just as much as the physical environment, and providing it as a kind of public good… So this is a science of science communication: to create conditions in which the likelihood of people converging on the scientific truth has no connection to these kinds of values. How to do that is kind of complicated, but I do want to start by appealing to you that that is the kind of goal we should have.”

There is also an associated paper, available as pdf: “Cultural Cognition of Scientific Consensus” by Dan M. Kahan, Hank Jenkins-Smith and Donald Braman, Journal of Risk Research, Vol. 14, pp. 147-74, 2011; Yale Law School, Public Law Working Paper No. 205.

A comic’al perspective on methodology issues

Courtesy of XKCD

As the saying goes, “If you torture the data for long enough, it will tell you what you want to hear”

On the risks of data mining, and the management of those risks, see Walking the talk: the need for a trial registry for development interventions, also on this site.
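To illustrate the point (a hedged sketch, not taken from the comic or the linked post), the short simulation below “tortures” pure noise: run enough subgroup comparisons at the conventional 5% significance level and some will look significant by chance alone.

```python
# Illustrative sketch: testing many hypotheses on pure noise yields
# spurious "significant" results - the data-mining risk noted above.
import math
import random
import statistics

random.seed(1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

n_tests, sample_size = 20, 30      # e.g. 20 arbitrary subgroups or outcome measures
false_positives = 0
for i in range(n_tests):
    group_a = [random.gauss(0, 1) for _ in range(sample_size)]  # no true effect anywhere
    group_b = [random.gauss(0, 1) for _ in range(sample_size)]
    if abs(welch_t(group_a, group_b)) > 2.0:    # roughly the 5% two-sided threshold
        false_positives += 1

print(f"{false_positives} of {n_tests} noise-only comparisons look 'significant'")
```

On average roughly one comparison in twenty will clear the threshold, which is exactly why pre-registration and pre-specified analysis plans – the subject of the trial registry post linked above – matter.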

DPC Policy Discussion Paper: Evaluating Influencing Strategies and Interventions

A paper to the DFID Development Policy Committee. Available as pdf, June 2011.

Introduction
“1. The Strategy Unit brief of April 2008 envisaged that DFID should become more systematic in planning and implementing influencing efforts. Since then, procedures and guidance have been developed, and there is an increasingly explicit use of influencing objectives in project log frames and more projectisation of influencing efforts. Evaluation studies and reports have illustrated the wide variety of DFID influencing efforts and the range of ambition and resources involved in trying to generate positive changes in the aid system or in partner countries. These suggest that being clear and realistic about DFID’s influencing objectives, the stakeholders involved and the specific changes being sought is the fundamental requirement for an effective intervention. It is also the basis for sound monitoring and evaluation.
2. To support this initiative, the Evaluation Department organised a series of workshops in 2009 and 2010 to further develop the measurement and evaluation of influencing interventions, producing a draft How to Note with reference to multilateral organisations in September 2010. However, with the changes to DFID’s corporate landscape in 2010 and early 2011, this work was put on hold pending the conclusion of some key corporate pieces of work.
3. An increase in demand for guidance is also noted, given the changing external environment. DFID is now positioning itself to address the demands of the changing global aid landscape with new initiatives, such as the Global Development Partnerships programme. This has a relatively small spend; however, its success will be measured largely by the depth and reach of its influence.
4. The Evaluation Department is now seeking guidance on how important the Development Policy Committee considers the evaluation of influencing interventions, and the direction in which it would like this developed.
5. This paper sets out why evaluation of influencing interventions is important, why now, key theories of change and an influencing typology, the value for money of an influencing intervention and metrics, and finally, the challenges of measuring influence.”

See also the associated “Proposed Influencing Typology”

The paper also refers to “Appraising, Measuring and Monitoring Influencing: How Can DFID Improve?” by the DFID Strategy Unit April 2008, which does not seem to be available on the web.

RD Comment: I understand that this is considered a draft document and that comments on it would be welcomed. Please feel free to make your comments below.

ISO International Workshop Agreement (IWA) on Evaluation Capacity Development

Date: 17-21 October 2011
Venue: John Knox Centre, Geneva, Switzerland

Dear Colleagues:

A proposal prepared by the Evaluation Capacity Development Group (ECDG) and the Joint Committee on Standards for Educational Evaluation (JCSEE), in partnership with the International Organization for Cooperation in Evaluation (IOCE), to create an International Workshop Agreement (IWA) on evaluation capacity development (ECD) was recently approved by the International Organization for Standardization (ISO).

Everyone agrees that there is an acute need to develop evaluation capacity. However, resolution of the problem has not been possible because there is no agreement on HOW to develop evaluation capacity. Some think that individual evaluators should be better trained through workshops and seminars.  Others think that organizations should be redesigned to enable the achievement of a shared vision for evaluation. And, yet others think that evaluation should be institutionalized in national governments to promote accountability to their citizens.

We are now organizing a workshop that will be held 17-21 October 2011 at the John Knox Centre, Geneva, Switzerland.  The workshop will use a systems approach to develop an IWA that integrates ECD at the individual, organizational and national levels.  I am particularly pleased to inform you that a leading expert in systems-based evaluation, Bob Williams, has consented to facilitate the event.

As per the procedures explained in Annex SI of the Supplement to the ISO/IEC Directives, ANY organization with an interest in evaluation capacity development can register to send a representative to the workshop to participate in the preparation of this important document. Limited support may be available.  To learn more about the workshop and to register please go to http://www.ecdg.net/

Best Regards,

Karen Russon
President
Evaluation Capacity Development Group

Micro-Methods in Evaluating Governance Interventions

This paper is available as a pdf.  It should be cited as follows: Garcia, M. (2011): Micro-Methods in Evaluating Governance Interventions. Evaluation Working Papers. Bonn: Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung.

The aim of this paper is to present a guide to impact evaluation methodologies currently used in the field of governance. It provides an overview of a range of evaluation techniques – focusing specifically on experimental and quasi-experimental designs. It also discusses some of the difficulties associated with the evaluation of governance programmes and makes suggestions with the aid of examples from other sectors. Although it is far from being a review of the literature on all governance interventions where rigorous impact evaluation has been applied, it nevertheless seeks to illustrate the potential for conducting such analyses.
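As a hedged illustration of the kind of quasi-experimental design the paper surveys (not code from the paper; the figures are hypothetical), a difference-in-differences comparison nets out time-invariant differences between units exposed and not exposed to a governance intervention:

```python
# Minimal difference-in-differences sketch (illustrative only; hypothetical figures).
# Mean values of a governance outcome (e.g. a service-delivery score) by group and period.
outcome = {
    "treated":    {"before": 52.0, "after": 61.0},   # districts with the reform
    "comparison": {"before": 50.0, "after": 54.0},   # similar districts without it
}

change_treated = outcome["treated"]["after"] - outcome["treated"]["before"]           # +9.0
change_comparison = outcome["comparison"]["after"] - outcome["comparison"]["before"]  # +4.0

# The estimate nets out the common trend, assuming both groups would have moved
# in parallel in the absence of the intervention (the "parallel trends" assumption).
did_estimate = change_treated - change_comparison
print(f"Difference-in-differences estimate of the intervention's effect: {did_estimate:.1f}")
```

Whether such identifying assumptions are credible for governance programmes is precisely the kind of difficulty the paper discusses.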

This paper has been produced by Melody Garcia, economist at the German Development Institute (Deutsches Institut für Entwicklungspolitik, DIE). It is a part of a two-year research project on methodological issues related to evaluating budget support funded by the BMZ’s evaluation division. The larger aim of the project is to contribute to the academic debate on methods of policy evaluation and to the development of a sound and theoretically grounded approach to evaluation. Further studies are envisaged.

Improving Peacebuilding Evaluation: A Whole-of-Field Approach

by Andrew Blum, June 2011. United States Institute of Peace. Available as pdf. Found courtesy of @poverty_action.

Summary

  • The effective evaluation of peacebuilding programs is essential if the field is to learn what constitutes effective and ineffective practice and to hold organizations accountable for using good practice and avoiding bad practice.
  • In the field of peacebuilding evaluation, good progress has been made on the intellectual front. There are now clear guidelines, frameworks, and tool kits to guide practitioners who wish to initiate an evaluation process within the peacebuilding field.
  • Despite this, progress in improving peacebuilding evaluation itself has slowed over the past several years. The cause of this is a set of interlocking problems in the way the peacebuilding field is organized. These in turn create systemic problems that hinder effective evaluation and the utilization of evaluation results.
  • The Peacebuilding Evaluation Project, organized by USIP and the Alliance for Peacebuilding, brought funders and implementers together to work on solutions to the systemic problems in peacebuilding work. This report discusses these solutions, which are grouped into three categories: building consensus, strengthening norms, and disrupting practice and creating alternatives. Several initiatives in each of these categories are already under way.

About the Report

In May 2010, the Alliance for Peacebuilding in collaboration with the United States Institute of Peace launched the Peacebuilding Evaluation Project. Over the course of a year, the project held a series of four meetings in Washington, DC. The goal of the project was to foster collaboration among funders, implementers, and policymakers to improve evaluation practice in the peacebuilding field. This report is inspired by the deep and far-ranging conversations that took place at the meetings. Its central argument is that whole-of-field approaches designed to address systemic challenges are necessary if the practice of peacebuilding evaluation is to progress.


Connecting communities? A review of World Vision’s use of MSC

A report for World Vision, by Rick Davies and Tracey Delaney, Cambridge and Melbourne, March 2011. Available as pdf

Background to this review

“This review was undertaken by two monitoring and evaluation consultants, both with prior experience in the use of the Most Significant Change (MSC) technique. The review was commissioned by World Vision UK, with funding support from World Vision Canada. The consultants have been asked to “focus on what has and has not worked relating to the implementation and piloting of MSC and why; establish if the MSC tools were helpful to communities that used them; will suggest ideas for consideration on how MSC could be implemented in an integrated way given WV’s structure, systems and sponsorship approach; and what the structural, systems and staffing implications of those suggestions might be”. The review was undertaken in February-March 2011 using a mix of field visits (WV India and Cambodia), online surveys, Skype interviews, and document reviews.

MSC is now being used, in one form or another, in many WV National Offices (NOs). Fifteen countries using MSC were identified through document searches, interviews and an online survey, and other users may exist that did not come to our attention. Three of these countries have participated in a planned and systematic introduction of MSC as part of WV’s Transformational Development Communications (TDC) project, namely Cambodia, India and the Philippines. Almost all of this use has emerged in the last four years, which is a very brief period of time. The ways in which MSC has been used vary widely, and some of this use we would call MSC in name only – most notably where the MSC question is being used but there is no subsequent process of selecting MSC stories. Across almost all the users of MSC that we made contact with, there was a positive view of the value of the MSC process and the stories it can produce. There is clearly a basis here for improving the way MSC is used within WV, and possibly widening the scale of its use. However, it is important to bear in mind that our views are based on a largely self-selected sample of respondents, from 18 of the 45 countries we sought to engage.”

Contents

Glossary
1. Executive Summary
   1.1 Background to this review
   1.2 Overview of how MSC is being used in WV
   1.3 The findings: perceptions and outcomes of using MSC
   1.4 Recommendations emerging from this review
   1.5 Concluding comment about the use of MSC within WV
2. Review purpose and methods
   2.1 World Vision expectations
   2.2 Review approach and methods
   2.3 The limitations of this review
3. A quick summary of the use of MSC by World Vision
4. How MSC has been used in World Vision
   4.1 Objectives: Why MSC was being used
   4.2 Processes: How MSC was being used
       Management
       Training
       Domains of change
       Story collection
       A review of some stories documented in WV reports
       Story selection
       Verification
       Feedback
       Quantification
       Secondary analysis
       Use of MSC stories
       Integration with other WV NO and SO functions
   4.3 Outcomes: Experiences and Impacts
       Evaluations of the use of MSC
       Experiences of MSC stories
       Who benefits
       Impacts on policies and practices
       Summary assessments of the strengths and weaknesses of using MSC
5. How MSC has been introduced and used in TDC countries
   5.1 Objectives: Why MSC was being used
   5.2 Process in TDC: a comparison across countries
       Management and coordination of the MSC process
       Training and support
       Use of domains
       Story collection
       Story selection
       Feedback on MSC stories
       Use of MSC stories
       Roll-out of the TDC pilot – extending the use of MSC to all ADPs
       Integration and/or adoption of MSC into other sections of the NO
   5.3 The outcomes of using MSC in the TDC
       Experiences and reactions to MSC
       Who has benefited and how
   5.4 Conclusions about the TDC pilot

 

 
