Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results

DAC Guidelines and Reference Series

Publication Date: 08 Nov 2012
Pages: 88
ISBN: 9789264106802 (PDF); 9789264106796 (print)
DOI: 10.1787/9789264106802-en

Abstract

Recognising a need for better, tailored approaches to learning and accountability in conflict settings, the Development Assistance Committee (DAC) launched an initiative to develop guidance on evaluating conflict prevention and peacebuilding activities. The objective of this process has been to help improve evaluation practice and thereby support the broader community of experts and implementing organisations to enhance the quality of conflict prevention and peacebuilding interventions. It also seeks to guide policy makers, field and desk officers, and country partners towards a better understanding of the role and utility of evaluations. The guidance presented in this book provides background on key policy issues affecting donor engagement in settings of conflict and fragility and introduces some of the challenges to evaluation particular to these settings. It then provides step-by-step guidance on the core steps in planning, carrying out and learning from evaluation, as well as some basic principles on programme design and management.

Table of Contents

Foreword
Acknowledgements

Executive summary

Glossary

Introduction: Why guidance on evaluating donor engagement in situations of conflict and fragility?

Chapter 1. Conceptual background and the need for improved approaches in situations of conflict and fragility

Chapter 2. Addressing challenges of evaluation in situations of conflict and fragility

Chapter 3. Preparing an evaluation in situations of conflict and fragility

Chapter 4. Conducting an evaluation in situations of conflict and fragility

Annex A. Conflict analysis and its use in evaluation

Annex B. Understanding and evaluating theories of change

Annex C. Sample terms of reference for a conflict evaluation

Bibliography

 

Measuring Impact: Lessons from the MCC for the Broader Impact Evaluation Community

William Savedoff and Christina Droggitis, Center for Global Development, August 2011. Available as pdf (2 pages)

Excerpt:

“One organization that has taken the need for impact evaluation seriously is the Millennium Challenge Corporation. The first of the MCC programs came to a close this fiscal year, and in the next year the impact evaluations associated with them will begin to be published.

Politicians’ responses to the new wave of evaluations will set a precedent, either one that values transparency and encourages aid agencies to be public about what they are learning or one that punishes transparency and encourages agencies to hide findings or simply cease commissioning evaluations.”

Learning how to learn: eight lessons for impact evaluations that make a difference

ODI Background Notes, April 2011. Author: Ben Ramalingam

This Background Note outlines key lessons on impact evaluations, utilisation-focused evaluations and evidence-based policy. While methodological pluralism is seen as the key to effective impact evaluation in development, the emphasis here is not on methods per se. Instead, the focus is on the range of factors and issues that need to be considered for impact evaluations to be used in policy and practice – regardless of the method employed. The Note synthesises research by ODI, ALNAP, 3ie and others to outline eight key lessons for consideration by all of those with an interest in impact evaluation and aid effectiveness. 8 pages

The 8 lessons:
Lesson 1: Understand the key stakeholders
Lesson 2: Adapt the incentives
Lesson 3: Invest in capacities and skills
Lesson 4: Define impact in ways that relate to the specific context
Lesson 5: Develop the right blend of methodologies
Lesson 6: Involve those who matter in the decisions that matter
Lesson 7: Communicate effectively
Lesson 8: Be persistent and flexible

See also Ben’s blog post of Thursday, April 14, 2011: When will we learn how to learn?

[RD comments on this paper]

1. The case for equal respect for different methodologies can be overstated. I feel this is the case when Ben argues that “First, it has been shown that the knowledge that results from any type of particular impact evaluation methodology is no more rigorous or widely applicable than the results from any other kind of methodology.” While it is important that evaluation results affect subsequent policy and practice, their adoption and use is not the only outcome measure for evaluations. We also want those evaluation results to have some reliability and validity, so that they will stand the test of time and be generalisable to other settings with some confidence. An evaluation could affect policy and practice without necessarily being of good quality, defined in terms of reliability and validity.

  • Nevertheless, I like Ben’s caution about focusing too much on evaluations as outputs and the need to focus more on outcomes, the use and uptake of evaluations.

2. The section of Ben’s paper that most attracted my interest was the story about the Joint Evaluation of Emergency Assistance to Rwanda, and how the evaluation team managed to ensure it became “one of the most influential evaluations in the aid sector”. We need more case studies of these kinds of events, and then a systematic review of those case studies.

3. When I read various statements like this: “As well as a supply of credible evidence, effort needs to be made to understand the demand for evidence”, I have an image in my mind of evaluators as humble supplicants at the doorsteps of the high and mighty. Isn’t it about time that evaluators turned around and started demanding that policy makers disclose the evidence base of their existing policies? As I am sure has been said by others before, when you look around there does not seem to be much evidence of evidence-based policy making. Norms and expectations need to be built up, and then there may be more interest in what evaluations have to say. A more assertive and questioning posture is needed.

Learning in Development

Olivier Serrat, Asian Development Bank, 2010

“Learning in Development tells the story of independent evaluation in ADB—from its early years to the expansion of activities under a broader mandate—points up the application of knowledge management to sense-making, and brings to light the contribution that knowledge audits can make to organizational learning. It identifies the 10 challenges that ADB must overcome to develop as a learning organization and specifies practicable next steps to conquer each. The messages of Learning in Development will echo outside ADB and appeal to the development community and people having interest in knowledge and learning.”


The CES Learning and Innovation Prize is open for entries.

Closing date: 17 January 2011

With its new Learning and Innovation Prize, Charities Evaluation Services is celebrating the ways in which charities use monitoring information or evaluation findings to improve their work and influence others.

The Prize is aimed at highlighting the contribution that monitoring and evaluation makes to improving service delivery, not just accountability, and at rewarding organisations that make the best use of the information they have.

Please note: the deadline for entries is 5pm, 17 January 2011.

Prize categories

This inspiring new award is split into four categories:

• small charities (annual turnover under £500,000)
• large charities (annual turnover over £500,000)
• funders
• organisations that support other charities.

Who can enter

Organisations that fit one of the above categories and that have used monitoring information or evaluation findings to improve their work and influence others can enter. For further information and specific criteria, please see the Entry Guidelines below.

We are looking for situations where monitoring or evaluation was done by the organisation itself or where an external evaluator was involved. Winners will be expected to demonstrate evidence that the findings changed something about project or service delivery or the use of their resources, or influenced others to do so.

For more information and to download an entry form visit: http://www.ces-vol.org.uk/prize

Charities Evaluation Services (CES) is the UK’s leading provider of training, consultancy and information on evaluation and quality systems in the third sector. We also publish PQASSO, the most widely used quality system in the sector.

CES is an independent charity. We work with third sector organisations and their funders.

Resource Pack on Systematization of Experiences

ActionAid International, 2009, 104 pages. Available as pdf (3.39 Mb)

See also the associated AAI website on systematization

Systematization is a methodology that allows us to:

• Organise and document what we have learnt through our work
• Better understand the impact of our work and the ways in which change happens
• Develop deeper understanding about our work and the challenges we face to inform new ways of working
• Capture and communicate the complexity and richness of our work

Systematization “helps people involved in different kinds of practice to organize and communicate what they have learned. We are talking about …so called …. lessons learned, about which everybody talks nowadays, but are not so easy to produce.” (AAI systematization resource pack, pg. 1, 2009)

ActionAid reports on “systematization”

From the ActionAid website

“Systematization is the reconstruction of and analytical reflection about an experience. Through systematization, events are interpreted in order to understand them… The systematization allows for the experience to be discussed and compared with other similar experiences, and with existing theories and, thus, contributes to an accumulation of knowledge produced from and for practice” (Systematization Permanent Workshop, in AAI systematization resource pack, pg. 10, 2009).

“In 2009, IASL has produced two excellent resources on systematization. The first is a resource pack, which is one of the few English language resources on this exciting methodology. The pack will inform you about the methodology, and give you a detailed orientation to how to systematize experiences. You will also find links to other systematization resources and examples, and an existing bibliography of systematization materials.”

“The second resource is Advocacy for Change, a systematization of advocacy experiences related to the status of youth (in Guatemala), the right to education (in Brazil) and farming (in the United States). The systematizations allowed the actors involved to consider the evolution of the experiences and to identify lessons and insights for future interventions. The Guatemala systematization product was documented in writing and film, the US experience in writing, and the Brazil experience in film.”

Identifying and documenting “Lessons Learned”: A list of references

Editor’s note:

This is a very provisional list of documents on the subject of Lessons Learned: what they are, and how to identify and document them. If you have other documents that you think should be included in this list, please make a comment below.

Note: This is not a list of references on the wider topic of learning, or on the contents of the Lessons Learned.

2014

• Evaluation Lessons Learned and Emerging Good Practices. ILO Guidance Note No. 3, April 2014. “The purpose of this guidance note is to provide background on definitions and usages of lessons learned applied by the ILO Evaluation Unit. Intended users of this guidance note are evaluation managers and any staff in project design or technically backstopping the evaluation process. There is separate guidance provided for consultants on how to identify, formulate and present these findings in reports”

2012

2011

• The NATO Lessons Learned Handbook. Second Edition, September 2011. “Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.” – Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy

2009

2007

• Lessons Learned from Evaluation: A Platform for Sharing Knowledge. M. J. Spilsbury, C. Perch, S. Norgbey, G. Rauniyar and C. Battaglino. Special Study Paper Number 2, United Nations Environment Programme, January 2007. Lessons presented in evaluation reports are often of highly variable quality and limited utility. They are “often platitudes borne of a felt need to demonstrate engagement in the ‘knowledge society’ or simply to satisfy the specified evaluation requirements”. Even where high quality lessons are developed, they are seldom communicated effectively to their intended audiences. In order to enhance the quality of lessons, improve their utilisation, and aid their dissemination and communication, a Framework of Lessons from evaluation is presented in this paper. The framework consists of the common problems, issues and/or constraints to which evaluation lessons relate, organised using ‘mind-mapping’ software and ‘problem tree’ techniques. Evaluation lessons were systematically classified within the resulting Framework of Lessons. The proposed framework of evaluation lessons is best used within the context of interactive ‘face-to-face’ communication with project/programme managers to ensure that evaluation lessons truly become ‘lessons learned’.

2005

2004

• Criteria for Lessons Learned (LL). A presentation for the 4th Annual CMMI Technology Conference and User Group, by Thomas R. Cowles, Raytheon Space and Airborne Systems, Tuesday, November 16, 2004

2001

• M. Q. Patton (2001). “Evaluation, Knowledge Management, Best Practices, and High Quality Lessons Learned.” American Journal of Evaluation, 22(3). Abstract: Discusses lessons to be learned from evaluation and best practices in evaluation and some ways to bring increased rigor to evaluators’ use of those terms. Suggests that “best” practices is a term to avoid, with “better” or “effective” being more realistic, and calls for more specificity when discussing lessons to be derived. (full text not yet found online)

1997

If you know of other relevant documents and web pages, please tell us by using the Comment facility below.
