Oxfam study of Monitoring, Evaluation and Learning in NGO Advocacy

Findings from Comparative Policy Advocacy MEL Review Project

by Jim Coe and Juliette Majot | February 2013 | Oxfam and ODI

Executive Summary & Full text available as PDF

“For organizations committed to social change, advocacy often figures as a crucial strategic element. How to assess effectiveness in advocacy is, therefore, important. The usefulness of Monitoring, Evaluation and Learning (MEL) in advocacy is subject to much current debate. Advocacy staff, MEL professionals, senior managers, the funding community, and stakeholders of all kinds are searching for ways to improve practices – and thus their odds of success – in complex and contested advocacy environments. This study considers what a selection of leading advocacy organizations are doing in practice. We set out to identify existing practice and emergent trends in advocacy-related MEL, to explore current challenges and innovations. The study presents perceptions of how MEL contributes to advocacy effectiveness, and reviews the resources and structures dedicated to MEL.

This inquiry was initiated, funded and managed by Oxfam America. The Overseas Development Institute (ODI) served an advisory role to the core project team, which included Gabrielle Watson of Oxfam America, and consultants Juliette Majot and Jim Coe. The following organizations participated in the inquiry: ActionAid International | Amnesty International | Bread for the World | CARE USA | Greenpeace International | ONE | Oxfam America | Oxfam Great Britain | Sierra Club”

Learning how to learn: eight lessons for impact evaluations that make a difference

ODI Background Notes, April 2011. Author: Ben Ramalingam

“This Background Note outlines key lessons on impact evaluations, utilisation-focused evaluations and evidence-based policy. While methodological pluralism is seen as the key to effective impact evaluation in development, the emphasis here is not on methods per se. Instead, the focus is on the range of factors and issues that need to be considered for impact evaluations to be used in policy and practice – regardless of the method employed. This Note synthesises research by ODI, ALNAP, 3ie and others to outline eight key lessons for consideration by all of those with an interest in impact evaluation and aid effectiveness”. 8 pages

The 8 lessons:
Lesson 1: Understand the key stakeholders
Lesson 2: Adapt the incentives
Lesson 3: Invest in capacities and skills
Lesson 4: Define impact in ways that relate to the specific context
Lesson 5: Develop the right blend of methodologies
Lesson 6: Involve those who matter in the decisions that matter
Lesson 7: Communicate effectively
Lesson 8: Be persistent and flexible

See also Ben’s blog post of Thursday, April 14, 2011: When will we learn how to learn?

[RD comments on this paper]

1. The case for equal respect for different methodologies can be overstated. I feel this is the case when Ben argues that “First, it has been shown that the knowledge that results from any type of particular impact evaluation methodology is no more rigorous or widely applicable than the results from any other kind of methodology.” While it is important that evaluation results affect subsequent policy and practice, their adoption and use are not the only outcome measure for evaluations. We also want those evaluation results to have some reliability and validity, to stand the test of time, and to be generalisable to other settings with some confidence. An evaluation could affect policy and practice without necessarily being good quality, defined in terms of reliability and validity.

  • Nevertheless, I like Ben’s caution about focusing too much on evaluations as outputs, and the need to focus more on outcomes: the use and uptake of evaluations.

2. The section of Ben’s paper that most attracted my interest was the story about the Joint Evaluation of Emergency Assistance to Rwanda, and how the evaluation team managed to ensure it became “one of the most influential evaluations in the aid sector”. We need more case studies of these kinds of events, and then a systematic review of those case studies.

3. When I read various statements like this: “As well as a supply of credible evidence, effort needs to be made to understand the demand for evidence”, I have an image in my mind of evaluators as humble supplicants at the doorsteps of the high and mighty. Isn’t it about time that evaluators turned around and started demanding that policy makers disclose the evidence base of their existing policies? As I am sure others have said before, when you look around there does not seem to be much evidence of evidence-based policy making. Norms and expectations need to be built up, and then there may be more interest in what evaluations have to say. A more assertive and questioning posture is needed.

A guide to monitoring and evaluating policy influence

ODI Background Notes, February 2011. 12 pages
Author: Harry Jones
“This paper provides an overview of approaches to monitoring and evaluating policy influence and is intended as a guide, outlining challenges and approaches, with suggested further reading.”

“Summary: Influencing policy is a central part of much international development work. Donor agencies, for example, must engage in policy dialogue if they channel funds through budget support, to try to ensure that their money is well spent. Civil society organisations are moving from service delivery to advocacy in order to secure more sustainable, widespread change. And there is an increasing recognition that researchers need to engage with policy-makers if their work is to have wider public value.

Monitoring and evaluation (M&E), a central tool to manage interventions, improve practice and ensure accountability, is highly challenging in these contexts. Policy change is a highly complex process, shaped by a multitude of interacting forces and actors. ‘Outright success’, in terms of achieving specific, hoped-for changes, is rare, and the work that does influence policy is often unique and rarely repeated or replicated, with many incentives working against the sharing of ‘good practice’.

This paper provides an overview of approaches to monitoring and evaluating policy influence, based on an exploratory review of the literature and selected interviews with expert informants, as well as ongoing discussions and advisory projects for policy-makers and practitioners who also face the challenges of monitoring and evaluation. There are a number of lessons that can be learned, and tools that can be used, that provide workable solutions to these challenges. While there is a vast breadth of activities that aim to influence policy, and a great deal of variety in theory and practice according to each different area or type of organisation, there are also some clear similarities and common lessons.

Rather than providing a systematic review of practice, this paper is intended as a guide to the topic, outlining different challenges and approaches, with some suggestions for further reading.”
