Oxfam study of Monitoring, Evaluation and Learning in NGO Advocacy

Findings from Comparative Policy Advocacy MEL Review Project

by Jim Coe and Juliette Majot | February 2013. Oxfam and ODI

Executive Summary & Full text available as pdf

“For organizations committed to social change, advocacy often figures as a crucial strategic element. How to assess effectiveness in advocacy is, therefore, important. The usefulness of Monitoring, Evaluation and Learning (MEL) in advocacy is subject to much current debate. Advocacy staff, MEL professionals, senior managers, the funding community, and stakeholders of all kinds are searching for ways to improve practices – and thus their odds of success – in complex and contested advocacy environments. This study considers what a selection of leading advocacy organizations are doing in practice. We set out to identify existing practice and emergent trends in advocacy-related MEL practice, and to explore current challenges and innovations. The study presents perceptions of how MEL contributes to advocacy effectiveness, and reviews the resources and structures dedicated to MEL.

This inquiry was initiated, funded and managed by Oxfam America. The Overseas Development Institute (ODI) served an advisory role to the core project team, which included Gabrielle Watson of Oxfam America, and consultants Juliette Majot and Jim Coe. The following organizations participated in the inquiry: ActionAid International | Amnesty International | Bread for the World | CARE USA | Greenpeace International | ONE | Oxfam America | Oxfam Great Britain | Sierra Club”

Duggan & Bush on Evaluation in Settings Affected by Violent Conflict: What Difference Does Context Make?

From AEA365: A Tip-a-Day by and for Evaluators. Posted: 08 Feb 2013 12:51 AM PST

“We are Colleen Duggan, Senior Evaluation Specialist, International Development Research Centre (Canada) and Kenneth Bush, Director of Research, International Conflict Research (Northern Ireland).  For the past three years, we have been collaborating on a joint exploratory research project called Evaluation in Extremis:  The Politics and Impact of Research in Violently Divided Societies, bringing together researchers, evaluators, advocates and evaluation commissioners from the global North and South. We looked at the most vexing challenges and promising avenues for improving evaluation practice in conflict-affected environments.

CHALLENGES Conflict Context Affects Evaluation – and vice versa.  Evaluation actors working in settings affected by militarized or non-militarized violence suffer from the typical challenges confronting development evaluation.  But, conflict context shapes how, where and when evaluations can be undertaken – imposing methodological, political, logistical, and ethical challenges. Equally, evaluation (its conduct, findings, and utilization) may affect the conflict context – directly, indirectly, positively or negatively.

Lessons Learned:

Extreme conditions amplify the risks to evaluation actors.  Contextual volatility and political hyper-sensitivity must be explicitly integrated into the planning, design, conduct, dissemination, and utilization of evaluation.

  1. Some challenges may be anticipated and prepared for, others may not. By recognizing the most likely dangers/opportunities at each stage in the evaluation process we are better prepared to circumvent “avoidable risks or harm” and to prepare for unavoidable negative contingencies.
  2. Deal with politico-ethical dilemmas. Being able to recognize when ethical dilemmas (questions of good, bad, right and wrong) collide with political dilemmas (questions of power and control) is an important analytical skill for both evaluators and their clients.  Speaking openly about how politics and ethics – and not only methodological and technical considerations – influence all facets of evaluation in these settings reinforces local social capital and improves evaluation transparency.
  3. The space for advocacy and policymaking can open or close quickly, requiring readiness to use findings posthaste. Evaluators need to be nimble, responsive, and innovative in their evaluation use strategies.

Rad Resources:

  • 2013 INCORE Summer School Course on Evaluation in Conflict Prone Settings, University of Ulster, Derry/Londonderry (Northern Ireland). A 5-day skills-building course for early to mid-level professionals facing evaluation challenges in conflict prone settings or involved in commissioning, managing, or conducting evaluations in a programming or policy-making capacity.
  • Kenneth Bush and Colleen Duggan (2013) Evaluation in Extremis: The Politics and Impact of Research in Violently Divided Societies (SAGE: Delhi, forthcoming).

The Elusive Craft of Evaluating Advocacy

Original paper by Steven Teles, Department of Political Science, Johns Hopkins University, and Mark Schmitt, Roosevelt Institute. Published with support provided by The William and Flora Hewlett Foundation. Found courtesy of @alb202

A version of this paper was published in the Stanford Social Innovation Review in May 2011 and is available as a pdf

“The political process is chaotic and often takes years to unfold, making it difficult to use traditional measures to evaluate the effectiveness of advocacy organizations. There are, however, unconventional methods one can use to evaluate advocacy organizations and make strategic investments in that arena”

Measuring Up: HIV-related advocacy evaluation training pack (draft)

HIV-related advocacy evaluation training for civil society organisations.

Produced by the International HIV/AIDS Alliance (Secretariat), International Council of AIDS Service Organizations (ICASO), July 2010, 38 pages. Available as .pdf

“This training pack is published by the Alliance and the International Council of AIDS Service Organizations (ICASO) and consists of two guides designed for advocacy, monitoring and evaluation staff of civil society organisations (including networks) who are involved in designing, implementing and assessing advocacy projects at different levels. The purpose of these guides is to increase users’ capacity to evaluate the progress and results of their advocacy work. The guides aim to:

1. help users to identify and confront the challenges faced by community-based organisations evaluating HIV-related advocacy
2. introduce new thinking for designing advocacy evaluations
3. give users the opportunity to apply some aspects of the evaluation design process to their specific contexts
4. make users aware that advocacy evaluation is a fast-growing and evolving field, with a large number of publications on advocacy evaluation design, approaches and methods available via the Internet and summarised in the resources section of the learner’s guide.”

Addressing accountability in NGO advocacy: Practice, principles and prospects of self-regulation

Michael Hammer, Charlotte Rooney, and Shana Warren
ISSN 2043-7943 Briefing paper number 125, March 2010. One World Trust.

“Global and national non-governmental organisations (NGOs) are the most distinct organisational form of civil society, and as such have become increasingly involved and influential in forming public opinion and policy through targeted and professional campaigning and policy advocacy. Yet their growing power has also raised questions about the basis on which they engage in these activities, including their accountability and legitimacy in view of frequent explicit or implicit claims these organisations make to social representation, the quality of their research work, and the public benefit they provide.

Based on a world-wide survey of civil society self-regulatory initiatives undertaken by the One World Trust this paper examines how NGOs have begun to address the accountability challenges they face in particular when engaging in advocacy and explains some of the strengths and weaknesses of existing self-regulation for NGOs engaged in advocacy.

Research presented in the paper suggests that both normative and instrumental reasons account for the adoption of accountability principles by advocacy organisations through self-regulation, and that lessons learnt from the One World Trust’s parallel work on accountability principles for policy oriented research organisations can be usefully applied also to strengthen accountability of advocacy NGOs.

The briefing identifies for each major dimension of accountability a set of initial good practice principles for advocacy organisations, including on:
• transparency of the evidence basis used in advocacy, of funding and funders for specific campaigns and activities, and around forward looking information such as strategy, and the processes used to determine advocacy priorities;
• opportunities for participation of beneficiaries and other key stakeholders of the organisation in the development of advocacy objectives and their review; and
• the development of criteria for evaluating the impact of advocacy with beneficiaries and other stakeholders, and the establishment of feedback and complaints handling mechanisms to address individual experiences and problematic impacts.

The paper concludes with the identification of remaining challenges for research and self-regulation practice to strengthen accountability in advocacy by NGOs: how to deal with inherent tensions between objectivity and messaging in purpose-driven advocacy; how to protect the independence, freedoms and role of NGOs in the public policy process; and how to strengthen the connection between ethical practice in fundraising and self-regulation of policy advocacy work”

ActionAid reports on “systematization”

From the ActionAid website

“Systematization is the reconstruction of and analytical reflection about an experience. Through systematization, events are interpreted in order to understand them… The systematization allows for the experience to be discussed and compared with other similar experiences, and with existing theories and, thus, contributes to an accumulation of knowledge produced from and for practice” (Systematization Permanent Workshop in AAI systematization resource pack, pg 10, 2009).

“In 2009, IASL has produced two excellent resources on systematization. The first is a resource pack, which is one of the few English language resources on this exciting methodology. The pack will inform you about the methodology, and give you a detailed orientation to how to systematize experiences. You will also find links to other systematization resources and examples, and an existing bibliography of systematization materials”

“The second resource is Advocacy for Change, a systematization of advocacy experiences related to the status of youth (in Guatemala), the right to education (in Brazil) and farming (in the United States). The systematizations allowed the actors involved to consider the evolution of the experiences and to identify lessons and insights for future interventions. The Guatemala systematization product was documented in writing and film, the US experience in writing, and the Brazil experience in film”

New INTRAC publications on M&E

Tracking Progress in Advocacy – Why and How to Monitor and Evaluate Advocacy Projects and Programmes looks at the scope of, and rationale for, engaging in advocacy work as part of development interventions, then focuses on the monitoring and evaluation of these efforts – offering reasons why and when these processes should be planned and implemented, what’s involved, and who should be engaged in the process. By Janice Griffen, Dec 2009

The Challenges of Monitoring and Evaluating Programmes offers some clarity in understanding the different uses of the term ‘programme’, and uses the different types of programme to demonstrate the issues that arise for M&E. By Janice Griffen, Dec 2009

Pathfinder: A Practical Guide to Advocacy Evaluation

(email from Pathfinder) Hi Rick. We recently published a new resource for the advocacy evaluation field. Our new guide, Pathfinder: A Practical Guide to Advocacy Evaluation, comes in three editions, one each for advocates, evaluators, and funders. All three editions, plus a bibliography of useful resources, are free to download and share from our website:

Pathfinder provides a “big picture” view for planning and conducting an advocacy evaluation. Drawn from Innovation Network’s research and consulting experience, the guide encourages the adoption of a “learning-focused evaluation” approach, which prioritizes using knowledge for improvement.

Evaluating international advocacy networks

Two papers by Ricardo Wilson-Grau

Evaluating the Effects of International Advocacy Networks, Ricardo Wilson-Grau and Martha Nu. “This “think piece” will first sketch the special challenges of evaluating the effects of the advocacy work of international social change networks. I will then present the approach to evaluating advocacy that I use. This essay is an adaptation of my most recent writing on the broader subject of “Complexity and International Social Change Networks,” which is a chapter in a book by the Global Partnership for the Prevention of Armed Conflict. These networks almost by definition have an advocacy component, which often is their central activity. Furthermore, my belief is that to a greater or lesser extent, the challenges and the general evaluation methodology I outline in this essay are applicable to almost all social change organisations. I leave that judgement, however, to the reader.”

This is a paper presented at the Advocacy Impact Evaluation Workshop at the Evans School of Public Affairs, University of Washington, 4-6 December 2007, Seattle, WA, USA, sponsored by the Bill and Melinda Gates Foundation. A Spanish version, “Evaluación de las redes internacionales de cambio social – Efectos y desafíos de las redes internacionales de incidencia”, was published by Futuros 21: http://www.futuros21.info/detalle_articulo.asp?id_articulo=55

Evaluating International Social Change Networks: A conceptual framework for a participatory approach: Ricardo Wilson-Grau, 2007. “International networks for social change are growing in number and influence. While they need to be able to assess the extent to which they achieve their purpose and determine ways in which to be more effective, conventional evaluation methods are not designed for such complex organisational forms, or for the diverse kinds of activity to which they are characteristically dedicated. Building on an earlier version of their paper, the authors present a set of principles and participatory approaches that are more appropriate to the task of evaluating such networks.”

Published in Development in Practice, Volume 17, Number 2, April 2007

A Handbook Of Data Collection Tools: Companion To “A Guide To Measuring Advocacy And Policy”

Authors: Jane Reisman, Anne Gienapp, and Sarah Stachowiak
Publisher: Organizational Research Services
Publication Date: 2007

Abstract
What are examples of data collection tools for evaluating advocacy?

This handbook of tools is a companion to ORS’ “A Guide To Measuring Advocacy And Policy”. The data collection tools included in the handbook have actually been used to evaluate advocacy or related efforts. The data collection instruments apply to six outcomes areas:

* Shifts in Social Norms;
* Strengthened Organizational Capacity;
* Strengthened Alliances;
* Strengthened Base of Support;
* Improved Policies; and
* Changes in Impact.
