Guidance for designing, monitoring and evaluating peacebuilding projects: using theories of change

CARE, June 2012. Available as pdf

“To advance the use of theory-based inquiry within the field of peacebuilding, CARE International and International Alert undertook a two-and-a-half-year research project to develop light-touch methods to monitor and evaluate peacebuilding projects, and to pilot these in the Democratic Republic of Congo (DRC), Nepal and Uganda. This document, Guidance for designing, monitoring and evaluating peacebuilding projects: using theories of change, emerges from the efforts of peacebuilders who field-tested the processes to define and assess the changes to which they hoped to contribute.

The main audiences for this guide are conflict transformation and peacebuilding practitioners, non-governmental organisations (NGOs) and donor agencies. Other actors in the conflict transformation and peacebuilding field may also find it useful.”

Contents page

Acknowledgements
1. Overview
1.1 The problem we seek to address
1.2 The research that developed the guidance
1.3 Definitions
2. Theories of change
2.1 What is a theory of change?
2.2 Why is it important to explicitly state theories of change?
3. Using theories of change for project or programme design
3.1 Carry out a conflict analysis
3.2 Design an intervention
3.3 Develop a results hierarchy
3.4 Articulate the theories of change
4. Monitoring and evaluation of a project or programme based on its theories of change
4.1 Identify / refine the theories of change
4.2 Assess a project or programme’s relevance
4.3 Decide what you want to learn: choose which theory of change
4.4 Undertake outcome evaluation
4.5 Design a research plan using the monitoring and evaluation grid to assess whether the theory of change is functioning as expected, and collect data according to the plan
4.6 Data collection methods
4.7 Helpful hints to manage data collection and analysis
4.8 Analysis of data
5. Present your findings and ensure their use
Annex 1: Questions to ask to review a conflict analysis
Annex 2: A selection of conflict analysis tools and frameworks
Annex 3: Additional resources
Notes

Impact Evaluation: A Discussion Paper for AusAID Practitioners

“There are diverse views about what impact evaluations are and how they should be conducted. It is not always easy to identify and understand good approaches to impact evaluation for various development situations. This may limit the value that AusAID can obtain from impact evaluation.

This discussion paper aims to support appropriate and effective use of impact evaluations in AusAID by providing AusAID staff with information on impact evaluation. It provides staff who commission impact evaluations with a definition, guidance and minimum standards.

This paper, while authored by ODE, is an initiative of AusAID’s Impact Evaluation Working Group. The working group was formed by a sub-group of the Performance and Quality Network in 2011 to provide better coordination and oversight of impact evaluation in AusAID.”

ODE welcomes feedback on this discussion paper at ODE@ausaid.gov.au

Oxfam GB’s new Global Performance Framework + their Effectiveness Review reports

“As some of you will be aware, we have been working to develop and implement Oxfam GB’s new Global Performance Framework – designed to enable us to be accountable to a wide range of stakeholders and get better at understanding and communicating the effectiveness of a global portfolio comprising over 250 programmes and 1,200 associated projects in 55 countries in a realistic, cost-effective and credible way.

The framework considers six core indicator areas for the organisation: humanitarian response, adaptation and risk reduction (ARR), livelihood enhancement, women’s empowerment, citizen voice, and policy influencing. All relevant projects are required to report output data against these areas on an annual basis. This – referred to as Global Output Reporting (GOR) – enables us to better understand and communicate the scale and scope of much of our work.

To be fully accountable, however, we still want to understand and evidence whether all this work is bearing fruit. We realise that this cannot be done by requesting all programmes to collect data against a global set of outcome indicators. Such an exercise would be resource intensive and difficult to quality control. Moreover, while it has the potential to generate interesting statistics, there would be no way of directly linking the observed outcome changes back to our work. Instead, we drill down and rigorously evaluate random samples of our projects under each of the above thematic areas. We call these intensive evaluation processes Effectiveness Reviews. [A minimal illustration of this kind of per-theme random sampling appears after this message.]

The first year of effectiveness review reports are now up on the web, with our own Karl Hughes introducing the effort on the Poverty to Power blog today. Here you will find introductory material, a summary of the results for 2011/12, two-page summaries of each effectiveness review, as well as the full reports. Eventually, all the effectiveness reviews we carry out or commission will be available from this site, unless there are good reasons why they cannot be publicly shared, e.g. security issues.

Have a look, and please do send us your comments – either publicly on the Poverty to Power blog or through this listserv, or bilaterally. We very much value having ‘critical friends’ to help us think through and improve these processes.

Thanks,
Claire

Claire Hutchings
Global Advisor – Monitoring, Evaluation & Learning (Campaigns & Advocacy)
Programme Performance & Accountability Team
Oxfam GB
Work direct: +44 (0) 1865 472204
Skype: claire.hutchings.ogb
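
Not something Oxfam describes in code, but the core mechanic of the Effectiveness Reviews – drawing a random sample of projects within each thematic area – is easy to illustrate. A minimal sketch in Python, with an entirely invented project registry and a fixed seed so the draw is reproducible and auditable:

```python
import random

# Hypothetical project registry grouped by indicator area; in practice this
# would be pulled from the organisation's project database.
projects = {
    "humanitarian response": ["HR-001", "HR-002", "HR-003", "HR-004"],
    "women's empowerment": ["WE-001", "WE-002", "WE-003"],
    "citizen voice": ["CV-001", "CV-002"],
}

random.seed(2012)  # fixed seed: the same sample can be re-drawn and verified later

# Draw up to two projects per thematic area for intensive evaluation.
sample = {
    area: random.sample(pool, k=min(2, len(pool)))
    for area, pool in projects.items()
}

for area, chosen in sample.items():
    print(f"{area}: {chosen}")
```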

Special Issue on Systematic Reviews – J. of Development Effectiveness

Volume 4, Issue 3, 2012

  • Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews
  • How to do a good systematic review of effects in international development: a tool kit
    • Hugh Waddington, Howard White, Birte Snilstveit, Jorge Garcia Hombrados, Martina Vojtkova, Philip Davies, Ami Bhavsar, John Eyers, Tracey Perez Koehlmoos, Mark Petticrew, Jeffrey C. Valentine & Peter Tugwell, pages 359-387
  • Systematic reviews: from ‘bare bones’ reviews to policy relevance
  • Narrative approaches to systematic review and synthesis of evidence for international development policy and practice
  • Purity or pragmatism? Reflecting on the use of systematic review methodology in development
  • The benefits and challenges of using systematic reviews in international development research
    • Richard Mallett, Jessica Hagen-Zanker, Rachel Slater & Maren Duvendack, pages 445-455
  • Assessing ‘what works’ in international development: meta-analysis for sophisticated dummies
    • Maren Duvendack, Jorge Garcia Hombrados, Richard Palmer-Jones & Hugh Waddington, pages 456-471
  • The impact of daycare programmes on child health, nutrition and development in developing countries: a systematic review

Tools and Methods for Evaluating the Efficiency of Development Interventions

The report has been commissioned by the German Federal Ministry for Economic Cooperation and Development (BMZ).

Foreword: “Previous BMZ Evaluation Working Papers have focused on measuring impact. The present paper explores approaches for assessing efficiency. Efficiency is a powerful concept for decision making and for ex post assessments of development interventions, but it is nevertheless often treated rather superficially in project appraisal, project completion and evaluation reports. Assessing efficiency is not an easy task, but one with potential for improvement, as the report shows. Starting with definitions and theoretical foundations, the author proposes a three-level classification related to the analytical power of efficiency analysis methods. Based on an extensive literature review and a broad range of interviews, the report identifies and describes 15 distinct methods and explains how they can be used to assess efficiency. It concludes with an overall assessment of the methods described and with recommendations for their application and further development.”
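
To make the concept concrete (this example is ours, not the report’s): the simplest kind of efficiency analysis compares unit costs across interventions that deliver a comparable output. A short sketch in Python, with invented figures:

```python
# Illustrative unit-cost comparison; all figures are invented.
interventions = {
    "Intervention A": {"cost": 120_000, "children_reached": 4_000},
    "Intervention B": {"cost": 90_000, "children_reached": 2_500},
}

for name, d in interventions.items():
    unit_cost = d["cost"] / d["children_reached"]
    print(f"{name}: {unit_cost:.2f} per child reached")

# Output: Intervention A costs 30.00 per child reached, B costs 36.00, so A is
# more efficient on this (deliberately crude) measure. More analytically
# powerful methods compare outcomes and impacts, not just outputs.
```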

Click here to download the presentation held at the meeting of the OECD DAC Network on Development Evaluation in Paris on June 24, 2011, and here for the presentation held at the annual conference of the American Evaluation Association in Anaheim on November 3, 2011.

For questions, you can reach the author at markus@devstrat.org.

We hope you enjoy the report,

Michaela Zintl (Head of Evaluation and Audit Division, Federal Ministry for Economic Cooperation and Development) and Markus Palenberg (Director, Institute for Development Strategy)

European Evaluation Society Conference Helsinki, 1-5 October, 2012 – docs available

New concepts – New challenges – New solutions

Helsinki, 1-5 October, 2012

Abstracts available as a 255-page online book here, or as a pdf here [it’s big!]

Plus a list of participants (presenters and others), including their contact details

Follow tweets about the conference by participants at #eesconf

Learning about Gender Equality

“Testing the ability of the Most Significant Change Methodology to make cultural changes visible and learn about gender equality”

Published by Oxfam Novib, 2012. English version / French version

Foreword:

“Monitoring and evaluation systems make it possible for organizations to determine whether their programs are having the desired impact and achieving the changes they set out to create. These systems help us determine whether we are taking the right actions to reach our various objectives. Consequently, we can adjust our strategies accordingly and be held accountable for our work.

Ideally.

Throughout the international development sector, organizations are struggling to build effective and efficient monitoring and evaluation systems that facilitate useful data collection and analysis. One of the areas in which these systems are providing limited insight, however, is that of cultural change in gender relations.

Oxfam Novib’s Gender Mainstreaming and Leadership Trajectory aims to incite cultural change, but, until recently, the monitoring and evaluation tools at hand were not able to capture those changes. In a search to find appropriate tools to make cultural change visible, we experimented with the Most Significant Change methodology. This report presents the main findings of this action research, as well as the associated lessons that may be beneficial to others facing similar issues.

Through this experiment, we have come to the conclusion that the Most Significant Change methodology brings additional value to our current monitoring and evaluation system and to that of our partner organizations. The methodology helped us collect evidence of behavioural and attitudinal changes regarding gender equality. It encouraged critical reflection and learning on the way we look at these types of changes and on the strategies we use to promote gender equality.

It has been exciting to hear from women who are no longer looked at with disdain, but recognized as important agents of change. Equally encouraging were the stories of men who changed their perception of the abilities of their female colleagues and now see them as equally capable.

We also realized that this methodology brings its own challenges. Time, as well as human and financial resources, has to be made available to put the methodology into practice. Using the methodology also requires a new way of working. Instead of dealing with a linear chain of results, there is a need to find space for an open discussion about significant changes that can be attributed to a program and how to support these changes. An open attitude towards discussing strategies and learning from successes and mistakes is essential. These are serious challenges in a rapidly changing development sector that faces an ever-growing demand for quick and quantifiable results and increasing competition for available funding.

As agents of development, Oxfam Novib and our partner organizations are driven to support positive change. We are continuously seeking new ways of coming closer to our goal: a just world without poverty. The experiment with the Most Significant Change methodology has inspired us to rethink and improve our ways of working. I hope this publication will do the same for you.”

Adrie Papma, Business Director, Oxfam Novib


The precarious nature of knowledge – a lesson that we have not yet learned?

Is medical science built on shaky foundations? by Elizabeth Iorns, New Scientist, 15 September 2012.

The following text is relevant to the debate about the usefulness of randomised controlled trials (RCTs) in assessing the impact of development aid initiatives. RCTs are an essential part of medical science research, but they are by no means the only research method used. The article continues…

“More than half of biomedical findings cannot be reproduced – we urgently need a way to ensure that discoveries are properly checked

REPRODUCIBILITY is the cornerstone of science. What we hold as definitive scientific fact has been tested over and over again. Even when a fact has been tested in this way, it may still be superseded by new knowledge. Newtonian mechanics became a special case of Einstein’s general relativity; molecular biology’s mantra “one gene, one protein” became a special case of DNA transcription and translation.

One goal of scientific publication is to share results in enough detail to allow other research teams to reproduce them and build on them. However, many recent reports have raised the alarm that a shocking amount of the published literature in fields ranging from cancer biology to psychology is not reproducible.

Pharmaceuticals company Bayer, for example, recently revealed that it fails to replicate about two-thirds of published studies identifying possible drug targets (Nature Reviews Drug Discovery, vol 10, p 712).

Bayer’s rival Amgen reported an even higher rate of failure – over the past decade its oncology and haematology researchers could not replicate 47 of 53 highly promising results they examined (Nature, vol 483, p 531). Because drug companies scour the scientific literature for promising leads, this is a good way to estimate how much biomedical research cannot be replicated. The answer: the majority” (read the rest of the article here)

See also: Should Deworming Policies in the Developing World be Reconsidered? The sceptical findings of a systematic review of the impact of deworming initiatives in schools. Deworming has been one of the methods found effective via RCTs, and widely publicised as an example of how RCTs can really find out what works. The quote below is from Paul Garner’s comments on the systematic review. The same web page also has rejoinders to Garner’s comments, which are also worth reading.

“The Cochrane review on community programmes to deworm children of intestinal helminths has just been updated. We want people to read it, particularly those with an influence on policy, because it is important to understand the evidence, but the message is pretty clear. For the community studies where you treat all school children (which is what WHO advocates) there were some older studies which show an effect on weight gain after a single dose of deworming medicine; but for the most part, the effects on weight, haemoglobin, cognition, school attendance, and school performance are either absent, small, or not statistically significant. We also found some surprises: a trial published in the British Medical Journal reported that deworming led to better weight gain in a trial of more than 27,000 children, but in fact the statistical test was wrong and in reality the trial did not detect a difference. We found a trial that examined school performance in 2659 children in Vietnam that did not demonstrate a difference in cognition or weight; it has never been published even though it was completed in 2006. We also note that a trial of 1 million children from India, which measured mortality and for which data collection was completed in 2004, has never been published. This challenges the principles of scientific integrity. However, I heard within the last week that the authors do intend to get the results into the public domain – which is where they belong.

We want to see powerful interventions that help people out of poverty, but they need to work, otherwise we are wasting everyone’s time and money. Deworming schoolchildren to rid them of intestinal helminths seems a good idea in theory, but the evidence for it just doesn’t stack up. We want policy makers to look at the evidence and the message and consider if deworming is as good as it is cracked up to be.”

Taylor-Robinson et al., “Deworming drugs for soil-transmitted intestinal worms in children: effects on nutritional indicators, haemoglobin and school performance”, Cochrane Database of Systematic Reviews, 2012.

See also: Truth decay: The half-life of facts, by Samuel Arbesman, New Scientist, 19 September 2012

IN DENTAL school, my grandfather was taught the number of chromosomes in a human cell. But there was a problem. Biologists had visualised the nuclei of human cells in 1912 and counted 48 chromosomes, and it was duly entered into the textbooks studied by my grandfather. In 1953, the prominent cell biologist Leo Sachs even said that “the diploid chromosome number of 48 in man can now be considered as an established fact”.

Then in 1956, Joe Hin Tjio and Albert Levan tried a new technique for looking at cells. They counted over and over until they were certain they could not be wrong. When they announced their result, other researchers remarked that they had counted the same, but figured they must have made a mistake. Tjio and Levan had counted only 46 chromosomes, and they were right.

Science has always been about getting closer to the truth, …

See also the book by the same author, “The Half-Life of Facts: Why Everything We Know Has an Expiration Date”, on Amazon. Published October 2012.

See also: Why Most Biomedical Findings Echoed by Newspapers Turn out to be False: the Case of Attention Deficit Hyperactivity Disorder, by François Gonon, Jan-Pieter Konsman, David Cohen and Thomas Boraud, PLOS ONE, 2012

Summary: newspapers are biased toward reporting early studies that may later be refuted: 7 of the top 10 ADHD studies covered by the media were later attenuated or refuted without much attention.

Newspaper coverage of biomedical research leans heavily toward reports of initial findings, which are frequently attenuated or refuted by later studies, leading to disproportionate media coverage of potentially misleading early results, according to a report published Sep. 12 in the open access journal PLOS ONE.

The researchers, led by François Gonon of the University of Bordeaux, used ADHD (attention deficit hyperactivity disorder) as a test case and identified 47 scientific research papers published during the 1990s on the topic that were covered by 347 newspaper articles. Of the top 10 articles covered by the media, they found that 7 were initial studies. All 7 were either refuted or strongly attenuated by later research, but these later studies received much less media attention than the earlier papers. Only one of the 57 newspaper articles echoing these subsequent studies mentioned that the corresponding initial finding had been attenuated. The authors write that, if this phenomenon is generalizable to other health topics, it likely causes a great deal of distortion in health science communication.

See also “The drugs don’t work – a modern medical scandal. The doctors prescribing them don’t know that. Nor do their patients. The manufacturers know full well, but they’re not telling” by Ben Goldacre, the Guardian Weekend, 22 September 2012, p21-29

Excerpt: “In 2010, researchers from Harvard and Toronto found all the trials looking at five major classes of drug – antidepressants, ulcer drugs and so on – then measured two key features: were they positive, and were they funded by industry? They found more than 500 trials in total: 85% of the industry-funded studies were positive, but only 50% of the government-funded trials were. In 2007, researchers looked at every published trial that set out to explore the benefits of a statin. These cholesterol-lowering drugs reduce your risk of having a heart attack and are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. They found that industry-funded trials were 20 times more likely to give results favouring the test drug.

These are frightening results, but they come from individual studies. So let’s consider systematic reviews into this area. In 2003, two were published. They took all the studies ever published that looked at whether industry funding is associated with pro-industry results, and both found that industry-funded trials were, overall, about four times more likely to report positive results. A further review in 2007 looked at the new studies in the intervening four years: it found 20 more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.

It turns out that this pattern persists even when you move away from published academic papers and look instead at trial reports from academic conferences. James Fries and Eswar Krishnan, at the Stanford University School of Medicine in California, studied all the research abstracts presented at the 2001 American College of Rheumatology meetings which reported any kind of trial and acknowledged industry sponsorship, in order to find out what proportion had results that favoured the sponsor’s drug.

The results section is a single, simple and – I like to imagine – fairly passive-aggressive sentence: “The results from every randomised controlled trial (45 out of 45) favoured the drug of the sponsor.”

Read more in Ben Goldacre’s new book “Bad Pharma: How drug companies mislead doctors and harm patients”, published in September 2012
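
One caution when reading the “X times more likely” figures in the excerpt above: the multiplier depends on whether a risk ratio or an odds ratio is being reported, and the two can diverge sharply. A quick illustration using the 85% versus 50% positive-trial figures quoted above (the underlying reviews may report either measure; the calculation below is ours, for illustration only):

```python
# How "times more likely" depends on the measure used. The percentages are the
# ones quoted in the excerpt; the comparison itself is purely illustrative.
industry_positive = 0.85    # share of industry-funded trials with positive results
government_positive = 0.50  # share of government-funded trials with positive results

risk_ratio = industry_positive / government_positive
odds_ratio = (industry_positive / (1 - industry_positive)) / (
    government_positive / (1 - government_positive)
)

print(f"risk ratio: {risk_ratio:.1f}")  # 1.7 'times more likely'
print(f"odds ratio: {odds_ratio:.1f}")  # 5.7 'times more likely'
```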

See also: Reflections on bias and complexity, May 29, 2012, by Ben Ramalingam, which discusses a paper in Nature (May 2012) by Daniel Sarewitz titled “Beware the creeping cracks of bias”: “Evidence is mounting that research is riddled with systematic errors. Left unchecked, this could erode public trust…”

Review of the use of ‘Theory of Change’ in International Development

By Isabel Vogel. Funded by DFID, 2012

Review of the use of ‘Theory of Change’ in international development (full report)
Review of the use of ‘Theory of Change’ in international development (summary)
Appendix 3: Examples of Theories of Change

1. Executive Summary
‘Theory of change’ is an outcomes-based approach which applies critical thinking to the design, implementation and evaluation of initiatives and programmes intended to support change in their contexts. It is increasingly used in international development by a wide range of governmental, bilateral and multilateral development agencies, civil society organisations, international non-governmental organisations and research programmes intended to support development outcomes. The UK’s Department for International Development (DFID) commissioned this review of how theory of change is being used in order to learn from this growing area of practice. DFID has been working formally with theory of change in its programming since 2010. The purpose was to identify areas of consensus, debate and innovation in order to inform a more consistent approach within DFID.

Understanding ‘Theory of Change’ in International Development: A Review of Existing Knowledge

Danielle Stein and Craig Valters, July 2012. Available as pdf.

This publication is an output from a collaboration between The Asia Foundation and the [LSE] Justice and Security Research Programme.

Summary

This is a review of the concepts and common debates within ‘Theory of Change’ (ToC) material, resulting from a search and detailed analysis of available donor, agency and expert guidance documents. The review was undertaken as part of a Justice and Security Research Programme (JSRP) and The Asia Foundation (TAF) collaborative project, and focuses on the field of international development. The project will explore the use of Theories of Change (ToCs) in international development programming, with field research commencing in August 2012. While this document will specifically underpin the research of this collaboration, we also hope it will be of interest to a wider audience of those attempting to come to grips with ToC and its associated literature.

From the literature, we find that there is no consensus on how to define ToC, although it is commonly understood as an articulation of how and why a given intervention will lead to specific change. We identify four main purposes of ToC – strategic planning, description, monitoring and evaluation, and learning – although these inevitably overlap. For this reason, we have adopted the term ‘ToC approaches’ to identify the range of applications associated with this term. Additionally, we identify some confusion in the terminology associated with ToC. Of particular note is the lack of clarity surrounding the use of the terms ‘assumption’ and ‘evidence’. Finally, we have also drawn out information on what authors feel makes for ToC ‘best practice’ in terms of both content and process, alongside an exploration of the remaining gaps where more clarity is needed.

A number of ‘key issues’ are highlighted throughout this review. These points are an attempt to frame the literature reviewed analytically, as informed by the specific focus of the JSRP-TAF collaboration. These issues are varied and include the confusion surrounding ToC definitions and use, the need to ‘sell’ a ToC to a funder, how one can know which ‘level’ a ToC should operate on, the relationship between ToC and evidence-based policy, and the potential for accuracy, honesty and transparency in the use of ToC approaches.

This paper does not aim to give definitive answers on ToC; indeed there are many remaining important issues that lie beyond the scope of this review. However, in highlighting a number of key issues surrounding current understandings of ToC approaches, this review hopes to pave the way for more constructive and critical discussion of both the concept and practical application of ToCs.
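
One way of seeing why the terms ‘assumption’ and ‘evidence’ need to be kept distinct: a theory of change can be thought of as a chain of if-then links, each resting on stated assumptions and backed (or not) by evidence. A minimal, hypothetical sketch in Python – not from the paper, and with all names invented:

```python
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """One 'if X then Y' step in a theory of change."""
    cause: str
    effect: str
    assumptions: list = field(default_factory=list)  # what must hold for the link to work
    evidence: list = field(default_factory=list)     # what supports the link

theory_of_change = [
    CausalLink(
        cause="Community mediators are trained",
        effect="Local disputes are referred to mediators",
        assumptions=["Disputants trust the mediators"],
        evidence=["Baseline survey of dispute-resolution channels"],
    ),
    CausalLink(
        cause="Local disputes are referred to mediators",
        effect="Fewer disputes escalate to violence",
        assumptions=["Mediated agreements are respected"],
        evidence=[],  # an evidence gap: flag it for monitoring
    ),
]

# Making each link explicit shows exactly where the theory rests on
# untested assumptions rather than on evidence.
for link in theory_of_change:
    status = "evidenced" if link.evidence else "assumption only"
    print(f"IF {link.cause} THEN {link.effect} [{status}]")
```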
