Making causal claims

by John Mayne. ILAC Brief, October 2012. Available as pdf

“An ongoing challenge in evaluation is the need to make credible causal claims linking observed results to the actions of interventions. In the very common situation where the intervention is only one of a number of causal factors at play, the problem is compounded – no one factor ’caused’ the result. The intervention on its own is neither necessary nor sufficient to bring about the result. The Brief argues the need for a different perspective on causality. One can still speak of the intervention making a difference in the sense that the intervention was a necessary element of a package of causal factors that together were sufficient to bring about the results. It was a contributory cause. The Brief further argues that theories of change are models showing how an intervention operates as a contributory cause. Using theories of change, approaches such as contribution analysis can be used to demonstrate that the intervention made a difference – that it was a contributory cause – and to explain how and why.”

See also Making Causal Claims by John Mayne at IPDET 2012, Ottawa

RD Comments:

What I like in this paper: The definition of a contributory cause as something neither necessary nor sufficient, but a necessary part of a package of causes that is sufficient for an outcome to occur.

I also like the view that “theories of change are models of causal sufficiency”

But I query the usefulness of distinguishing between contributory causes that are triggering causes, sustaining causes and enabling causes, mainly on the grounds of the difficulty of reliably identifying them.

I am more concerned with the introduction of probabilistic statements about “likely” necessity and “likely” sufficiency, because it increases the ease with which claims of causal contribution can be made, perhaps way too much. Michael Patton recently expressed a related anxiety: “There is a danger that as stakeholders learn about the non-linear dynamics of complex systems and come to value contribution analysis, they will be inclined to always find some kind of linkage between implemented activities and desired outcomes…. In essence, the concern is that treating contribution as the criterion (rather than direct attribution) is so weak that a finding of no contribution is extremely unlikely.”

John Mayne’s paper distinguishes between four approaches to demonstrating causality (adapted from Stern et al., 2012:16-17):

  • “Regularity frameworks that depend on the frequency of association between cause and effect – the basis for statistical approaches making causal claims
  • Counterfactual frameworks that depend on the difference between two otherwise identical cases – the basis for experimental and quasi-experimental approaches to making causal claims
  • Comparative frameworks that depend on combinations of causes that lead to an effect – the basis for ‘configurational’ approaches to making causal claims, such as qualitative comparative analysis
  • Generative frameworks that depend on identifying the causal links and mechanisms that explain effects – the basis for theory-based and realist approaches to making causal claims.”
I would simplify these into two broad categories, with sub-categories:
  • Claims can be made about the co-variance of events
    • Counterfactual approaches: describing the effects of the presence and absence of an intervention on an outcome of interest, with all other conditions kept the same
    • Configurational approaches, describing the effects of the presence and absence of multiple conditions (relating to both context and intervention)
    • Statistical approaches, describing the effects of more complex mixes of variables
  • Claims can be made about causal mechanisms underlying each co-variance that is found

Good causal claims contain both elements: evidence of co-variance and plausible or testable explanations of why each co-variance exists. One without the other is insufficient. You can start with a theory (a proposed mechanism) and look for supporting co-variance, or start with a co-variance and look for a supporting mechanism. Currently, theory-led approaches are in vogue.

For more on causal mechanisms, see Causal Mechanisms in the Social Sciences by Peter Hedstrom and Petri Ylikoski.
See also my blog posting on Representing different combinations of causal conditions, for means of distinguishing different configurations of necessary and sufficient conditions.
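To make these distinctions a little more concrete, here is a minimal sketch (mine, not Mayne's; the case data and condition names are invented) of how necessary, sufficient and contributory (INUS-style) causes could be checked against a small set of cases of the kind used in configurational approaches:

```python
# Hypothetical illustration only: invented cases, each recording which
# conditions were present and whether the outcome occurred.
cases = [
    {"intervention": True,  "local_support": True,  "funding": True,  "outcome": True},
    {"intervention": True,  "local_support": False, "funding": True,  "outcome": False},
    {"intervention": False, "local_support": True,  "funding": True,  "outcome": True},
    {"intervention": False, "local_support": True,  "funding": False, "outcome": False},
]

def necessary(cond):
    """Necessary: the outcome never occurs in a case where cond is absent."""
    return all(case[cond] for case in cases if case["outcome"])

def sufficient(cond):
    """Sufficient: the outcome occurs in every case where cond is present."""
    return all(case["outcome"] for case in cases if case[cond])

def package_sufficient(conds):
    """A package is sufficient if every case with all its conditions present
    shows the outcome (and at least one such case exists)."""
    matching = [case for case in cases if all(case[c] for c in conds)]
    return bool(matching) and all(case["outcome"] for case in matching)

def contributory(cond, package):
    """Contributory (INUS-style): the package is sufficient, but stops being
    sufficient once cond is removed, so cond is a necessary part of it."""
    rest = [c for c in package if c != cond]
    return package_sufficient(package) and not package_sufficient(rest)

package = ["intervention", "local_support"]
print("necessary on its own:   ", necessary("intervention"))            # False
print("sufficient on its own:  ", sufficient("intervention"))           # False
print("contributory in package:", contributory("intervention", package))  # True
```

In this made-up data the intervention is neither necessary nor sufficient on its own, but the package of intervention plus local support is sufficient and ceases to be so without the intervention, which is exactly Mayne's sense of a contributory cause.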

Free, relevant, well-organised online courses: Statistics, Model Thinking and others

Provided FREE by Coursera in cooperation with Princeton, Stanford and other Universities

Each opening page gives this information: About the Course, About the Instructor, The Course Syllabus, Introductory Video, Recommended Background, Suggested Readings, Course Format, and FAQs.

Example class format: “Each week of class consists of multiple 8-15 minute long lecture videos, integrated weekly quizzes, readings, an optional assignment and a discussion. Most weeks will also have a peer reviewed assignment, and there will be the opportunity to participate in a community wiki-project. There will be a comprehensive exam at the end of the course.”

The contents of past courses remain accessible.

RD Comment: Highly Recommended! [I am doing the stats course this week]

Evaluating the impact of knowledge brokering work

Analysis of an e-discussion on the Knowledge Brokers’ Forum. Available as pdf.

by Catherine Fisher, Impact and Learning Team, Institute of Development Studies, January 2012

Introduction…

“This paper summarises a rich discussion about how to evaluate the impact of Knowledge Brokering work that took place on the Knowledge Brokers Forum during October and November 2011. The debate aimed to share members’ experience and insights about evaluating impact in order to be better able to evaluate our own work and build greater understanding of the potential of the sector. This summary aims to draw together the richness of the discussion, bring together themes and identify emerging areas of consensus and ideas for action.”

CONTENTS
1. Introduction
2. Understanding the purpose of your work is the basis for evaluation
3. Be clear why you are evaluating your work
4. Understand what you mean by impact
5. The challenge of indicators and metrics
6. Methodologies and approaches
7. Looking forwards
8. Statistics and feedback about e-discussion
9. Contributors

See also: Background note for e-discussion on evaluating the impact of knowledge brokering work, by Catherine Fisher, October 2011.

 

Guidance for designing, monitoring and evaluating peacebuilding projects: using theories of change

CARE, June 2012. Available as pdf

“To advance the use of theory-based inquiry within the field of peacebuilding, CARE International and International Alert undertook a two and a half year research project to develop light touch methods to monitor and evaluate peacebuilding projects, and pilot these in Democratic Republic of Congo (DRC), Nepal and Uganda. This document, Guidance for designing, monitoring and evaluating peacebuilding projects: using theories of change, emerges from the efforts of peacebuilders who field tested the processes to define and assess the changes to which they hoped to contribute.

The main audiences for this guide are conflict transformation and peacebuilding practitioners, non-governmental organisations (NGOs) and donor agencies. Other actors in the conflict transformation and peacebuilding field may also find it useful.”

Contents page

Acknowledgements
1. Overview
1.1 The problem we seek to address
1.2 The research that developed the guidance
1.3 Definitions
2. Theories of change
2.1 What is a theory of change?
2.2 Why is it important to explicitly state theories of change?
3. Using theories of change for project or programme design
3.1 Carry out a conflict analysis
3.2 Design an intervention
3.3 Develop a results hierarchy
3.4 Articulate the theories of change
4. Monitoring and evaluating a project or programme based on its theories of change
4.1 Identify / refine the theories of change
4.2 Assess a project or programme’s relevance
4.3 Decide what you want to learn: choose which theory of change
4.4 Undertake outcome evaluation
4.5 Design a research plan using the monitoring and evaluation grid to assess whether the theory of change is functioning as expected, and collect data according to the plan
4.6 Data collection methods
4.7 Helpful hints to manage data collection and analysis
4.8 Analysis of data
5.  Present your findings and ensure their use
Annex 1: Questions to ask to review a conflict analysis
Annex 2: A selection of conflict analysis tools and frameworks
Annex 3: Additional resources
Notes

Impact Evaluation: A Discussion Paper for AusAID Practitioners

“There are diverse views about what impact evaluations are and how they should be conducted. It is not always easy to identify and understand good approaches to impact evaluation for various development situations. This may limit the value that AusAID can obtain from impact evaluation.

This discussion paper aims to support appropriate and effective use of impact evaluations in AusAID by providing AusAID staff with information on impact evaluation. It provides staff who commission impact evaluations with a definition, guidance and minimum standards.

This paper, while authored by ODE, is an initiative of AusAID’s Impact Evaluation Working Group. The working group was formed by a sub-group of the Performance and Quality Network in 2011 to provide better coordination and oversight of impact evaluation in AusAID.”

ODE welcomes feedback on this discussion paper at ODE@ausaid.gov.au

Oxfam GB’s new Global Performance Framework + their Effectiveness Review reports

“As some of you will be aware, we have been working to develop and implement Oxfam GB’s new Global Performance Framework – designed to enable us to be accountable to a wide range of stakeholders and get better at understanding and communicating the effectiveness of a global portfolio comprised of over 250 programmes and 1,200 associated projects in 55 countries in a realistic, cost-effective, and credible way.  

The framework considers six core indicator areas for the organisation: humanitarian response, adaptation and risk reduction (ARR), livelihood enhancement, women’s empowerment, citizen voice, and policy influencing. All relevant projects are required to report output data against these areas on an annual basis. This – referred to as Global Output Reporting (GOR) – enables us to better understand and communicate the scale and scope of much of our work.

To be fully accountable, however, we still want to understand and evidence whether all this work is bearing fruit.  We realise that this cannot be done by requesting all programmes to collect data against a global set of outcome indicators.   Such an exercise would be resource intensive and difficult to quality control.  Moreover, while it has the potential of generating interesting statistics, there would be no way of directly linking the observed outcome changes back to our work.  Instead, we drill down and rigorously evaluate random samples of our projects under each of the above thematic areas. We call these intensive evaluation processes Effectiveness Reviews.

The first year of effectiveness review reports are now up on the web, with our own Karl Hughes introducing the effort on the Poverty to Power blog today. Here you will find introductory material, a summary of the results for 2011/12, two-page summaries of each effectiveness review, as well as the full reports. Eventually, all the effectiveness reviews we carry out/commission will be available from this site, unless there are good reasons why they cannot be publicly shared, e.g. security issues.

Have a look, and please do send us your comments – either publicly on the Poverty to Power blog or through this listserv, or bilaterally. We very much value having ‘critical friends’ to help us think through and improve these processes.

Thanks,
Claire

Claire Hutchings
Global Advisor – Monitoring, Evaluation & Learning (Campaigns & Advocacy)
Programme Performance & Accountability Team
Oxfam GB
Work direct: +44 (0) 1865 472204
Skype: claire.hutchings.ogb

Special Issue on Systematic Reviews – J. of Development Effectiveness

Volume 4, Issue 3, 2012

  • Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews
  • How to do a good systematic review of effects in international development: a tool kit
    • Hugh Waddington, Howard White, Birte Snilstveit, Jorge Garcia Hombrados, Martina Vojtkova, Philip Davies, Ami Bhavsar, John Eyers, Tracey Perez Koehlmoos, Mark Petticrew, Jeffrey C. Valentine & Peter Tugwell, pages 359-387
  • Systematic reviews: from ‘bare bones’ reviews to policy relevance
  • Narrative approaches to systematic review and synthesis of evidence for international development policy and practice
  • Purity or pragmatism? Reflecting on the use of systematic review methodology in development
  • The benefits and challenges of using systematic reviews in international development research
    • Richard Mallett, Jessica Hagen-Zanker, Rachel Slater & Maren Duvendack, pages 445-455
  • Assessing ‘what works’ in international development: meta-analysis for sophisticated dummies
    • Maren Duvendack, Jorge Garcia Hombrados, Richard Palmer-Jones & Hugh Waddington, pages 456-471
  • The impact of daycare programmes on child health, nutrition and development in developing countries: a systematic review

Tools and Methods for Evaluating the Efficiency of Development Interventions

The report has been commissioned by the German Federal Ministry for Economic Cooperation and Development (BMZ).

Foreword: “Previous BMZ Evaluation Working Papers have focused on measuring impact. The present paper explores approaches for assessing efficiency. Efficiency is a powerful concept for decision making and ex post assessments of development interventions but is nevertheless often treated rather superficially in project appraisal, project completion and evaluation reports. Assessing efficiency is not an easy task, but there is potential for improvement, as the report shows. Starting with definitions and the theoretical foundations, the author proposes a three-level classification related to the analytical power of efficiency analysis methods. Based on an extensive literature review and a broad range of interviews, the report identifies and describes 15 distinct methods and explains how they can be used to assess efficiency. It concludes with an overall assessment of the methods described and with recommendations for their application and further development.”
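As a purely illustrative aside (mine, not drawn from the report), even the simplest form of efficiency analysis, a unit-cost comparison across interventions, can be sketched in a few lines; the project names and figures below are invented:

```python
# Illustrative only: hypothetical projects with invented costs and outcomes.
interventions = {
    "Project A": {"total_cost": 500_000, "outcome_units": 2_000},
    "Project B": {"total_cost": 300_000, "outcome_units": 1_000},
}

# Unit cost = total cost divided by units of outcome achieved.
for name, figures in interventions.items():
    unit_cost = figures["total_cost"] / figures["outcome_units"]
    print(f"{name}: {unit_cost:,.0f} per unit of outcome achieved")
```

On these hypothetical figures Project A converts resources into outcomes at a lower unit cost, though such a ratio by itself says nothing about outcome quality or attribution.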

Click here to download the presentation held at the meeting of the OECD DAC Network on Development Evaluation in Paris on June 24, 2011, and here for the presentation held at the annual conference of the American Evaluation Association in Anaheim on November 3, 2011.

For questions, you can reach the author at markus@devstrat.org.

We hope you enjoy the report,

Michaela Zintl (Head of Evaluation and Audit Division, Federal Ministry for Economic Cooperation and Development), Markus Palenberg (Director, Institute for Development Strategy)

Approches et pratiques en évaluation de programmes

New revised and expanded edition, Christian Dagenais, Valéry Ridde, 480 pages, August 2012. University of Montreal Press

In bookstores from 20 September 2012

All the chapters of this new edition were written by educators, university teachers and trainers with many years of experience in sharing knowledge about programme evaluation, with the emphasis on practice rather than theory. Because knowledge in evaluation is constantly evolving, we have added four new chapters, on the case study strategy, economic evaluation, participatory approaches and the so-called realist approach. The first edition lacked examples of the use of mixed methods, which are described in the first part; two new chapters now fill this gap.

A key challenge facing anyone who teaches evaluation is mastering the great diversity of evaluative approaches and types of evaluation. The second part of the book presents a number of case studies chosen to show clearly how the concepts presented earlier are used in practice. These chapters cover several disciplinary fields and offer a variety of examples of evaluation practice.

Valéry Ridde, professor of global health, and Christian Dagenais, professor of psychology, both at the Université de Montréal, teach and practise programme evaluation in Quebec, Haiti and Africa.

With texts by Aristide Bado, Michael Bamberger, Murielle Bauchet, Diane Berthelette, Pierre Blaise, François Bowen, François Chagnon, Nadia Cunden, Christian Dagenais, Pierre-Marc Daigneault, Luc Desnoyers, Didier Dupont, Julie Dutil, Françoise Fortin, Pierre Fournier, Marie Gervais, Anne Guichard, Robert R. Haccoun, Janie Houle, Françoise Jabot, Steve Jacob, Kadidiatou Kadio, Seni Kouanda, Francine LaBossière, Isabelle Marcoux, Pierre McDuff, Miri Levin-Rozalis, Frédéric Nault-Brière, Bernard Perret, Pierre Pluye, Nancy L. Porteous, Michael Quinn Patton, Valéry Ridde, Émilie Robert, Patricia Rogers, Christine Rothmayr, Jim Rugh, Caroline Tourigny, Josefien Van Olmen, Sophie Witter, Maurice Yameogo and Robert K. Yin

European Evaluation Society Conference Helsinki, 1-5 October, 2012 – docs available

New concepts – New challenges – New solutions

Helsinki, 1-5 October, 2012

Abstracts available as a 255-page online book, here, or as a pdf here [it’s big!]

Plus a list of participants (presenters and others), including their contact details

Follow Tweets about the conference, by the participants, at #eesconf