Where there is no single Theory of Change: The uses of Decision Tree models

Eliciting tacit and multiple Theories of Change

Rick Davies, November 2012. Available as a pdf and as a 4-page summary version

This paper begins by identifying situations where a theory-of-change-led approach to evaluation can be difficult, if not impossible. It then introduces the idea of systematic rather than ad hoc data mining, and the types of data mining approaches that exist. The rest of the paper focuses on one data mining method known as Decision Trees, also known as Classification Trees. The merits of Decision Tree models are spelled out, and the processes of constructing Decision Trees are then explained. These include the use of computerised algorithms and of ethnographic methods, drawing on expert inquiry and more participatory processes. The relationship of Decision Tree analyses to two related methods, Qualitative Comparative Analysis (QCA) and Network Analysis, is then explored. The final section identifies potential applications of Decision Tree analyses: eliciting tacit and multiple Theories of Change, analysing project-generated data, and the meta-analysis of data from multiple evaluations. Readers are encouraged to explore these uses.

Included in the list of merits of Decision Tree models is the possibility of differentiating necessary and/or sufficient causal conditions, and of assessing the extent to which a cause is a contributory cause (à la Mayne).
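By way of illustration (this sketch is mine, not from the paper): the code below fits a small Decision Tree to invented project data using scikit-learn and prints the tree as readable rules. All variable names and data are hypothetical; the point is only to show how the branches of a fitted tree can be read as candidate necessary and/or sufficient conditions.

```python
# A minimal sketch, assuming invented binary project data, of how a
# Decision Tree can surface candidate causal configurations.
from sklearn.tree import DecisionTreeClassifier, export_text

# Rows = projects; columns = conditions (1 = present, 0 = absent)
# Conditions: [local_partner, adequate_funding, staff_training]
X = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
# Outcome: 1 = project achieved its objective, 0 = it did not
y = [1, 1, 0, 0, 0, 0, 0, 0]

tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(export_text(tree, feature_names=[
    "local_partner", "adequate_funding", "staff_training"]))

# Reading the printed rules: if every positive case sits below the
# local_partner branch, that condition behaves as necessary; if a leaf
# containing only positive cases is reached via local_partner AND
# adequate_funding, that combination behaves as sufficient.
```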

Comments on this paper are being sought. Please post them below or email Rick Davies at rick@mande.co.uk

Separate but related:

See also: An example application of Decision Tree (predictive) models (10th April 2013)

Postscript 2013 03 20: Probably the best book on Decision Tree algorithms is:

Rokach, Lior, and Oded Z. Maimon. Data Mining with Decision Trees: Theory and Applications. World Scientific, 2008. A pdf copy is available

 

A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences

Gary Goertz & James Mahoney, 2012
Princeton University Press. Available on Amazon

Review of the book by Dan Hirschman

Excerpts from his review:

“Goertz, a political scientist, and Mahoney, a sociologist, attempt to make sense of the different cultures of research in these two camps without attempting to apply the criteria of one to the other. In other words, the goal is to illuminate difference and similarity rather than judge either approach (or, really, affiliated collection of approaches) as deficient by a universal standard.

G&M are interested in quantitative and qualitative approaches to causal explanation.

Onto the meat of the argument. G&M argue that the two cultures of quantitative and (causal) qualitative research differ in how they understand causality, how they use mathematics, how they privilege within-case vs. between-case variation, how they generate counterfactuals, and more. G&M argue, perhaps counter to our expectations, that both cultures have answers to each of these questions, and that the answers are reasonably coherent across cultures, but create tensions when researchers attempt to evaluate each others’ research: we mean different things, we emphasize different sorts of variation, and so on. Each of these differences is captured in a succinct chapter that lays out in incredible clarity the basic choices made by each culture, and how these choices aggregate up to very different models of research.

Perhaps the most counterintuitive, but arguably most rhetorically important, is the assertion that both quant and qual research are tightly linked to mathematics. For quant research, the connection is obvious: quantitative research relies heavily on probability and statistics. Causal explanation consists of statistically identifying the average effect of a treatment. For qual research, the claim is much more controversial. Rather than relying on statistics, G&M assert that qualitative research relies on logic and set theory, even if this reliance is often implicit rather than formal. G&M argue that at the core of explanation in the qualitative culture are the set theoretic/logical criteria of necessary and sufficient causes. Combinations of necessary and sufficient explanations constitute causal explanations. This search for non-trivial necessary and sufficient conditions for the appearance of an outcome shapes the choices made in the qualitative culture, just as the search for significant statistical variation shapes quantitative research. G&M include a brief review of basic logic, and a quick overview of the fuzzy-set analysis championed by Charles Ragin. I had little prior experience with fuzzy sets (although plenty with formal logic), and I found this chapter extremely compelling and provocative. Qualitative social science works much more often with the notion of partial membership – some countries are not quite democracies, while others are completely democracies, and others are completely not democracies. This fuzzy-set approach highlights the non-linearities inherent in partial membership, as contrasted with quantitative approaches that would tend to treat “degree of democracy” as a smooth variable.”
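To make the fuzzy-set idea concrete, here is a minimal sketch (with invented membership scores, not data from the book) of the standard Ragin-style consistency measures for necessity and sufficiency used in fuzzy-set QCA:

```python
# A minimal sketch, assuming hypothetical membership scores, of fuzzy-set
# tests for necessity and sufficiency. Scores run from 0 (fully out of the
# set) to 1 (fully in), e.g. degree of membership in "democracy".

def consistency_necessity(x, y):
    # X is necessary for Y to the extent that Y-membership never exceeds
    # X-membership: sum(min(x, y)) / sum(y)
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

def consistency_sufficiency(x, y):
    # X is sufficient for Y to the extent that X-membership never exceeds
    # Y-membership: sum(min(x, y)) / sum(x)
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

# Invented scores for five countries
democracy = [1.0, 0.8, 0.6, 0.2, 0.0]   # condition X
stability = [0.9, 0.7, 0.7, 0.3, 0.1]   # outcome Y

print(consistency_necessity(democracy, stability))   # near 1 => near-necessary
print(consistency_sufficiency(democracy, stability)) # near 1 => near-sufficient
```

Note the non-linearity: a country with a membership of 0.6 in "democracy" is not simply "60% of a democracy" on a smooth scale; the min() operations treat cases in terms of set relations rather than correlations.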

Earlier paper by same authors available as pdf: A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research
by James Mahoney, Gary Goertz. Political Analysis (2006) 14:227–249 doi:10.1093/pan/mpj017


See also: The Logic of Process Tracing Tests in the Social Sciences by James Mahoney, Sociological Methods & Research, XX(X), 1–28, published online 2 March 2012

RD comment: This book is recommended reading!

PS 15 February 2013: See Howard White’s new blog posting “Using the causal chain to make sense of the numbers”, where he provides examples of the usefulness of simple set-theoretic analyses of the kind described by Mahoney and Goertz (e.g. in an analysis of arguments about why Gore lost to Bush in Florida)

 

On prediction, Nate Silver’s “The Signal and the Noise”

Title: The Signal and the Noise: The Art and Science of Prediction
Author: Nate Silver
Publisher: Penguin UK, 2012
ISBN: 1846147530, 9781846147531
Length: 544 pages

Available on Amazon. Use Google Books to read the first chapter.

RD Comment: Highly recommended reading. Reading this book reminded me of M&E data I had to examine on a large maternal and child health project in Indonesia. Rates on key indicators were presented for each of the focus districts for the year before the project started, then for each year during the four-year project period. I remember thinking how variable these numbers were; there was nothing like a trend over time in any of the districts. Of course, what I was looking at was probably largely noise: variations arising from changes in who collected the underlying data and how it was collected and reported.

This sort of situation is by no means uncommon. Most projects, if they have a baseline at all, have baseline data from one year prior to when the project started. Subsequent measures of change are then, ideally, compared to that baseline. This arrangement assumes minimal noise, which is a tad optimistic. The alternative, which should not be so difficult in large bilateral projects dealing with health and education systems for example, would be to have a baseline data series covering the preceding x years, where x is at least as long as the expected duration of the proposed project.
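To illustrate the point, here is a minimal simulation (all numbers invented) of a stable underlying rate observed through yearly measurement noise. A single-year baseline can easily overstate or understate subsequent change, while averaging a multi-year baseline series dampens the noise:

```python
# A minimal sketch, assuming a stable true rate and Gaussian measurement
# noise, comparing a one-year baseline with a five-year baseline series.
import random

random.seed(1)
true_rate = 50.0   # stable underlying rate before the project
noise_sd = 8.0     # year-to-year measurement/reporting noise

def observed(rate):
    return rate + random.gauss(0, noise_sd)

single_year_baseline = observed(true_rate)
five_year_baseline = sum(observed(true_rate) for _ in range(5)) / 5

# First project year, with a small genuine improvement of +2
first_project_year = observed(true_rate + 2)

print(f"single-year baseline: {single_year_baseline:.1f}")
print(f"five-year baseline:   {five_year_baseline:.1f}")
print(f"apparent change vs single-year baseline: "
      f"{first_project_year - single_year_baseline:+.1f}")
print(f"apparent change vs five-year baseline:   "
      f"{first_project_year - five_year_baseline:+.1f}")
```

Running this repeatedly with different seeds shows how often the single-year comparison gets the direction of change wrong while the averaged baseline stays close to the truth.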

See also Malkiel’s review in the Wall Street Journal (“Telling Lies From Statistics”). Malkiel is the author of “A Random Walk Down Wall Street.” While his review is positive overall, he charges Silver with ignoring false positives when claiming that some recent financial crises were predictable. Reviews are also available in The Guardian and the LA Times. Nate Silver also writes a well-known blog for the New York Times.

Special Issue on Systematic Reviews – J. of Development Effectiveness

Volume 4, Issue 3, 2012

  • Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews
  • How to do a good systematic review of effects in international development: a tool kit
    • Hugh Waddington, Howard White, Birte Snilstveit, Jorge Garcia Hombrados, Martina Vojtkova, Philip Davies, Ami Bhavsar, John Eyers, Tracey Perez Koehlmoos, Mark Petticrew, Jeffrey C. Valentine & Peter Tugwell, pages 359–387
  • Systematic reviews: from ‘bare bones’ reviews to policy relevance
  • Narrative approaches to systematic review and synthesis of evidence for international development policy and practice
  • Purity or pragmatism? Reflecting on the use of systematic review methodology in development
  • The benefits and challenges of using systematic reviews in international development research
    • Richard Mallett, Jessica Hagen-Zanker, Rachel Slater & Maren Duvendack, pages 445–455
  • Assessing ‘what works’ in international development: meta-analysis for sophisticated dummies
    • Maren Duvendack, Jorge Garcia Hombrados, Richard Palmer-Jones & Hugh Waddington, pages 456–471
  • The impact of daycare programmes on child health, nutrition and development in developing countries: a systematic review

Approches et pratiques en évaluation de programmes

New revised and expanded edition, Christian Dagenais and Valéry Ridde, 480 pages, August 2012. University of Montreal Press

IN BOOKSTORES FROM 20 SEPTEMBER 2012

All the chapters of this new edition were written by educators, university teachers and trainers with long experience of sharing knowledge about program evaluation, with the emphasis placed on practice rather than theory. Because knowledge in evaluation is constantly evolving, we have added four new chapters: on the case study strategy, on economic evaluation, on participatory approaches, and on the so-called realist approach. The first edition also lacked examples of the use of the mixed methods described in its first part; two new chapters fill this gap.

An essential challenge facing every teacher of evaluation is mastering the great diversity of evaluative approaches and types of evaluation. The second part of the book presents a number of case studies chosen to show clearly how the concepts set out earlier are used in practice. These chapters cover several disciplinary fields and offer a variety of examples of evaluative practice.

Valéry Ridde, professor of global health, and Christian Dagenais, professor of psychology, both at the Université de Montréal, teach and practise program evaluation in Quebec, Haiti and Africa.

With texts by Aristide Bado, Michael Bamberger, Murielle Bauchet, Diane Berthelette, Pierre Blaise, François Bowen, François Chagnon, Nadia Cunden, Christian Dagenais, Pierre-Marc Daigneault, Luc Desnoyers, Didier Dupont, Julie Dutil, Françoise Fortin, Pierre Fournier, Marie Gervais, Anne Guichard, Robert R. Haccoun, Janie Houle, Françoise Jabot, Steve Jacob, Kadidiatou Kadio, Seni Kouanda, Francine LaBossière, Isabelle Marcoux, Pierre McDuff, Miri Levin-Rozalis, Frédéric Nault-Brière, Bernard Perret, Pierre Pluye, Nancy L. Porteous, Michael Quinn Patton, Valéry Ridde, Émilie Robert, Patricia Rogers, Christine Rothmayr, Jim Rugh, Caroline Tourigny, Josefien Van Olmen, Sophie Witter, Maurice Yameogo and Robert K. Yin

A move to more systematic and transparent approaches in qualitative evidence synthesis

An update on a review of published papers.
By Karin Hannes and Kirsten Macaitis, Qualitative Research, 2012, 12: 402, originally published online 11 May 2012

Abstract

In 2007, the journal Qualitative Research published a review on qualitative evidence syntheses conducted between 1988 and 2004. It reported on the lack of explicit detail regarding methods for searching, appraisal and synthesis, and a lack of emerging consensus on these issues. We present an update of this review for the period 2005–8. Not only has the amount of published qualitative evidence syntheses doubled, but authors have also become more transparent about their searching and critical appraisal procedures. Nevertheless, for the synthesis component of the qualitative reviews, a black box remains between what people claim to use as a synthesis approach and what is actually done in practice. A detailed evaluation of how well authors master their chosen approach could provide important information for developers of particular methods, who seem to succeed in playing the game according to the rules. Clear methodological instructions need to be developed to assist others in applying these synthesis methods.

Working with Assumptions in International Development Program Evaluation

By Nkwake, Apollo M., with a Foreword by Michael Bamberger. 2013, XXI, 184 p., 14 illus., 7 in color. Published by Springer and available on Amazon

Publisher description

“Provides tools for understanding effective development programming and quality program evaluations. Contains workshop materials for graduate students and in-service training for development evaluators. The author brings together more than 12 years of experience in the evaluation of international development programs.

Regardless of geography or goal, development programs and policies are fueled by a complex network of implicit ideas. Stakeholders may hold assumptions about purposes, outcomes, methodology, and the value of project evaluation and evaluators—which may or may not be shared by the evaluators. Even when all participants share goals, failure to recognize and articulate assumptions can impede clarity and derail progress.

Working with Assumptions in International Development Program Evaluation probes their crucial role in planning, and their contributions in driving, global projects involving long-term change. Drawing on his extensive experience in the field, the author offers elegant logic and instructive examples to relate assumptions to the complexities of program design and implementation, particularly in weighing their outcomes. The book emphasizes clarity of purpose, respect among collaborators, and collaboration among team members who might rarely or never meet otherwise. Importantly, the book is a theoretical and practical volume that:

·  Introduces the multiple layers of assumptions on which global interventions are based.
·  Explores various approaches to the evaluation of complex interventions, with their underlying assumptions.
·  Identifies ten basic types of assumptions and their implications for program development and evaluation.
·  Provides examples of assumptions influencing the design, implementation, and evaluation of development projects.
·  Offers guidelines for identifying, explicating, and evaluating assumptions.

A first-of-its-kind resource, Working with Assumptions in International Development Program Evaluation opens out the processes of planning, implementation, and assessment for professionals in global development, including practitioners, development economists, global development program designers, and nonprofit personnel.”

Rick Davies comment: Looks potentially useful, but VERY expensive at £85.50. Few individuals will buy it, but organisations might do so. Ideally the author would make a cheaper paperback version available. And Amazon should provide a “Look inside this book” option, to help people decide whether spending £85.50 would be worthwhile. PS: I think the publishers, and maybe the author, would fail the marshmallow test

Rick Davies postscript: The Foreword, Preface and Contents pages of the book are available as a pdf, here on the Springer website.

What Causes What & Hypothesis testing: Truth and Evidence

Two very useful chapters in Denise Cummins (2012) “Good Thinking“, Cambridge University Press

Cummins is a professor of psychology and philosophy, both of which she brings to bear in this great book. Read an interview with the author here

Contents include:

1. Introduction
2. Rational choice: choosing what is most likely to give you what you want
3. Game theory: when you’re not the only one choosing
4. Moral decision-making: how we tell right from wrong
5. The game of logic
6. What causes what?
7. Hypothesis testing: truth and evidence
8. Problem solving: another way of getting what you want
9. Analogy: this is like that.

New Directions for Evaluation: Promoting Valuation in the Public Interest: Informing Policies for Judging Value in Evaluation

Spring 2012, Volume 2012, Issue 133, pages 1–129. Buy here

Editor’s Notes – George Julnes

  1. Editor’s notes (pages 1–2)

Research Articles

  1. Managing valuation (pages 3–15), George Julnes
  2. The logic of valuing (pages 17–28), Michael Scriven
  3. The evaluator’s role in valuing: Who and with whom (pages 29–41), Marvin C. Alkin, Anne T. Vo and Christina A. Christie
  4. Step arounds for common pitfalls when valuing resources used versus resources produced (pages 43–52), Brian T. Yates
  5. When one must go: The Canadian experience with strategic review and judging program value (pages 65–75), François Dumaine
  6. Valuing, evaluation methods, and the politicization of the evaluation process (pages 77–83), Eleanor Chelimsky
  7. Valuation and the American Evaluation Association: Helping 100 flowers bloom, or at least be understood? (pages 85–90), Michael Morris

Integrated Monitoring: A Practical Manual for Organisations That Want to Achieve Results

Written by Sonia Herrero, InProgress, Berlin, April 2012. 43 pages. Available as pdf

“The aim of this manual is to help those working in the non-profit sector — non-governmental organisations (NGOs) and other civil society organisations (CSOs) — and the donors which fund them, to observe more accurately what they are achieving through their efforts and to ensure that they make a positive difference in the lives of the people they want to help. Our interest in writing this guide has grown out of the desire to help bring some conceptual clarity to the concepts of monitoring and to determine ways in which they can be harnessed and used more effectively by non-profit practitioners.

The goal is to help organisations build monitoring and evaluation into all your project management efforts. We want to demystify the monitoring process and make it as simple and accessible as possible. We have made a conscious choice to avoid technical language, and instead use images and analogies that are easier to grasp. There is a glossary at the end of the manual which contains the definitions of any terms you may be unfamiliar with. This manual is organised into two parts. The first section covers the ‘what’ and ‘why’ of monitoring and evaluation; the second addresses how to do it.”

These materials may be freely used and copied by non-profit organisations for capacity building purposes, provided that inProgress and authorship are acknowledged. They may not be reproduced for commercial gain.

Contents
Introduction
I. KEY ASPECTS OF MONITORING
1. What is Monitoring?
2. Why Do We Monitor and For Whom?
3. Who is Involved?
4. How Does it Work?
5. When Do We Monitor?
6. What Do We Monitor?
6.1 Monitoring What We Do
II. HOW DO WE MONITOR?
1. Steps for Setting Up a Monitoring System
2. How to Monitor the Process and the Outputs
3. How to Monitor the Achievement of Results
3.1 Define Results/Outcomes
3.2 Define Indicators for Results
4. Prepare a Detailed Monitoring Plan
5. Identify Sources of Information
6. Data Collection
6.1 Tools for Data Compilation
7. Reflection and Analysis
7.1 Documenting and Sharing
8. Learning and Reviewing
8.1 Learning
8.2 Reviewing
9. Evaluation
Conclusion
Glossary
References