Multiple Pathways to Policy Impact: Testing an Uptake Theory with QCA

by Barbara Befani, IDS Centre for Development Impact, PRACTICE PAPER. Number 05 October 2013. Available as pdf

Abstract: Policy impact is a complex process influenced by multiple factors. An intermediate step in this process is policy uptake, or the adoption of measures by policymakers that reflect research findings and recommendations. The path to policy uptake often involves activism, lobbying and advocacy work by civil society organisations, so an earlier intermediate step could be termed ‘advocacy uptake’; which would be the use of research findings and recommendations by Civil Society Organisations (CSOs) in their efforts to influence government policy. This CDI Practice Paper by Barbara Befani proposes a ‘broad-brush’ theory of policy uptake (more precisely of ‘advocacy uptake’) and then tests it using two methods: (1) a type of statistical analysis and (2) a variant of Qualitative Comparative Analysis (QCA). The pros and cons of both families of methods are discussed in this paper, which shows that QCA offers the power of generalisation whilst also capturing some of the complexity of middle-range explanation. A limited number of pathways to uptake are identified, which are at the same time moderately sophisticated (considering combinations of causal factors rather than additions) and cover a medium number of cases (40), allowing a moderate degree of generalisation.

Rick Davies comment: What I like about this paper is the way it shows, quite simply, how measurements of the contribution of different possible causal conditions in terms of averages, and correlations between these, can be uninformative and even misleading. In contrast, a QCA analysis of the different configurations of causal conditions can be much more enlightening and easier to relate to what are often complex realities on the ground.

I have taken the liberty of re-analysing the fictional data set provided in the annex, using Decision Tree software (within RapidMiner). This is a means of triangulating the results of QCA analyses. It uses the same kind of data set and produces results that are comparable in structure, but the method of analysis is different. Shown below is a Decision Tree representing seven configurations of conditions that can be found in Befani’s data set of 40 cases. It makes use of four of the five conditions described in the paper. These are shown as nodes in the tree diagram.

Decision Tree for Befani’s data set (click on image to enlarge for a clearer view)

The 0 and 1 values on the various branches indicate whether the condition immediately above is present or not. The first configuration on the left says that if there is no ACCESS then research UPTAKE (12 cases at the red leaf) does not take place. This is a statement of a sufficient cause. The branch on the right represents a configuration of three conditions, which says that where ACCESS to research is present, and recommendations are consistent with measures previously (PREV) recommended by the organisation, and where the research findings are disseminated within the organisation by a local ‘champion’ (CHAMP), then research UPTAKE (8 cases at the blue leaf) does take place.

Overall the findings shown in the Decision Tree model are consistent with the QCA analyses in terms of the number of configurations (seven) and the configurations that are associated with the largest number of cases (i.e. their coverage). However, there were small differences in the descriptions of two sets of cases where there was no uptake (red leaves). In the third branch (configuration) from the left above, the QCA analysis indicated that it was the presence of INTERNAL CONFLICT (different approaches to the same policy problem within the organisation) that played a role, rather than the presence of a (perhaps ineffectual) CHAMPION. In the third branch (configuration) from the right, the QCA analysis proposed a fourth necessary condition (QUALITY), in addition to the three shown in the Decision Tree. Here the Decision Tree seems the more parsimonious solution. However, in both sets of cases where differences in findings have occurred it would make most sense to then proceed with within-case investigations of the causal processes at work.
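For readers who want to try this kind of triangulation themselves, the tree-growing logic is easy to sketch in code. Below is a minimal, illustrative ID3-style tree builder in pure Python, run on invented toy data that loosely echoes the configurations described above. It is not Befani’s actual data set, and RapidMiner’s Decision Tree operator is a more sophisticated implementation of the same basic idea.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, features, target):
    """Return the feature with the highest information gain, or None."""
    base = entropy([r[target] for r in rows])
    best_f, best_gain = None, 0.0
    for f in features:
        groups = {}
        for r in rows:
            groups.setdefault(r[f], []).append(r[target])
        remainder = sum(len(g) / len(rows) * entropy(g) for g in groups.values())
        if base - remainder > best_gain:
            best_f, best_gain = f, base - remainder
    return best_f

def build_tree(rows, features, target="UPTAKE"):
    """Grow a tree as nested dicts; leaves are the majority class."""
    labels = [r[target] for r in rows]
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]
    f = best_split(rows, features, target)
    if f is None:  # no feature improves purity
        return Counter(labels).most_common(1)[0][0]
    rest = [g for g in features if g != f]
    return {f: {v: build_tree([r for r in rows if r[f] == v], rest, target)
                for v in sorted({r[f] for r in rows})}}

def case(access, prev, champ, uptake):
    return {"ACCESS": access, "PREV": prev, "CHAMP": champ, "UPTAKE": uptake}

# Invented toy cases, loosely echoing the configurations discussed above
cases = ([case(0, p, c, 0) for p in (0, 1) for c in (0, 1) for _ in range(3)]
         + [case(1, 1, 1, 1)] * 8   # access + consistency + champion -> uptake
         + [case(1, 0, 1, 0)] * 5   # access but no prior consistency -> no uptake
         + [case(1, 1, 0, 0)] * 4)  # access + consistency, no champion -> no uptake

tree = build_tree(cases, ["ACCESS", "PREV", "CHAMP"])
```

On this toy data the induced tree reproduces the two branches discussed above: absence of ACCESS leads straight to a ‘no uptake’ leaf, while the ACCESS, then PREV, then CHAMP path leads to uptake.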

PS: Here is the dataset, in case anyone wants to play with it

Webinar series on evaluation: The beginnings of a list

To be extended and updated, with your help!

  • American Evaluation Association: Coffee Break Demonstrations are 20-minute webinars designed to introduce audience members to new tools, techniques, and strategies in the field of evaluation.
  • INTERACTION: Impact Evaluation Guidance Note and Webinar Series: 8 webinars covering Introduction to Impact Evaluation, Linking Monitoring and Evaluation to Impact Evaluation, Introduction to Mixed Methods in Impact Evaluation, Use of Impact Evaluation Results
  • Measure Evaluation webinars: 20 webinars since Jan 2012
  • Claremont Evaluation Center Webinar Series  “The Claremont Evaluation Center is pleased to offer a series of webinars on the discipline and profession of evaluation.  This series is free and available to anyone across the globe with an internet connection.”
  • MY M&E website: Webinars on Equity-focused evaluations (17 webinars), IOCE webinar series on evaluation associations, Emerging practices in development evaluation (6 webinars), Developing capacities for country M&E systems (16 webinars), Country-led M&E Systems (6 webinars)

Plus some guidance on developing and evaluating webinars

ICAI Seeks Views on Revised Evaluation Framework

 

 “In our first report, ICAI’s Approach to Effectiveness and Value for Money, we set out an evaluation framework, consisting of 22 questions under 4 guiding criteria (objectives, delivery, impact and learning), to guide our lines of enquiry in reviews. In the light of our experience to date in carrying out our reports, we have reviewed this framework. The revised framework is available at this link: ICAI revised evaluation framework

We are now entering a period of consultation on the revised framework which will run until 24 May 2013. If you have any comments or views, please email enquiries@icai.independent.gov.uk  or post them to: The Secretariat, Independent Commission for Aid Impact, Dover House, 66 Whitehall, London SW1A 2AU”

AEA resources on Social Network Analysis and Evaluation

American Evaluation Association (AEA) Social Network  Analysis (SNA) Topical Interest Group (TIG) resources

AEA365 | A Tip-a-Day by and for Evaluators

Who Counts? The power of participatory statistics

Edited By Jeremy Holland, published by Practical Action. 2013

(from the Practical Action website) “Local people can generate their own numbers – and the statistics that result are powerful for themselves and can influence policy. Since the early 1990s there has been a quiet tide of innovation in generating statistics using participatory methods. Development practitioners are supporting and facilitating participatory statistics from community-level planning right up to sector and national-level policy processes. Statistics are being generated in the design, monitoring and evaluation, and impact assessment of development interventions. Through chapters describing policy, programme and project research, Who Counts? provides impetus for a step change in the adoption and mainstreaming of participatory statistics within international development practice. The challenge laid down is to foster institutional change on the back of the methodological breakthroughs and philosophical commitment described in this book. The prize is a win–win outcome in which statistics are a part of an empowering process for local people and part of a real-time information flow for those aid agencies and government departments willing to generate statistics in new ways. Essential reading for researchers and students of international development as well as policy-makers, managers and practitioners in development agencies.”
Table of Contents
1 Introduction. Participatory statistics: a ‘win–win’ for international development, Jeremy Holland
PART I Participatory statistics and policy change
2 Participatory 3-dimensional modelling for policy and planning: the practice and the potential, Giacomo Rambaldi
3 Measuring urban adaptation to climate change: experiences in Kenya and Nicaragua, Caroline Moser and Alfredo Stein
4 Participatory statistics, local decision-making, and national policy design: Ubudehe community planning in Rwanda, Ashish Shah
5 Generating numbers with local governments for decentralized health sector policy and planning in the Philippines, Rose Marie R. Nierras
6 From fragility to resilience: the role of participatory community mapping, knowledge management, and strategic planning in Sudan, Margunn Indreboe Alshaikh
PART II Who counts reality? Participatory statistics in monitoring and evaluation
7 Accountability downwards, count-ability upwards: quantifying empowerment outcomes from people’s own analysis in Bangladesh, Dee Jupp with Sohel Ibn Ali
8 Community groups monitoring their impact with participatory statistics in India: reflections from an international NGO collective, Bernward Causemann, Eberhard Gohl, C. Rajathi, A. Susairaj, Ganesh Tantry and Srividhya Tantry
9 Scoring perceptions of services in the Maldives: instant feedback and the power of increased local engagement, Nils Riemenschneider, Valentina Barca, and Jeremy Holland
10 Are we targeting the poor? Lessons with participatory statistics in Malawi, Carlos Barahona
PART III Statistics for participatory impact assessment
11 Participatory impact assessment in drought policy contexts: lessons from southern Ethiopia, Dawit Abebe and Andy Catley
12 Participatory impact assessment: the ‘Starter Pack Scheme’ and sustainable agriculture in Malawi, Elizabeth Cromwell, Patrick Kambewa, Richard Mwanza, and Rowland Chirwa with KWERA Development Centre
13 Participatory impact assessments of farmer productivity programmes in Africa, Susanne Neubert
Afterword, Robert Chambers
Practical and accessible resources
Index

Real Time Monitoring for the Most Vulnerable

Editors: Greeley, M., Lucas, H. and Chai, J. IDS Bulletin 44.2. Publisher: IDS

Purchase a print copy here.

View abstracts online and subscribe to the IDS Bulletin.

“Growth in the use of real time digital information for monitoring has been rapid in developing countries across all the social sectors, and in the health sector has been remarkable. Commonly these Real Time Monitoring (RTM) initiatives involve partnerships between the state, civil society, donors and the private sector. There are differences between partners in understanding of objectives, and divergence occurs due to adoption of specific technology-driven approaches and because profit-making is sometimes part of the equation.

With the swarming, especially of pilot mHealth initiatives, in many countries there is risk of chaotic disconnects, of confrontation between rights and profits, and of overall failure to encourage appropriate alliances to build sustainable and effective national RTM systems. What is needed is a country-led process for strengthening the quality and equity sensitivity of real-time monitoring initiatives. We propose the development of an effective learning and action agenda centred on the adoption of common standards.

IDS, commissioned and guided by UNICEF Division of Policy and Strategy, has carried out a multi-country assessment of initiatives that collect high frequency and/or time-sensitive data on risk, vulnerability and access to services among vulnerable children and populations and on the stability and security of livelihoods affected by shocks. The study, entitled Real Time Monitoring for the Most Vulnerable (RTMMV), began with a desk review of existing RTM initiatives and was followed up with seven country studies (Bangladesh, Brazil, Romania, Senegal, Uganda, Vietnam and Yemen) that further explored and assessed promising initiatives through field-based review and interactive stakeholder workshops. This IDS Bulletin brings together key findings from this research.”

See full list of papers on this topic at the IDS Bulletin  http://www.ids.ac.uk/publication/real-time-monitoring-for-the-most-vulnerable

Enhancing Evaluation Use: Insights from Internal Evaluation Units

Marlène Läubli Loud, John Mayne

John Mayne’s summary (especially for MandE NEWS!)

“The idea for the book was that much written about evaluation in organizations is written by outsiders such as academics and consultants. But in practice, there are those working ‘inside’ an organization who play a key role in helping shape, develop, manage and ultimately make use of the evaluation. The contributions in this book are written by such ‘insiders’. They discuss the different strategies used over a period of time to make evaluation a part of the management of the organization, successes and failures, and the lessons learned. It highlights the commissioners and managers of evaluations, those who seek evaluations that can be used to improve the strategies and operations of the organization. The aim of the book is to help organizations become more focused on using evaluation to improve policies, strategies, programming and delivery of public and communal services.

The chapters cover a wide range of organizations, from government departments in Scotland, New Zealand, Switzerland and Canada, to international organizations such as the World Health Organization (WHO) and the International Labour Organization (ILO), to supra-national organizations such as the European Commission.

The book discusses such issues as:

  • the different ways evaluation is set up—institutionalized—in government sectors/organizations, and with what results;
  • why it is so hard to make evaluation a regular aspect of good management;
  • building organizational cultures that support effective evaluation;
  • strategies that are being used to ensure better value for money and enhance utilization of evaluation findings in organizations; and
  • how organizations balance the need for timely, relevant evaluation information with the need for scientific integrity and quality.

The insider perspective and the wide scope of organizations covered are unique in discussions about evaluation in organizations.”

Where there is no single Theory of Change: The uses of Decision Tree models

Eliciting tacit and multiple Theories of Change

Rick Davies, November 2012. Unpublished paper. Available as a pdf version here, and as a 4-page summary version.

This paper begins by identifying situations where a theory-of-change led approach to evaluation can be difficult, if not impossible. It then introduces the idea of systematic rather than ad hoc data mining and the types of data mining approaches that exist. The rest of the paper then focuses on one data mining method known as Decision Trees, also known as Classification Trees. The merits of Decision Tree models are spelled out and then the processes of constructing Decision Trees are explained. These include the use of computerised algorithms and ethnographic methods, using expert inquiry and more participatory processes. The relationships of Decision Tree analyses to related methods are then explored, specifically Qualitative Comparative Analysis (QCA) and Network Analysis. The final section of the paper identifies potential applications of Decision Tree analyses, covering the elicitation of tacit and multiple Theories of Change, the analysis of project-generated data and the meta-analysis of data from multiple evaluations. Readers are encouraged to explore these uses.

Included in the list of merits of Decision Tree models is the possibility of differentiating between necessary and/or sufficient causal conditions, and of assessing the extent to which a cause is a contributory cause (à la Mayne).
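As a rough illustration of that point, checking whether a binary condition behaves as necessary and/or sufficient for an outcome across a set of cases takes only a few lines of code. The sketch below uses invented case data and field names, chosen purely for illustration and not taken from the paper:

```python
def is_sufficient(cases, cond, outcome="UPTAKE"):
    """Within the observed cases, cond=1 is always accompanied by outcome=1."""
    with_cond = [c for c in cases if c[cond] == 1]
    return bool(with_cond) and all(c[outcome] == 1 for c in with_cond)

def is_necessary(cases, cond, outcome="UPTAKE"):
    """Within the observed cases, outcome=1 never occurs without cond=1."""
    with_outcome = [c for c in cases if c[outcome] == 1]
    return bool(with_outcome) and all(c[cond] == 1 for c in with_outcome)

# Invented illustrative cases: ACCESS is necessary but not sufficient for UPTAKE
cases = [
    {"ACCESS": 1, "CHAMP": 1, "UPTAKE": 1},
    {"ACCESS": 1, "CHAMP": 0, "UPTAKE": 0},
    {"ACCESS": 0, "CHAMP": 1, "UPTAKE": 0},
    {"ACCESS": 1, "CHAMP": 1, "UPTAKE": 1},
]
```

A contributory cause in Mayne’s sense would typically show up here as a condition that is neither necessary nor sufficient on its own across all cases, but forms part of a package of conditions (one branch of the tree) that is jointly sufficient.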

Comments on this paper are being sought. Please post them below or email Rick Davies at rick@mande.co.uk

Separate but related:

See also: An example application of Decision Tree (predictive) models (10th April 2013)

Postscript 2013 03 20: Probably the best book on Decision Tree algorithms is:

Rokach, Lior, and Oded Z. Maimon. Data Mining with Decision Trees: Theory and Applications. World Scientific, 2008. A pdf copy is available

A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences

Gary Goertz & James Mahoney, 2012
Princeton University Press. Available on Amazon

Review of the book by Dan Hirschman

Excerpts from his review:

“Goertz, a political scientist, and Mahoney, a sociologist, attempt to make sense of the different cultures of research in these two camps without attempting to apply the criteria of one to the other. In other words, the goal is to illuminate difference and similarity rather than judge either approach (or, really, affiliated collection of approaches) as deficient by a universal standard.

G&M are interested in quantitative and qualitative approaches to causal explanation.

Onto the meat of the argument. G&M argue that the two cultures of quantitative and (causal) qualitative research differ in how they understand causality, how they use mathematics, how they privilege within-case vs. between-case variation, how they generate counterfactuals, and more. G&M argue, perhaps counter to our expectations, that both cultures have answers to each of these questions, and that the answers are reasonably coherent across cultures, but create tensions when researchers attempt to evaluate each other’s research: we mean different things, we emphasize different sorts of variation, and so on. Each of these differences is captured in a succinct chapter that lays out in incredible clarity the basic choices made by each culture, and how these choices aggregate up to very different models of research.

Perhaps the most counterintuitive, but arguably most rhetorically important, is the assertion that both quant and qual research are tightly linked to mathematics. For quant research, the connection is obvious: quantitative research relies heavily on probability and statistics. Causal explanation consists of statistically identifying the average effect of a treatment. For qual research, the claim is much more controversial. Rather than relying on statistics, G&M assert that qualitative research relies on logic and set theory, even if this reliance is often implicit rather than formal. G&M argue that at the core of explanation in the qualitative culture are the set theoretic/logical criteria of necessary and sufficient causes. Combinations of necessary and sufficient explanations constitute causal explanations. This search for non-trivial necessary and sufficient conditions for the appearance of an outcome shapes the choices made in the qualitative culture, just as the search for significant statistical variation shapes quantitative research. G&M include a brief review of basic logic, and a quick overview of the fuzzy-set analysis championed by Charles Ragin. I had little prior experience with fuzzy sets (although plenty with formal logic), and I found this chapter extremely compelling and provocative. Qualitative social science works much more often with the notion of partial membership – some countries are not quite democracies, while others are completely democracies, and others are completely not democracies. This fuzzy-set approach highlights the non-linearities inherent in partial membership, as contrasted with quantitative approaches that would tend to treat “degree of democracy” as a smooth variable.”
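Ragin’s fuzzy-set apparatus mentioned here can be made concrete in a few lines of code. In fuzzy-set analysis the standard consistency measures for sufficiency and necessity are ratios of summed minimum memberships; the snippet below implements those two standard formulas, using made-up membership scores purely for illustration:

```python
def consistency_sufficiency(x, y):
    """Ragin's consistency of 'X is sufficient for Y':
    sum(min(x_i, y_i)) / sum(x_i). 1.0 means X is a perfect subset of Y."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def consistency_necessity(x, y):
    """Consistency of 'X is necessary for Y':
    sum(min(x_i, y_i)) / sum(y_i). 1.0 means Y is a perfect subset of X."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Made-up fuzzy membership scores for four hypothetical countries:
# x = degree of membership in 'democracy', y = degree of membership in the outcome
x = [0.8, 0.6, 0.2, 1.0]
y = [0.9, 0.7, 0.1, 1.0]
```

With crisp (0/1) memberships these formulas reduce to the familiar set-theoretic tests: if every case with X also shows Y, sufficiency consistency is exactly 1.0.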

Earlier paper by same authors available as pdf: A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research
by James Mahoney, Gary Goertz. Political Analysis (2006) 14:227–249 doi:10.1093/pan/mpj017

See also: The Logic of Process Tracing Tests in the Social Sciences by James Mahoney, Sociological Methods & Research, XX(X), 1–28, published online 2 March 2012

RD comment: This book is recommended reading!

PS 15 February 2013: See Howard White’s new blog posting “Using the causal chain to make sense of the numbers”, where he provides examples of the usefulness of simple set-theoretic analyses of the kind described by Mahoney and Goertz (e.g. in an analysis of arguments about why Gore lost to Bush in Florida).

 

On prediction, Nate Silver’s “The Signal and the Noise”

Title The Signal and the Noise: The Art and Science of Prediction
Author Nate Silver
Publisher Penguin UK, 2012
ISBN 1846147530, 9781846147531
Length 544 pages

Available on Amazon Use Google Books to read the first chapter.

RD Comment: Highly recommended reading. Reading this book reminded me of M&E data I had to examine on a large maternal and child health project in Indonesia. Rates on key indicators were presented for each of the focus districts for the year before the project started, then for each year during the four-year project period. I remember thinking how variable these numbers were; there was nothing like a trend over time in any of the districts. Of course, what I was looking at was probably largely noise: variations arising from changes in who collected the underlying data and how it was collected and reported. This sort of situation is by no means uncommon. Most projects, if they have a baseline at all, have baseline data from one year prior to when the project started. Subsequent measures of change are then, ideally, compared to that baseline. This arrangement assumes minimal noise, which is a tad optimistic. The alternative, which should not be so difficult in large bilateral projects dealing with health and education systems for example, would be to have a baseline data series covering the preceding x years, where x is at least as long as the expected duration of the proposed project.
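The intuition behind that recommendation is easy to demonstrate with a small simulation: averaging over a multi-year baseline shrinks the noise in the starting value roughly in proportion to the square root of the number of years. The numbers below are invented for illustration and have nothing to do with the Indonesian project data:

```python
import random
import statistics

random.seed(42)
TRUE_RATE = 50.0  # hypothetical true underlying indicator value
NOISE_SD = 8.0    # hypothetical year-to-year measurement noise

def observe():
    """One noisy annual measurement of the indicator."""
    return random.gauss(TRUE_RATE, NOISE_SD)

single_errors, multi_errors = [], []
for _ in range(2000):
    single = observe()                                     # one-year baseline
    multi = statistics.mean(observe() for _ in range(4))   # four-year baseline
    single_errors.append(abs(single - TRUE_RATE))
    multi_errors.append(abs(multi - TRUE_RATE))

mae_single = statistics.mean(single_errors)
mae_multi = statistics.mean(multi_errors)
```

With a four-year baseline the average error in the starting value is roughly halved, which is what the 1/sqrt(n) rule predicts, so a project judged against a single noisy baseline year can easily appear to have improved (or deteriorated) when nothing real has changed.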

See also Malkiel’s review in the Wall Street Journal (Telling Lies From Statistics). Malkiel is the author of “A Random Walk Down Wall Street”. While a positive review overall, he charges Silver with ignoring false positives when claiming that some recent financial crises were predictable. Reviews are also available in The Guardian and the LA Times. Nate Silver also writes a well-known blog for the New York Times.
