How Shortcuts Cut Us Short: Cognitive Traps in Philanthropic Decision Making

Posted on 30 May, 2014 – 11:48 AM

Beer, Tanya, and Julia Coffman. 2014. “How Shortcuts Cut Us Short: Cognitive Traps in Philanthropic Decision Making.” Center for Evaluation Innovation. Available as pdf

Found courtesy of the “people-centered development” blog (Michaela Raab)

Introduction: “Anyone who tracks the popular business literature has come across at least one article or book, if not a half dozen, that applies the insights of cognitive science and behavioral economics to individual and organizational decision making. These authors apply social science research to the question of why so many strategic decisions yield disappointing results, despite extensive research and planning and the availability of data about how strategies are (or are not) performing. The diagnosis is that many of our decisions rely on mental shortcuts or “cognitive traps,” which can lead us to make uninformed or even bad decisions. Shortcuts provide time-pressured staff with simple ways of making decisions and managing complex strategies that play out in an uncertain world. These shortcuts affect how we access information, what information we pay attention to, what we learn, and whether and how we apply what we learn. Like all organizations, foundations and the people who work in them are subject to these same traps. Many foundations are attempting to make better decisions by investing in evaluation and other data collection efforts that support their strategic learning. The desire is to generate more timely and actionable data, and some foundations have even created staff positions dedicated entirely to supporting learning and the ongoing application of data for purposes of continuous improvement. While this is a useful and positive trend, decades of research have shown that despite the best of intentions, and even when actionable data is presented at the right time, people do not automatically make good and rational decisions. Instead, we are hard-wired to fall into cognitive traps that affect how we process (or ignore) information that could help us to make better judgments.”

Rick Davies comment: Recommended, along with the videosong by Mr Wray on cognitive bias, also available via Michaela’s blog


Making impact evaluation matter: Better evidence for effective policies and programmes

Posted on 27 May, 2014 – 9:16 PM

Asian Development Bank, Manila, 1-5 September 2014

The Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie) are hosting a major international impact evaluation conference Making Impact Evaluation Matter from 1-5 September 2014 in Manila. The call for proposals to present papers and conduct workshops at the conference is now open.

Making Impact Evaluation Matter will comprise pre-conference workshops for 2.5 days from 1-3 September 2014, and 2.5 days of the conference from 3-5 September. Major international figures in the field of impact evaluation are being invited to speak at the plenary sessions of the conference. There will be five to six streams of pre-conference workshops and up to eight streams of parallel sessions during the conference, allowing for over 150 presentations.

Proposals are now being invited for presentations on any aspect of impact evaluations and systematic reviews, including findings, methods and translation of evidence into policy. Researchers are welcome to submit proposals on the design (particularly innovative designs for difficult to evaluate interventions), implementation, findings and use of impact evaluations and systematic reviews. Policymakers and development programme managers are welcome to submit proposals on the use of impact evaluation and systematic review findings.

Parallel sessions at the conference will be organised around the following themes/sectors: (a) infrastructure (transport, energy, information and communication technology, urban development, and water), (b) climate change/environment/natural resources, (c) social development (health, education, gender equity, poverty and any other aspect of social development), (d) rural development (agriculture, food security and any other aspect of rural development), (e) financial inclusion, (f) institutionalisation of impact evaluation, and incorporating impact evaluation or systematic reviews into institutional appraisal and results frameworks, (g) impact evaluation of institutional and policy reform (including public management and governance), (h) impact evaluation methods, and (i) promotion of the use of evidence.

Workshop proposals are being invited on all aspects of designing, conducting and disseminating findings from impact evaluations and systematic reviews. The workshops can be at an introductory, intermediate or advanced level.  The duration of a workshop can vary from half a day to two full days.

All proposals must be submitted via email to : with email subject line ‘Proposal: presentation’ or ‘Proposal: workshop’. The proposal submission deadline is 3 July 2014.

Bursaries are available for participants from low- and middle-income countries. Employees of international organisations are not, however, eligible for bursaries (with the exception of the Asian Development Bank). A bursary will cover return economy airfare and hotel accommodation. All other expenses (ground transport, visas, meals outside the event) must be paid by the participant or their employer. Bursary applications must be made through the conference website: The deadline for bursary applications is 15 July 2014.

Non-sponsored participants are required to pay a fee of US$250 for participating in the conference or US$450 for participating in the pre-conference workshops as well as the conference. Those accepted to present a workshop will be exempted from the fee.

For more information on the submission of proposals for the conference, read the Call for Proposals.

For the latest updates on Making Impact Evaluation Matter, visit

Queries may be sent to


International Energy Policies & Programmes Evaluation Conference (IEPPEC), 9-11 September 2014

Posted on 27 May, 2014 – 9:09 PM

– the leading event for energy policy and programme evaluators

Sharing and Accelerating the Value and Use of Monitoring, Reporting and Verification Practices.

There is a wide range of regional, national and international policies and programmes designed to improve energy efficiency, and thereby reduce GHG emissions and living costs. These are top priorities for bodies such as the EU, IEA and UN in addressing the critical issues of climate change, resource conservation and living standards.

The increasing focus on this policy area has resulted in more challenging objectives and intended outcomes for interventions, along with growing investment. But are we investing correctly?

Pioneering approaches to evaluating investments and policy decisions related to energy efficiency will be at the forefront of presentations and debate at the IEPPEC, held in Berlin between the 9th and 11th of September 2014.

The conference presents an unparalleled opportunity to bring together policy and evaluation practitioners, academics and others from around the world involved in evaluation of energy and low carbon policies and programs. Attendees will be able to debate the most effective means of assuring that both commercial and community-based approaches to improving the sustainability of our energy use and making our economies more efficient are based on common metrics that can be compared across regions and regulatory jurisdictions. The focus over the three day conference is for policy makers, program managers and evaluators to share ideas for improving the assessment of potential and actual impacts of low carbon policies and programmes, and to facilitate a deeper understanding of evaluation methods that work in practice.

The conference features:

• Presentation of over 85 full, peer-reviewed evaluation papers by their authors
• Four panel discussions
• Two keynote sessions
• A two-day poster exhibit
• Lots of opportunity to share learning and network with other attendees

The conference is filling up fast, so to avoid disappointment, please book your place now by visiting

Additional information:

- For the draft conference agenda, please click here
- Refreshments, breakfasts and lunches are provided.
- For any further information, please visit


Running Randomized Evaluations: A Practical Guide

Posted on 22 May, 2014 – 10:01 AM
Glennerster, Rachel, and Kudzai Takavarasha. Running Randomized Evaluations: A Practical Guide. Princeton: Princeton University Press, 2013.


This book provides a comprehensive yet accessible guide to running randomized impact evaluations of social programs. Drawing on the experience of researchers at the Abdul Latif Jameel Poverty Action Lab, which has run hundreds of such evaluations in dozens of countries throughout the world, it offers practical insights on how to use this powerful technique, especially in resource-poor environments.

This step-by-step guide explains why and when randomized evaluations are useful, in what situations they should be used, and how to prioritize different evaluation opportunities. It shows how to design and analyze studies that answer important questions while respecting the constraints of those working on and benefiting from the program being evaluated. The book gives concrete tips on issues such as improving the quality of a study despite tight budget constraints, and demonstrates how the results of randomized impact evaluations can inform policy.

With its self-contained modules, this one-of-a-kind guide is easy to navigate. It also includes invaluable references and a checklist of the common pitfalls to avoid.

• Provides the most up-to-date guide to running randomized evaluations of social programs, especially in developing countries
• Offers practical tips on how to complete high-quality studies in even the most challenging environments
• Self-contained modules allow for easy reference and flexible teaching and learning
• Comprehensive yet nontechnical

Contents pages and more (via Amazon) & Brief chapter summaries

The first chapter: “This chapter provides an example of how a randomized evaluation can lead to large-scale change and provides a road map for an evaluation and for the rest of the book.”

Book review: “The impact evaluation primer you have been waiting for?” by Markus Goldstein, Development Impact blog, 27/11/2013

YouTube video: Book launch talk (1:21) “On 21 Nov, 2013, author of “Running Randomized Evaluations” and Executive Director of J-PAL, Rachel Glennerster, launched the new book at the World Bank. This was followed by a panel discussion with Alix Zwane, Executive Director of Evidence Action, Mary Ann Bates, Deputy Director of J-PAL North America and David Evans, Senior Economist, Office of the Chief Economist, Africa Region, World Bank, led by the Head of DIME, Arianna Legovini.”
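For readers new to the method, here is a minimal sketch of the core logic the book teaches (my own illustration, not drawn from the book, with made-up numbers): units are assigned to treatment or control at random, so a simple difference in mean outcomes estimates the average treatment effect.

```python
# Minimal sketch (not from the book): the core logic of a randomized evaluation.
# Units are randomly assigned to treatment or control, so the difference in mean
# outcomes is an unbiased estimate of the average treatment effect (ATE).
import random
import statistics

random.seed(42)

n = 1000
units = list(range(n))
random.shuffle(units)
treatment = set(units[: n // 2])  # half the units receive the (hypothetical) programme


def outcome(unit: int) -> float:
    """Hypothetical outcome: a noisy baseline plus a true programme effect of 0.5."""
    baseline = random.gauss(10.0, 2.0)
    return baseline + (0.5 if unit in treatment else 0.0)


outcomes = {u: outcome(u) for u in range(n)}
treated = [outcomes[u] for u in range(n) if u in treatment]
control = [outcomes[u] for u in range(n) if u not in treatment]

ate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated average treatment effect: {ate:.2f}")  # close to the true 0.5
```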


Working with messy data sets? Two useful and free tools

Posted on 25 April, 2014 – 5:52 PM

I have just come across two useful apps (a.k.a. software packages, a.k.a. tools) for when you are working with someone else’s data sets and/or data sets from multiple sources and times – or just your own data, which was in a less-than-perfect state when you last left it :-)

  • OpenRefine: Initially developed by Google and now open source, with its own support and development community. You can explore the characteristics of a data set, clean it in quick and comprehensive moves, transform its layout and formats, and reconcile and match multiple data sets. There are documentation and videos to show you how to do all this, as well as a book you can purchase. The Wikipedia entry provides a good overview.
  • Tabula: This package allows you to extract tables of data from PDFs, a task which can otherwise be very tiresome, messy and error-prone.

And some other packages I have yet to explore
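For a flavour of the kind of clean-up a tool like OpenRefine automates, here is a minimal pandas sketch (my own illustration – the column names and values are made up, and each step could equally be done interactively in OpenRefine):

```python
# Minimal sketch of typical "messy data" clean-up, using pandas.
# The data frame below stands in for a messy spreadsheet: stray whitespace,
# inconsistent capitalisation, numbers stored as text, and duplicate rows.
import pandas as pd

raw = pd.DataFrame({
    "district": [" Kilifi", "kilifi ", "Mombasa", "Mombasa"],
    "survey_date": ["2014-03-01", "2014-03-01", "2014-04-15", "2014-04-15"],
    "households": ["120", "120", "95", " 95"],
})

clean = raw.copy()
clean["district"] = clean["district"].str.strip().str.title()          # trim spaces, standardise case
clean["survey_date"] = pd.to_datetime(clean["survey_date"])            # text dates -> datetime type
clean["households"] = pd.to_numeric(clean["households"].str.strip())   # text counts -> numbers
clean = clean.drop_duplicates()                                        # drop now-identical rows

print(clean)  # two tidy rows remain, one per district
```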


“Quality evidence for policymaking. I’ll believe it when I see the replication”

Posted on 14 April, 2014 – 12:17 PM

3ie Replication Paper 1, by Annette N Brown, Drew B Cameron, Benjamin DK Wood, March 2014. Available as pdf

“1. Introduction:  Every so often, a well-publicised replication study comes along that, for a brief period, catalyses serious discussion about the importance of replication for social science research, particularly in economics. The most recent example is the Herndon, Ash, and Pollin replication study (2013) showing that the famous and highly influential work of Reinhart and Rogoff (2010) on the relationship between debt and growth is flawed.

McCullough and McKitrick (2009) document numerous other examples from the past few decades of replication studies that expose serious weaknesses in policy influential research across several fields. The disturbing inability of Dewald et al. (1986) to replicate many of the articles in their Journal of Money, Credit and Banking experiment is probably the most well-known example of the need for more replication research in economics. Yet, replication studies are rarely published and remain the domain of graduate student exercises and the occasional controversy.

This paper takes up the case for replication research, specifically internal replication, or the reanalysis of original data to address the original evaluation question. This focus helps to demonstrate that replication is a crucial element in the production of evidence for evidence-based policymaking, especially in low-and middle-income countries.

Following an overview of the main challenges facing this type of research, the paper then presents a typology of replication approaches for addressing the challenges. The approaches include pure replication, measurement and estimation analysis (MEA), and theory of change analysis (TCA). Although the challenges presented are not new, the discussion here is meant to highlight that the call for replication is not about catching bad or irresponsible researchers. It is about addressing very real challenges in the research and publication processes and thus about producing better evidence to inform development policymaking.”

Other quotes:

“When single evaluations are influential, and any contradictory evaluations of similar interventions can be easily discounted for contextual reasons, the minimum requirement for validating policy recommendations should be recalculating and re-estimating the measurements and findings using the original raw data to confirm the published results, or a pure replication.”

“On the bright side, there is some evidence of a correlation between public data availability and increased citation counts in the social sciences. Gleditsch (2003) finds that articles published in the Journal of Conflict Resolution that offer data in any form receive twice as many citations as comparable papers without available data (Gleditsch et al. 2003; Evanschitzky et al. 2007). ”

“Replication should be seen as part of the process for translating research findings into evidence for policy and not as a way to catch or call out researchers who, in all likelihood, have the best of intentions when conducting and submitting their research, but face understandable challenges. These challenges include the inevitability of human error, the uncontrolled nature of social science, reporting and publication bias, and the pressure to derive policy recommendations from empirical findings”

“Even in the medical sciences, the analysis of heterogeneity of outcomes, or post-trial subgroup analysis, is not accorded ‘any special epistemic status’ by the United States Food and Drug Administration rules (Deaton 2010 p.440). In the social sciences, testing for and understanding heterogeneous outcomes is crucial to policymaking. An average treatment effect demonstrated by an RCT could result from a few strongly positive outcomes and many negative outcomes, rather than from many positive outcomes, a distinction that would be important for programme design. Most RCT-based studies in development do report heterogeneous outcomes. Indeed, researchers are often required to do so by funders who want studies to have policy recommendations. As such, RCTs as practised – estimating treatment effects for groups not subject to random assignment – face the same challenges as other empirical social science studies.”
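As a toy numerical illustration of that point (my own numbers, not the paper’s): two programmes can report the same average treatment effect while telling very different stories at the individual level.

```python
# Toy illustration (not from the paper): identical average treatment effects,
# very different distributions of individual-level effects.
import statistics

# Programme A: everyone gains modestly.
effects_a = [1.0] * 100

# Programme B: a few large winners, many small losers.
effects_b = [8.0] * 20 + [-0.75] * 80

print(statistics.mean(effects_a))        # 1.0
print(statistics.mean(effects_b))        # 1.0 -- same ATE as Programme A
print(sum(e < 0 for e in effects_b))     # 80 participants are actually worse off
```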

“King (2006) encourages graduate students to conduct replication studies but, in his desire to help students publish, he suggests they may leave out replication findings that support the original article and instead look for findings that contribute by changing people’s minds about something. About sensitivity analysis, King (2006 p.121) advises, ‘If it turns out that all those other changes don’t change any substantive conclusions, then leave them out or report them’.” Aaarrrggghhh!

Rick Davies Comment: This paper is well worth reading!



Posted on 12 April, 2014 – 3:50 PM

By Devra C. Moehler. BBC Media Action Research Report // Issue 03 // March 2014 // Governance. Available as pdf

Foreword by BBC Media Action

“This report summarises how experimental design has been used to assess the effectiveness of governance interventions and to understand the effects of the media on political opinion and behaviour. It provides an analysis of the benefits and drawbacks of experimental approaches and also highlights how field experiments can challenge the assumptions made by media support organisations about the role of the media in different countries.

The report highlights that – despite interest in the use of RCTs to assess governance outcomes – only a small number of field experiments have been conducted in the area of media, governance and democracy.

The results of these experiments are not widely known among donors or implementers. This report aims to address that gap. It shows that media initiatives have led to governance outcomes including improved accountability. However, they have also at times had unexpected adverse effects.

The studies conducted to date have been confined to a small number of countries and the research questions posed were linked to specific intervention and governance outcomes. As a result, there is a limit to what policymakers and practitioners can infer. While this report highlights an opportunity for more experimental research, it also identifies that the complexity of media development can hinder the efficacy of experimental evaluation. It cautions that low-level interventions (eg those aimed at individuals as opposed to working at a national or organisational level) best lend themselves to experimentation. This could create incentives for researchers to undertake experimental research that answers questions focused on individual change rather than wider organisational and systemic change. For example, it would be relatively easy to assess whether a training course does or does not work. Researchers can randomise the journalists that were trained and assess the uptake and implementation of skills. However, it would be much harder to assess how capacity-building efforts affect a media house, its editorial values, content, audiences and media/state relations.

Designing such experiments will be challenging. The intention of this report is to start a conversation both within our own organisation and externally. As researchers we should be prepared to discover that experimentation may not be feasible or relevant for evaluation. In order to strengthen the evidence base, practitioners, researchers and donors need to agree which research questions can and should be answered using experimental research, and, in the absence of experimental research, to agree what constitutes good evidence.

BBC Media Action welcomes feedback on this report and all publications published under our Bridging Theory and Practice Research Dissemination Series.”

Introduction 5
Chapter 1: Background on DG field experiments 7
Chapter 2: Background on media development assistance and evaluation 9
Chapter 3: Current experiments and quasi-experimental studies on media in developing countries 11
Field experiments
Quasi experiments
Chapter 4: Challenges of conducting field experiments on media development 21
Level of intervention
Complexity of intervention
Research planning under ambiguity
Chapter 5: Challenges to learning from field experiments on media development 26
Chapter 6: Solutions and opportunities 29
Research in media scarce environments
Test assumptions about media effects
To investigate influences on media
References 33


Feminist Evaluation & Research: Theory & Practice

Posted on 11 April, 2014 – 8:45 AM



Sharon Brisolara PhD (Editor), Denise Seigart PhD (Editor), Saumitra SenGupta PhD (Editor)
Paperback: 368 pages, Publisher: The Guilford Press; Publication Date: March 28, 2014 | ISBN-10: 1462515207 | ISBN-13: 978-1462515202 | Edition: 1
Available on Amazon (though at an expensive US$43 for a paperback!)

No reviews available online as yet, but links to these will be posted here when they become available


I. Feminist Theory, Research and Evaluation

1. Feminist Theory: Its Domain and Applications, Sharon Brisolara
2. Research and Evaluation: Intersections and Divergence, Sandra Mathison
3. Researcher/Evaluator Roles and Social Justice, Elizabeth Whitmore
4. A Transformative Feminist Stance: Inclusion of Multiple Dimensions of Diversity with Gender, Donna M. Mertens
5. Feminist Evaluation for Nonfeminists, Donna Podems

II. Feminist Evaluation in Practice

6. An Explication of Evaluator Values: Framing Matters, Kathryn Sielbeck-Mathes and Rebecca Selove
7. Fostering Democracy in Angola: A Feminist-Ecological Model for Evaluation, Tristi Nichols
8. Feminist Evaluation in South Asia: Building Bridges of Theory and Practice, Katherine Hay
9. Feminist Evaluation in Latin American Contexts, Silvia Salinas Mulder and Fabiola Amariles

III. Feminist Research in Practice

10. Feminist Research and School-Based Health Care: A Three-Country Comparison, Denise Seigart
11. Feminist Research Approaches to Empowerment in Syria, Alessandra Galié
12. Feminist Research Approaches to Studying Sub-Saharan Traditional Midwives, Elaine Dietsch
Final Reflection. Feminist Social Inquiry: Relevance, Relationships, and Responsibility, Jennifer C. Greene



Independent Commission for Aid Impact publishes report on “How DFID Learns”

Posted on 4 April, 2014 – 9:54 AM

Terms of Reference for the review

The review itself, available here, published 4th April 2014

Selected quotes:

“Overall Assessment: Amber-Red: DFID has allocated at least £1.2 billion for research, evaluation and personnel development (2011-15). It generates considerable volumes of information, much of which, such as funded research, is publicly available. DFID itself is less good at using it and building on experience so as to turn learning into action. DFID does not clearly identify how its investment in learning links to its performance and delivering better impact. DFID has the potential to be excellent at organisational learning if its best practices become common. DFID staff learn well as individuals. They are highly motivated and DFID provides opportunities and resources for them to learn. DFID is not yet, however, managing all the elements that contribute to how it learns as a single, integrated system. DFID does not review the costs, benefits and impact of learning. Insufficient priority is placed on learning during implementation. The emphasis on results can lead to a bias to the positive. Learning from both success and failure should be systematically encouraged”.

RD Comment: The measurement of organisational learning is no easy matter, so it is likely that a lot of people would be very interested to know more about the ICAI approach. The ICAI report does define learning, as follows:

“We define learning as the extent to which DFID gains and uses knowledge to influence its policy, strategy, plans and actions. This includes knowledge from both its own work and that of others. Our report makes a distinction between the knowledge DFID collects and how it is actively applied, which we term as ‘know-how’.”

Okay, and how is this assessed in practice? The key word in this definition is “influence”. Influencing is a notoriously difficult process and outcome to measure. Unfortunately the ICAI report does not provide an explanation of how influence was assessed or measured. Annex 5 does show how the topic of learning was broken down into four areas: making programme choices; creating theories of change; choosing delivery mechanisms; and adapting and improving implementation of its activities. The report also provides some information on the sources used: “The 31 ICAI reports considered by the team examined 140 DFID programmes across 40 countries/territories, including visits undertaken to 24 DFID country offices”… “We spoke to 92 individuals, of whom 87 were DFID staff from: 11 DFID fragile state country offices; 5 non-fragile small country offices; 16 HQ departments; and 13 advisory cadres”. But how influence was measured remains unclear. ICAI could do better at modelling good practice here, i.e. transparency of evaluation methods. Perhaps then DFID could learn from ICAI about how to assess its (DFID’s) own learning in the future. Maybe…

Other quotes

“DFID is always losing and gaining knowledge. Staff are continuously leaving and joining DFID (sometimes referred to as ‘churn’). Fragile states are particularly vulnerable to high staff turnover by UK-based staff. For instance, in Afghanistan, DFID informed us that staff turnover is at a rate of 50% per year. We are aware of one project in the Democratic Republic of Congo having had five managers in five years. DFID inform us that a staff appointment typically lasts slightly under three years.” A table that follows shows an overall rate of around 10% per year.

“DFID does not track or report on the overall impact of evaluations. The challenge of synthesising, disseminating and using knowledge from an increasing number of evaluation reports is considerable. DFID reports what evaluations are undertaken and it comments on their quality. The annual evaluation report also provides some summary findings. We would have expected DFID also to report the impact that evaluations have on what it does and what it achieves. Such reporting would cover actions taken in response to individual evaluations and their impact on DFID’s overall value for money and effectiveness.” It is the case that some agencies do systematically track what happens to the recommendations made in evaluation reports.

“DFID has, however, outsourced much of its knowledge production. Of the £1.5 billion for knowledge generation and learning, it has committed at least £1.2 billion to fund others outside DFID to produce knowledge it can use (specifically research, evaluation and PEAKS). Staff are now primarily consumers of knowledge products rather than producers of knowledge itself. We note that there are risks to this model; staff may not have the practical experience that allows them wisely to use this knowledge to make programming decisions.”

“We note that annual and project completion reviews are resources that are not fully supporting DFID’s learning. We are concerned that the lesson-learning section was removed from the standard format of these reports and is no longer required. Lessons from these reports are not being systematically collated and that there is no central resource regularly quality assuring reviews.”

RD Comment: Paras 2.50 to 2.52 are entertaining. A UK Gov model is presented of how people learn, DFID staff are interviewed about how they think they learn, and then differences between the model and what staff report are ascribed to the staff’s lack of understanding: “This indicates that DFID staff do not consciously and sufficiently use the experience of their work for learning. It also indicates, within DFID, an over-identification of learning with formal training.” OR… maybe it indicates that the model was wrong and the staff were right???

This para might also raise a smile or two: “There is evidence that DFID staff are sometimes using evidence selectively. It appears this is often driven by managers requiring support for decisions. While such selective use of evidence is not the usual practice across the department, it appears to be occurring with sufficient regularity to be a concern. It is clearly unacceptable.” Golly…


Two papers and one book on process tracing methods

Posted on 27 March, 2014 – 5:26 PM
  • Understanding Process Tracing, David Collier, University of California, Berkeley. PS: Political Science and Politics 44, No.4 (2011):823-30. 7 pages.
    • Abstract: “Process tracing is a fundamental tool of qualitative analysis. This method is often invoked by scholars who carry out within-case analysis based on qualitative data, yet frequently it is neither adequately understood nor rigorously applied. This deficit motivates this article, which offers a new framework for carrying out process tracing. The reformulation integrates discussions of process tracing and causal-process observations, gives greater attention to description as a key contribution, and emphasizes the causal sequence in which process-tracing observations can be situated. In the current period of major innovation in quantitative tools for causal inference, this reformulation is part of a wider, parallel effort to achieve greater systematization of qualitative methods. A key point here is that these methods can add inferential leverage that is often lacking in quantitative analysis. This article is accompanied by online teaching exercises, focused on four examples from American politics, two from comparative politics, three from international relations, and one from public health/epidemiology”
      • Great explanation of the difference between straw-in-the-wind tests, hoop tests, smoking-gun tests and doubly-decisive tests, using the Sherlock Holmes story “Silver Blaze” (a toy Bayesian sketch of how these tests update confidence in a hypothesis appears at the end of this post)
  • Case selection techniques in Process-tracing and the implications of taking the study of causal mechanisms seriously, Derek Beach, Rasmus Brun Pedersen, 2012, 33 pages
    • Abstract: “This paper develops guidelines for each of the three variants of Process-tracing (PT): explaining outcome PT, theory-testing, and theory-building PT. Case selection strategies are not relevant when we are engaging in explaining outcome PT due to the broader conceptualization of outcomes that is a product of the different understandings of case study research (and science itself) underlying this variant of PT. Here we simply select historically important cases because they are for instance the First World War, not a ‘case of’ failed deterrence or crisis decision-making. Within the two theory-centric variants of PT, typical case selection strategies are most applicable. A typical case is one that is a member of the set of X, Y and the relevant scope conditions for the mechanism. We put forward that pathway cases, where scores on other causes are controlled for, are less relevant when we take the study of mechanisms seriously in PT, given that we are focusing our attention on how a mechanism contributes to produce Y, not on the causal effects of an X upon values of Y. We also discuss the role that deviant cases play in theory-building PT, suggesting that PT cannot stand alone, but needs to be complemented with comparative analysis of the deviant case with typical cases”
  • Process-Tracing Methods: Foundations and Guidelines, Derek Beach, Rasmus Brun Pedersen,  The University of Michigan Press (15 Dec 2012), 248 pages.
    • Description: “Process-tracing in social science is a method for studying causal mechanisms linking causes with outcomes. This enables the researcher to make strong inferences about how a cause (or set of causes) contributes to producing an outcome. Derek Beach and Rasmus Brun Pedersen introduce a refined definition of process-tracing, differentiating it into three distinct variants and explaining the applications and limitations of each. The authors develop the underlying logic of process-tracing, including how one should understand causal mechanisms and how Bayesian logic enables strong within-case inferences. They provide instructions for identifying the variant of process-tracing most appropriate for the research question at hand and a set of guidelines for each stage of the research process.” View the Table of Contents here:

PS 2014 03 28: I would also recommend a paper and book chapters by Mahoney on process tracing. Mahoney is referred to in Collier’s paper above.

  • Mahoney, James. 2012. “The Logic of Process Tracing Tests in the Social Sciences.” Sociological Methods & Research XX(X) (March): 1–28. doi:10.1177/0049124112437709.
    • Abstract: This article discusses process tracing as a methodology for testing hypotheses in the social sciences. With process tracing tests, the analyst combines preexisting generalizations with specific observations from within a single case to make causal inferences about that case. Process tracing tests can be used to help establish that (1) an initial event or process took place, (2) a subsequent outcome also occurred, and (3) the former was a cause of the latter. The article focuses on the logic of different process tracing tests, including hoop tests, smoking gun tests, and straw in the wind tests. New criteria for judging the strength of these tests are developed using ideas concerning the relative importance of necessary and sufficient conditions. Similarities and differences between process tracing and the deductive-nomological model of explanation are explored.
  • Goertz, Gary, and James Mahoney. 2012. A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences. Princeton University Press. See chapter 8 on causal mechanisms and process tracing, and the surrounding chapters 7 and 9, which make up a section on within-case analysis.

PS 2014 04 03: See also

PS 2014 10 07: See also this new paper

  • Schneider, C.Q., Rohlfing, I., 2013. Combining QCA and Process Tracing in Set-Theoretic Multi-Method Research. Sociological Methods & Research 42, 559–597. doi:10.1177/0049124113481341
    • Abstract:  Set-theoretic methods and Qualitative Comparative Analysis (QCA) in particular are case-based methods. There are, however, only few guidelines on how to combine them with qualitative case studies. Contributing to the literature on multi-method research (MMR), we offer the first comprehensive elaboration of principles for the integration of QCA and case studies with a special focus on case selection. We show that QCA’s reliance on set-relational causation in terms of necessity and sufficiency has important consequences for the choice of cases. Using real world data for both crisp-set and fuzzy-set QCA, we show what typical and deviant cases are in QCA-based MMR. In addition, we demonstrate how to select cases for comparative case studies aiming to discern causal mechanisms and address the puzzles behind deviant cases. Finally, we detail the implications of modifying the set-theoretic cross-case model in the light of case-study evidence. Following the principles developed in this article should increase the inferential leverage of set-theoretic MMR.”
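As flagged above, here is a toy sketch (my own illustration, not taken from any of these papers) of the Bayesian logic that Beach and Pedersen invoke for within-case inference: each type of process-tracing test can be read as a pair of likelihoods – how probable the evidence is if the hypothesis is true versus if it is false – and passing or failing the test updates one’s confidence accordingly.

```python
# Toy sketch of process-tracing tests as Bayesian updating (illustrative numbers only).

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of the hypothesis after observing the evidence (Bayes' rule)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior = 0.5  # start agnostic about the hypothesis

# Passing a hoop test: the evidence is near-certain if the hypothesis is true,
# but still fairly likely even if it is false, so passing gives only weak support.
print(round(update(prior, p_evidence_if_true=0.95, p_evidence_if_false=0.60), 2))  # ~0.61

# Passing a smoking-gun test: the evidence is unlikely even if the hypothesis is true,
# but very unlikely if it is false, so passing gives strong support.
print(round(update(prior, p_evidence_if_true=0.30, p_evidence_if_false=0.02), 2))  # ~0.94

# Failing a hoop test is damning, because the evidence was near-certain
# if the hypothesis were true.
print(round(update(prior, p_evidence_if_true=0.05, p_evidence_if_false=0.40), 2))  # ~0.11
```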