Free Coursera online course: Qualitative Comparative Analysis (QCA)

Highly recommended! A well-organised, very clear and systematic exposition. Available at: https://www.coursera.org/learn/qualitative-comparative-analysis

About this Course

Welcome to this massive open online course (MOOC) about Qualitative Comparative Analysis (QCA). Please read the points below before you start the course. This will help you prepare well for the course and follow it properly. It will also help you determine if the course offers the knowledge and skills you are looking for.

What can you do with QCA?

  • QCA is a comparative method that is mainly used in the social sciences for the assessment of cause-effect relations (i.e. causation).
  • QCA is relevant for researchers who normally work with qualitative methods and are looking for a more systematic way of comparing and assessing cases.
  • QCA is also useful for quantitative researchers who would like to assess alternative (more complex) aspects of causation, such as how factors work together in producing an effect (see the sketch after this list).
  • QCA can be used for the analysis of cases on all levels: macro (e.g. countries), meso (e.g. organizations) and micro (e.g. individuals).
  • QCA is mostly used for research on small- and medium-sized samples and populations (10-100 cases), but it can also be used for larger groups. Ideally, the number of cases is at least 10.
  • QCA cannot be used for an in-depth study of a single case.
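
To make the configurational logic concrete before you start: a crisp-set QCA compares cases as combinations of present/absent conditions in a truth table. Below is a minimal sketch in Python (the course itself works with dedicated QCA software; the conditions, outcome and data here are invented purely for illustration).

    import pandas as pd

    # Invented crisp-set data: each row is a case (e.g. a country);
    # each condition and the outcome are coded 1 (present) or 0 (absent).
    cases = pd.DataFrame({
        "strong_unions":    [1, 1, 0, 0, 1, 0],
        "left_government":  [1, 0, 1, 0, 1, 0],
        "generous_welfare": [1, 1, 1, 0, 1, 0],  # the outcome
    })

    # Truth table: group cases by their configuration of conditions and
    # check how consistently each configuration goes with the outcome.
    truth_table = (
        cases.groupby(["strong_unions", "left_government"])["generous_welfare"]
             .agg(n_cases="count", consistency="mean")
             .reset_index()
    )
    print(truth_table)
    # Configurations with consistency 1.0 are candidate sufficient
    # combinations of conditions; QCA software would then logically
    # minimise them into a simpler causal "recipe".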

What will you learn in this course?

  • The course is designed for people who have no or little experience with QCA.
  • After the course you will understand the methodological foundations of QCA.
  • After the course you will know how to conduct a basic QCA study by yourself.

How is this course organized?

  • The MOOC takes five weeks. The specific learning objectives and activities per week are mentioned in appendix A of the course guide. Please find the course guide under Resources in the main menu.
  • The learning objectives with regard to understanding the foundations of QCA and practically conducting a QCA study are pursued throughout the course. However, week 1 focuses more on the general analytic foundations, and weeks 2 to 5 are more about the practical aspects of a QCA study.
  • The activities of the course include watching the videos, consulting supplementary material where necessary, and doing assignments. The activities should be done in that order: first watch the videos; then consult supplementary material (if desired) for more details and examples; then do the assignments.
  • There are 10 assignments. Appendix A in the course guide states the estimated time needed to complete the assignments and how they are graded. Only assignments 1 to 6 and 8 are mandatory. These 7 mandatory assignments must be completed successfully to pass the course.
  • Completing the assignments successfully is one condition for receiving a course certificate. Further information about receiving a course certificate can be found here: https://learner.coursera.help/hc/en-us/articles/209819053-Get-a-Course-Certificate

About the supplementary material

  • The course can be followed by watching the videos alone. Studying the supplementary reading material (as mentioned in the course guide) is not strictly necessary, but it is recommended for further details and examples. Further, because some of the covered topics are quite technical (particularly topics in weeks 3 and 4 of the course), we provide several worked examples that supplement the videos by offering more specific illustrations and explanation. These worked examples can be found under Resources in the main menu.
  • Note that the supplementary readings are mostly not freely available. Books have to be bought or might be available in a university library; journal publications have to be ordered online or are accessible via a university license.
  • The textbook by Schneider and Wagemann (2012) functions as the primary reference for further information on the topics that are covered in the MOOC. Appendix A in the course guide mentions which chapters in that book can be consulted for which week of the course.
  • The publication by Schneider and Wagemann (2012) is comprehensive and detailed, and covers almost all topics discussed in the MOOC. However, for further study, appendix A in the course guide also mentions some additional supplementary literature.
  • Please find the full list of references for all citations (mentioned in this course guide, in the MOOC, and in the assignments) in appendix B of the course guide.

 

 

Story Completion exercises: An idea worth borrowing?

Yesterday, Theo Nabben, a friend and colleague of mine and an MSC trainer, sent me a link to a webpage full of information about a method called Story Completion: https://www.psych.auckland.ac.nz/en/about/story-completion.html

Background

Story Completion is a qualitative research method first developed in the field of psychology but subsequently taken up primarily by feminist researchers. It was originally of interest as a method of enquiring about psychological meanings, particularly those that people could not or did not want to communicate explicitly. However, it was subsequently re-conceptualised as a valuable method of accessing and investigating social discourses. These two different perspectives have been described as essentialist versus social constructionist.

Story completion is a useful tool for accessing meaning-making around a particular topic of interest. It is particularly useful for exploring (dominant) assumptions about a topic. This type of research can be framed as exploring either perceptions and understandings or social/discursive constructions of a topic.

This 2019 paper by Clarke et al. provides a good overview and is my main source of comments and explanations on this page.

How It Works

The researcher provides the participant with the beginning of the story, called the stem. Typically this is one sentence long but can be longer. For example…

“Catherine has decided that she needs to lose weight. Full of enthusiasm, and in order to prevent her from changing her mind, she is telling her friends in the pub about her plans and motivations.”

The researcher then asks the participant to extend that story, by explaining – usually in writing – what happens next. Typically the storyline is about a third person (e.g. Catherine), not about the participants themselves.

In practice, this form of enquiry can take various forms as suggested by Figure 1 below.

Figure 1: Four different versions of a Story Completion inquiry

Analysis of responses can be done in two ways: (a) horizontally – comparisons across respondents; (b) vertically – changes over time within the narratives.

Here is a good how-to-do-it introduction to Story Completion: http://blogs.brighton.ac.uk/sasspsychlab/2017/10/15/story-completion/

And here is an annotated bibliography that looks very useful: https://cdn.auckland.ac.nz/assets/psych/about/our-research/documents/Resources%20for%20qualitative%20story%20completion%20(July%202019).pdf

How it could be useful for monitoring and evaluation purposes

Story Completion exercises could be a good way of identifying different stakeholders' views of the possible consequences of an intervention. Variations in the text of the story stem could allow the exploration of consequences that might vary across gender or other social differences. Variations in the respondents being interviewed would allow exploration of differences in perspective on how a specific intervention might have consequences.

Of course, these responses will need interpretation and would benefit from further questioning. Participatory processes could be designed to enable this type of follow-up, rather than relying solely on third parties (e.g. researchers), however well informed they might be.

Variations could be developed where literacy is likely to be a problem. Voice recordings could be made instead, and small groups could be encouraged to collectively develop a response to the stem. There would seem to be plenty of room for creativity here.

Postscript

There is a considerable overlap between the Story Completion method and how the ParEvo participatory scenario planning process works.

The two methods have much in common: both are narrative-based; both start with a story stem/seed designed by the researcher/facilitator, onto which the respondent/participants add an extension describing what happens next; both are future-orientated and largely other-orientated, in other words not about the storytellers themselves; and both pay considerable attention, after the narratives are developed, to how those narratives can be analysed and compared.

Now for some key differences. With ParEvo the process of narrative development involves multiple people rather than one person. This means multiple alternative storylines can develop, some of which die out, some of which continue, and some of which branch into multiple variants. The other difference, already implied, is that the ParEvo process goes through multiple iterations, whereas the Story Completion process has only one. So in the case of ParEvo the storylines accumulate multiple segments of text, with a new segment added at each iteration. Content analysis can be carried out on the results of both Story Completion and ParEvo exercises, but in the case of ParEvo it is also possible to analyse the structure of people's participation and how it relates to the contents of the storylines.

 

Participatory approaches to the development of a Theory of Change: Beginnings of a list

Background

There have been quite a few generic guidance documents written on the use of Theories of Change. These are not the main focus of this list. Nevertheless, here are those I have come across:

Klein, M (2018) Theory of Change Quality Audit, at https://changeroo.com/toc-academy/posts/expert-toc-quality-audit-academy

UNDG (2017) Theory of Change – UNDAF Companion Guidance, UNDG.  https://undg.org/wp-content/uploads/2017/06/Theory-of-Change-UNDAF-Companion-Pieces.pdf

Van Es M, Guijt I and Vogel I (2015) Theory of Change Thinking in Practice. HIVOS. http://www.theoryofchange.nl/sites/default/files/resource/hivos_Theory of Change_guidelines_final_nov_2015.pdf.

Valters C (2015) Theories of Change: Time for a radical approach to learning in development. ODI. https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/9835.pdf.

Rogers P (2014) Theory of Change. Methodological Briefs Impact Evaluation No. 2. UNICEF.
http://devinfolive.info/impact_evaluation/img/downloads/Theory_of_Change_ENG.pdf.

Vogel I (2012) Review of the use of ‘Theory of Change’ in international development. Review Report for DFID.
http://www.dfid.gov.uk/r4d/pdf/outputs/mis_spc/DFID_Theory of Change_Review_VogelV7.pdf

Vogel I (2012) ESPA guide to working with Theory of Change for research projects. LTS/ITAD for ESPA. http://www.espa.ac.uk/files/espa/ESPA-Theory-of-Change-Manual-FINAL.pdf

Stein, D., & Valters, C. (2012). Understanding Theory of Change in International Development. The Asia Foundation. http://www2.lse.ac.uk/internationalDevelopment/research/JSRP/downloads/JSRP1.SteinValters.pdf

James, C. (2011, September). Theory of Change Review. A Report Commissioned by Comic Relief. http://www.theoryofchange.org/pdf/James_Theory of Change.pdf


Participatory approaches to ToC construction

Burbaugh B, Seibel M and Archibald T (2017) Using a Participatory Approach to Investigate a Leadership Program’s Theory of Change. Journal of Leadership Education 16(1): 192–205.

Katherine Austin-Evelyn and Erin Williams (2016) Mapping Change for Girls, One Post-It Note at a Time. Blog posting

Breuer E, Lee L, De Silva M, et al. (2016) Using theory of change to design and evaluate public health interventions: a systematic review. Implementation science: IS 11: 63. DOI: 10.1186/s13012-016-0422-6. Recommended

Breuer E, De Silva MJ, Fekadu A, et al. (2014) Using workshops to develop theories of change in five low and middle-income countries: lessons from the programme for improving mental health care (PRIME). International Journal of Mental Health Systems 8: 15. DOI: 10.1186/1752-4458-8-15.

De Silva MJ, Breuer E, Lee L, et al. (2014) Theory of Change: a theory-driven approach to enhance the Medical Research Council’s framework for complex interventions. Trials 15: 267. DOI: 10.1186/1745-6215-15-267.

Participatory Modelling: Beginnings of a list

What is Participatory Modelling?

Gray et al (2018): “The field of PM lies at the intersection of participatory approaches to planning, computational modeling, and environmental modeling”.

Wikipedia: “Participatory modeling is a purposeful learning process for action that engages the implicit and explicit knowledge of stakeholders to create formalized and shared representation(s) of reality. In this process, the participants co-formulate the problem and use modeling practices to aid in the description, solution, and decision-making actions of the group. Participatory modeling is often used in environmental and resource management contexts. It can be described as engaging non-scientists in the scientific process. The participants structure the problem, describe the system, create a computer model of the system, use the model to test policy interventions, and propose one or more solutions. Participatory modeling is often used in natural resources management, such as forests or water.

There are numerous benefits from this type of modeling, including a high degree of ownership and motivation towards change for the people involved in the modeling process. There are two approaches which provide highly different goals for the modeling: continuous modeling and conference modeling.”
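
One formalism often used to turn a stakeholder-drawn map into a runnable model is fuzzy cognitive mapping (the subject of the Olazabal et al. reference below): participants weight the influences between concepts, and the map is then iterated as a simple simulation. Here is a minimal sketch in Python; the concepts, weights and update rule are invented purely for illustration.

    import numpy as np

    # Invented fuzzy cognitive map: W[i, j] is the stakeholder-assigned
    # influence (-1 to 1) of concept i on concept j.
    concepts = ["rainfall", "crop_yield", "income"]
    W = np.array([
        [0.0, 0.8, 0.0],   # rainfall boosts crop yield
        [0.0, 0.0, 0.7],   # crop yield boosts income
        [0.0, 0.0, 0.0],
    ])

    def step(a, W, lam=1.0):
        # One common FCM update rule: keep some memory of the current
        # state, add the weighted influences, squash back into (0, 1).
        return 1 / (1 + np.exp(-lam * (a + a @ W)))

    a = np.array([0.9, 0.5, 0.5])  # scenario: start with high rainfall
    for _ in range(20):            # iterate towards a steady state
        a = step(a, W)
    print(dict(zip(concepts, a.round(2))))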

Recent references
  • Olazabal M, Neumann MB, Foudi S, et al. (n.d.) Transparency and Reproducibility in Participatory Systems Modelling: the Case of Fuzzy Cognitive Mapping. Systems Research and Behavioral Science 0(0). DOI: 10.1002/sres.2519.
  • Gray S, Voinov A, Paolisso M, et al. (2018) Purpose, processes, partnerships, and products: four Ps to advance participatory socio-environmental modeling. Ecological Applications 28(1): 46–61. DOI: 10.1002/eap.1627.
  • Hedelin B, Evers M, Alkan-Olsson J, et al. (2017) Participatory modelling for sustainable development: Key issues derived from five cases of natural resource and disaster risk management. Environmental Science & Policy 76: 185–196. DOI: 10.1016/j.envsci.2017.07.001.
  • Basco-Carrera L, Warren A, van Beek E, et al. (2017) Collaborative modelling or participatory modeling? A framework for water resources management. Environmental Modelling & Software 91: 95–110. DOI: 10.1016/j.envsoft.2017.01.014.
  • Eker S, Zimmermann N, Carnohan S, et al. (2017) Participatory system dynamics modelling for housing, energy and wellbeing interactions. Building Research & Information 0(0): 1–17. DOI: 10.1080/09613218.2017.1362919.
  • Voinov A, Kolagani N, McCall MK, et al. (2016) Modelling with stakeholders – Next generation. Environmental Modelling and Software 77: 196–220. DOI: 10.1016/j.envsoft.2015.11.016.
  • Voinov AA (2010) Participatory Modeling: What, Why, How? University of Twente. Available at:  http://www2.econ.iastate.edu/tesfatsi/ParticipatoryModelingWhatWhyHow.AVoinov.March2010.pdf 

See also Will Allen’s list of papers on participatory modelling

Dealing with missing data: A list

In this post “missing data” does not mean absence of whole categories of data, which is a common enough problem, but missing data values within a given data set.

While this is a common problem in almost all spheres of research/evaluation, it seems particularly common in more qualitative and participatory inquiry, where the same questions may not be asked of all participants/respondents. It is also likely to be a problem when data is extracted from documentary sources produced by different parties, e.g. project completion reports.

Some types of strategies (from Analytics Vidhya), illustrated in the code sketch after this list:

  1. Deletion:
    1. Listwise deletion: All cases with any missing data are excluded from the analysis.
    2. Pairwise deletion: An analysis is carried out with all cases in which the variables of interest are present. The sub-set of cases used will vary according to the sub-set of variables which are the focus of each analysis.
  2. Substitution
    1. Mean/ Mode/ Median Imputation: replacing the missing data for a given attribute by the mean or median (quantitative attribute) or mode (qualitative attribute) of all known values of that variable. Two variants:
      1. Generalized: Done for all cases
      2. Similar case: calculated separately for different sub-groups e.g. men versus women
    2. K Nearest Neighbour (KNN) imputation: The missing values of an attribute are imputed using the values found in the k cases that are most similar on the other attributes (where k = the number of neighbouring cases consulted).
    3. Prediction model: Using a sub-set of cases with no missing values, a model is developed that best predicts the presence of the attribute of interest. This is then applied to predict the missing values in the sub-set of cases with the missing values. Another variant, for continuous data:
      1. Regression Substitution: Using multiple-regression analysis to estimate a missing value.
  3. Error estimation (tbc)
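
Here is a minimal sketch of these strategies in Python, using pandas and scikit-learn. The tiny data set and variable names are invented, and a real application would need care about why the data are missing.

    import numpy as np
    import pandas as pd
    from sklearn.impute import KNNImputer
    from sklearn.linear_model import LinearRegression

    # Invented survey data with missing values.
    df = pd.DataFrame({
        "age":    [25, 32, np.nan, 41, 29, np.nan],
        "income": [30, 45, 38, np.nan, 35, 50],
        "female": [1, 0, 1, 1, 0, 1],
    })

    # 1.1 Listwise deletion: drop every case with any missing value.
    listwise = df.dropna()

    # 1.2 Pairwise deletion: each analysis uses whichever cases have the
    # variables it needs (pandas .corr() does this by default).
    pairwise_corr = df.corr()

    # 2.1 Mean imputation, generalized (all cases) and similar-case
    # (sub-group means, here by sex; the grouping column is dropped
    # from the result).
    mean_general = df.fillna(df.mean())
    mean_by_group = df.groupby("female").transform(lambda s: s.fillna(s.mean()))

    # 2.2 KNN imputation: fill gaps from the k most similar cases.
    knn = pd.DataFrame(KNNImputer(n_neighbors=2).fit_transform(df),
                       columns=df.columns)

    # 2.3 Prediction model / regression substitution: fit on complete
    # cases, then predict the missing 'age' values from 'income'.
    known = df.dropna(subset=["age", "income"])
    model = LinearRegression().fit(known[["income"]], known["age"])
    gaps = df["age"].isna() & df["income"].notna()
    df.loc[gaps, "age"] = model.predict(df.loc[gaps, ["income"]])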

References (please help me extend this list)

Note: I would like this list to focus on easily usable references i.e. those not requiring substantial knowledge of statistics and/or the subject of missing data

 

Overview: An open source document clustering and search tool

Overview is an open-source tool originally designed to help journalists find stories in large numbers of documents, by automatically sorting them according to topic and providing a fast visualization and reading interface. It’s also used for qualitative research, social media conversation analysis, legal document review, digital humanities, and more. Overview does at least three things really well.

  • Find what you don’t even know to look for.
  • See broad trends or patterns across many documents.
  • Make exhaustive manual reading faster, when all else fails.

Search is a wonderful tool when you know what you're trying to find, and Overview includes advanced search features. It's less useful when you start with a hunch or an anonymous tip, when there are many different ways to phrase what you're looking for, or when you're struggling with poor-quality material and OCR errors. By automatically sorting documents by topic, Overview gives you a fast way to see what you have.

In other cases you’re interested in broad patterns. Overview’s topic tree shows the structure of your document set at a glance, and you can tag entire folders at once to label documents according to your own category names. Then you can export those tags to create visualizations.

Rick Davies Comment: This service could be quite useful in various ways, including clustering sets of Most Significant Change (MSC) stories, micro-narratives from SenseMaker-type exercises, or collections of Twitter tweets found via a keyword search. For those interested in the details, and preferring transparency to apparent magic, Overview uses the k-means clustering algorithm, which is explained broadly here. One caveat: the processing of documents can take some time, so you may want to pop out for a cup of coffee while waiting. For those into algorithms, here is a healthy critique of careless use of k-means clustering, i.e. not paying attention to when its assumptions about the structure of the underlying data are inappropriate.
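
For those who want to see the bones of the approach, here is a minimal TF-IDF plus k-means sketch in Python with scikit-learn. To be clear, this is not Overview's actual pipeline, just the core idea, and the four toy "documents" are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Invented mini-corpus standing in for MSC stories or tweets.
    docs = [
        "the new well gave the village clean water",
        "the well improved the water supply in the village",
        "more girls enrolled in the new school this year",
        "school attendance by girls has risen sharply",
    ]

    # Represent each document as a TF-IDF vector, then cluster with k-means.
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)  # e.g. [0 0 1 1]: water stories vs school stories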

It is the combination of keyword search and automatic clustering that seems most useful to me, so far. Another good feature is the ability to label clusters of interest with one or more tags.

I have uploaded 69 blog postings from my Rick on the Road blog. If you want to see how Overview hierarchically clusters these documents, let me know; I will then enter your email address, which will let Overview give you access. It seems, so far, that there is no simple way of sharing access (but I am inquiring).

Research on the use and influence of evaluations: The beginnings of a list

This is intended to be the start of an accumulating list of references on the subject of evaluation use, particularly papers that review specific sets or examples of evaluations rather than discussing the issues in a less grounded way.

2016

2015

2014

2012

2009

2000

1997

1986

Related docs

  • Improving the use of monitoring & evaluation processes and findings. Conference Report, Centre for Development Innovation, Wageningen, June 2014  
    • “An existing framework of four areas of factors influencing use …:
      1. Quality factors, relating to the quality of the evaluation. These factors include the evaluation design, planning, approach, timing, dissemination and the quality and credibility of the evidence.
      2. Relational factors: personal and interpersonal; role and influence of evaluation unit; networks, communities of practice.
      3. Organisational factors: culture, structure and knowledge management.
      4. External factors, that affect utilisation in ways beyond the influence of the primary stakeholders and the evaluation process.”
  • Bibliography provided by ODI, in response to this post Jan 2015. Includes all ODI publications found using keyword “evaluation” – a bit too broad, but still useful
  • ITIG- Utilization of Evaluations- Bibliography. International Development Evaluation Association. Produced circa 2011/12

Livelihoods Monitoring and Evaluation: A Rapid Desk Based Study

by Kath Pasteur, 2014, 24 pages. Found here: http://www.evidenceondemand.info/livelihoods-monitoring-and-evaluation-a-rapid-desk-based-study

Abstract: “This report is the outcome of a rapid desk study to identify and collate the current state of evidence and best practice for monitoring and evaluating programmes that aim to have a livelihoods impact. The study identifies tried and tested approaches and indicators that can be applied across a range of livelihoods programming. The main focus of the report is an annotated bibliography of literature sources relevant to the theme. The narrative report highlights key themes and examples from the literature relating to methods and indicators. This collection of resources is intended to form the starting point for a more thorough organisation and analysis of material for the final formation of a Topic Guide on Livelihoods Indicators. This report has been produced by Practical Action Consulting for Evidence on Demand with the assistance of the UK Department for International Development (DFID) contracted through the Climate, Environment, Infrastructure and Livelihoods Professional Evidence and Applied Knowledge Services (CEIL PEAKS) programme, jointly managed by HTSPE Limited and IMC Worldwide Limited”

Full reference: Pasteur, K. Livelihoods monitoring and evaluation: A rapid desk based study. Evidence on Demand, UK (2014) 24 pp. [DOI: http://dx.doi.org/10.12774/eod_hd.feb2014.pasteur]

Process tracing: A list

  • Understanding Process Tracing, David Collier, University of California, Berkeley. PS: Political Science and Politics 44, No.4 (2011): 823–830. 7 pages.
    • Abstract: “Process tracing is a fundamental tool of qualitative analysis. This method is often invoked by scholars who carry out within-case analysis based on qualitative data, yet frequently it is neither adequately understood nor rigorously applied. This deficit motivates this article, which offers a new framework for carrying out process tracing. The reformulation integrates discussions of process tracing and causal-process observations, gives greater attention to description as a key contribution, and emphasizes the causal sequence in which process-tracing observations can be situated. In the current period of major innovation in quantitative tools for causal inference, this reformulation is part of a wider, parallel effort to achieve greater systematization of qualitative methods. A key point here is that these methods can add inferential leverage that is often lacking in quantitative analysis. This article is accompanied by online teaching exercises, focused on four examples from American politics, two from comparative politics, three from international relations, and one from public health/epidemiology”
      • Great explanation of the difference between straw-in-the-wind tests, hoop tests, smoking-gun tests and doubly-decisive tests, using the Sherlock Holmes story “Silver Blaze” (see the Bayesian updating sketch at the end of this list)
  • Case selection techniques in Process-tracing and the implications of taking the study of causal mechanisms seriously, Derek Beach, Rasmus Brun Pedersen, 2012, 33 pages
    • Abstract: “This paper develops guidelines for each of the three variants of Process-tracing (PT): explaining outcome PT, theory-testing, and theory-building PT. Case selection strategies are not relevant when we are engaging in explaining outcome PT due to the broader conceptualization of outcomes that is a product of the different understandings of case study research (and science itself) underlying this variant of PT. Here we simply select historically important cases because they are for instance the First World War, not a ‘case of’ failed deterrence or crisis decision-making. Within the two theorycentric variants of PT, typical case selection strategies are most applicable. A typical case is one that is a member of the set of X, Y and the relevant scope conditions for the mechanism. We put forward that pathway cases, where scores on other causes are controlled for, are less relevant when we take the study of mechanisms seriously in PT, given that we are focusing our attention on how a mechanism contributes to produce Y, not on the causal effects of an X upon values of Y. We also discuss the role that deviant cases play in theory-building PT, suggesting that PT cannot stand alone, but needs to be complemented with comparative analysis of the deviant case with typical cases”
  • Process-Tracing Methods: Foundations and Guidelines, Derek Beach, Rasmus Brun Pedersen,  The University of Michigan Press (15 Dec 2012), 248 pages.
    • Description: “Process-tracing in social science is a method for studying causal mechanisms linking causes with outcomes. This enables the researcher to make strong inferences about how a cause (or set of causes) contributes to producing an outcome. Derek Beach and Rasmus Brun Pedersen introduce a refined definition of process-tracing, differentiating it into three distinct variants and explaining the applications and limitations of each. The authors develop the underlying logic of process-tracing, including how one should understand causal mechanisms and how Bayesian logic enables strong within-case inferences. They provide instructions for identifying the variant of process-tracing most appropriate for the research question at hand and a set of guidelines for each stage of the research process.” View the Table of Contents here:
  • Mahoney, James. 2012. “The Logic of Process Tracing Tests in the Social Sciences.” Sociological Methods & Research XX(X) (March): 1–28. doi:10.1177/0049124112437709.
    • Abstract: This article discusses process tracing as a methodology for testing hypotheses in the social sciences. With process tracing tests, the analyst combines preexisting generalizations with specific observations from within a single case to make causal inferences about that case. Process tracing tests can be used to help establish that (1) an initial event or process took place, (2) a subsequent outcome also occurred, and (3) the former was a cause of the latter. The article focuses on the logic of different process tracing tests, including hoop tests, smoking gun tests, and straw in the wind tests. New criteria for judging the strength of these tests are developed using ideas concerning the relative importance of necessary and sufficient conditions. Similarities and differences between process tracing and the deductive-nomological model of explanation are explored.
  • Goertz, Gary, and James Mahoney. 2012. A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences. Princeton University Press. See chapter 8 on causal mechanisms and process tracing, and the surrounding chapters 7 and 9 which make up a section on within-case analysis
  • Hutchings, Claire. ‘Process Tracing: Draft Protocol’. Oxfam, 2013. Plus an associated blog posting and an Effectiveness Review which made use of the protocol
  • Schneider, C.Q., Rohlfing, I., 2013. Combining QCA and Process Tracing in Set-Theoretic Multi-Method Research. Sociological Methods & Research 42, 559–597. doi:10.1177/0049124113481341
    • Abstract:  Set-theoretic methods and Qualitative Comparative Analysis (QCA) in particular are case-based methods. There are, however, only few guidelines on how to combine them with qualitative case studies. Contributing to the literature on multi-method research (MMR), we offer the first comprehensive elaboration of principles for the integration of QCA and case studies with a special focus on case selection. We show that QCA’s reliance on set-relational causation in terms of necessity and sufficiency has important consequences for the choice of cases. Using real world data for both crisp-set and fuzzy-set QCA, we show what typical and deviant cases are in QCA-based MMR. In addition, we demonstrate how to select cases for comparative case studies aiming to discern causal mechanisms and address the puzzles behind deviant cases. Finally, we detail the implications of modifying the set-theoretic cross-case model in the light of case-study evidence. Following the principles developed in this article should increase the inferential leverage of set-theoretic MMR.”
  • Rohlfing, Ingo. “Comparative Hypothesis Testing Via Process Tracing.” Sociological Methods & Research 43, no. 4 (November 1, 2014): 606–42. doi:10.1177/0049124113503142.
    • Abstract: Causal inference via process tracing has received increasing attention during recent years. A 2 × 2 typology of hypothesis tests takes a central place in this debate. A discussion of the typology demonstrates that its role for causal inference can be improved further in three respects. First, the aim of this article is to formulate case selection principles for each of the four tests. Second, in focusing on the dimension of uniqueness of the 2 × 2 typology, I show that it is important to distinguish between theoretical and empirical uniqueness when choosing cases and generating inferences via process tracing. Third, I demonstrate that the standard reading of the so-called doubly decisive test is misleading. It conflates unique implications of a hypothesis with contradictory implications between one hypothesis and another. In order to remedy the current ambiguity of the dimension of uniqueness, I propose an expanded typology of hypothesis tests that is constituted by three dimensions.
  • Bennett, A., Checkel, J. (Eds.), 2014. Process Tracing: From Metaphor to Analytic Tool. Cambridge University Press
  • Befani, Barbara, and John Mayne. “Process Tracing and Contribution Analysis: A Combined Approach to Generative Causal Inference for Impact Evaluation.” IDS Bulletin 45, no. 6 (2014): 17–36. doi:10.1111/1759-5436.12110.
    • Abstract: This article proposes a combination of a popular evaluation approach, contribution analysis (CA), with an emerging method for causal inference, process tracing (PT). Both are grounded in generative causality and take a probabilistic approach to the interpretation of evidence. The combined approach is tested on the evaluation of the contribution of a teaching programme to the improvement of school performance of girls, and is shown to be preferable to either CA or PT alone. The proposed procedure shows that established Bayesian principles and PT tests, based on both science and common sense, can be applied to assess the strength of qualitative and quali-quantitative observations and evidence, collected within an overarching CA framework; thus shifting the focus of impact evaluation from ‘assessing impact’ to ‘assessing confidence’ (about impact).

  • Punton, M., Welle, K., 2015. Straws-in-the-wind, Hoops and Smoking Guns: What can Process Tracing Offer to Impact Evaluation?
    • Abstract:  “This CDI Practice Paper by Melanie Punton and Katharina Welle explains the methodological and theoretical foundations of process tracing, and discusses its potential application in international development impact evaluations. It draws on two early applications of process tracing for assessing impact in international development interventions: Oxfam Great Britain (GB)’s contribution to advancing universal health care in Ghana, and the impact of the Hunger and Nutrition Commitment Index (HANCI) on policy change in Tanzania. In a companion to this paper, Practice Paper 10 Annex describes the main steps in applying process tracing and provides some examples of how these steps might be applied in practice.”
  • Weller, N., & Barnes, J. (2016). Pathway Analysis and the search for causal mechanisms. Sociological Methods & Research, 45(3), 424–457.
    • Abstract: The study of causal mechanisms interests scholars across the social sciences. Case studies can be a valuable tool in developing knowledge and hypotheses about how causal mechanisms function. The usefulness of case studies in the search for causal mechanisms depends on effective case selection, and there are few existing guidelines for selecting cases to study causal mechanisms. We outline a general approach for selecting cases for pathway analysis: a mode of qualitative research that is part of a mixed-method research agenda, which seeks to (1) understand the mechanisms or links underlying an association between some explanatory variable, X1, and an outcome, Y, in particular cases and (2) generate insights from these cases about mechanisms in the unstudied population of cases featuring the X1/Y relationship. The gist of our approach is that researchers should choose cases for comparison in light of two criteria. The first criterion is the expected relationship between X1/Y, which is the degree to which cases are expected to feature the relationship of interest between X1 and Y. The second criterion is variation in case characteristics or the extent to which the cases are likely to feature differences in characteristics that can facilitate hypothesis generation. We demonstrate how to apply our approach and compare it to a leading example of pathway analysis in the so-called resource curse literature, a prominent example of a correlation featuring a nonlinear relationship and multiple causal mechanisms.
  • Befani, Barbara, and Gavin Stedman-Bryce. “Process Tracing and Bayesian Updating for Impact Evaluation.” Evaluation, June 24, 2016, 1356389016654584. doi:10.1177/1356389016654584.
    • Abstract: Commissioners of impact evaluation often place great emphasis on assessing the contribution made by a particular intervention in achieving one or more outcomes, commonly referred to as a ‘contribution claim’. Current theory-based approaches fail to provide evaluators with guidance on how to collect data and assess how strongly or weakly such data support contribution claims. This article presents a rigorous quali-quantitative approach to establish the validity of contribution claims in impact evaluation, with explicit criteria to guide evaluators in data collection and in measuring confidence in their findings. Coined ‘Contribution Tracing’, the approach is inspired by the principles of Process Tracing and Bayesian Updating, and attempts to make these accessible, relevant and applicable by evaluators. The Contribution Tracing approach, aided by a symbolic ‘contribution trial’, adds value to impact evaluation theory-based approaches by: reducing confirmation bias; improving the conceptual clarity and precision of theories of change; providing more transparency and predictability to data-collection efforts; and ultimately increasing the internal validity and credibility of evaluation findings, namely of qualitative statements. The approach is demonstrated in the impact evaluation of the Universal Health Care campaign, an advocacy campaign aimed at influencing health policy in Ghana.
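
Several of the items above (Collier; Mahoney; Befani and Stedman-Bryce) rest on the same Bayesian logic: finding a piece of evidence shifts confidence in a hypothesis according to how likely that evidence is if the hypothesis is true versus if it is false. A minimal sketch in Python, with invented probabilities:

    def bayes_update(prior, p_e_if_true, p_e_if_false):
        """Posterior probability of a hypothesis after finding evidence E,
        given P(E | hypothesis true) and P(E | hypothesis false)."""
        return (p_e_if_true * prior) / (
            p_e_if_true * prior + p_e_if_false * (1 - prior))

    # Smoking-gun test: E is rare unless the hypothesis is true, so
    # finding it raises confidence sharply (failing it proves little).
    print(round(bayes_update(0.5, 0.3, 0.05), 2))   # 0.86

    # Hoop test: E is almost certain if the hypothesis is true, so
    # passing it helps only modestly (failing it is very damaging).
    print(round(bayes_update(0.5, 0.95, 0.60), 2))  # 0.61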

A review of evaluations of interventions related to violence against women and girls – using QCA and process tracing

In this posting I am drawing attention to a blog by Michaela Raab and Wolf Stuppert, which is exceptional (or at least unusual) in a number of respects. The blog can be found at http://www.evawreview.de/

Firstly, the blog is not just about the results of a review but, more importantly, about the review process itself, written as that process proceeds. (I have not seen many blogs of this kind, but if you know of any others please let me know.)

Secondly, the blog is about the use of QCA and process tracing. There have been a number of articles about QCA in the journal Evaluation, but generally speaking relatively few evaluators working with development projects know much about QCA or process tracing.

Thirdly, the blog is about the use of QCA and process tracing as a means of reviewing the findings of past evaluations of interventions related to violence against women and girls. In other words, it is another approach to undertaking a kind of systematic review, notably one which does not require throwing out 95% of the available studies because their contents do not fit the methodology being used to do the review.

Fourthly, it is about combining the use of QCA and process tracing, i.e. combining cross-case comparisons with within-case analyses. QCA can help identify causal configurations of conditions associated with specific outcomes. But once found these associations need to be examined in depth to ensure there are plausible causal mechanisms at work. That is where process tracing comes into play.

I have two hopes for the EVAWG Review blog. One is that it will provide a sufficiently transparent account of the use of QCA to enable new potential users to understand how it works, along with an appreciation of its potentials and difficulties. The other is that the dataset used in the QCA analysis will be made publicly available, ideally via the blog itself. One of the merits of QCA analyses, as published so far, is that the datasets are often included as part of the published articles, which means others can then re-analyse the same data, perhaps from a different perspective. For example, I would like to test the results of the QCA analyses by using another method for generating results which have a comparable structure (i.e. descriptions of one or more configurations of conditions associated with the presence and absence of expected outcomes). I have described this method elsewhere (Decision Tree algorithms, as used in data mining); a minimal sketch follows below.
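
As a rough illustration of that alternative, here is a sketch using a scikit-learn decision tree on an invented binary data set with a QCA-like structure; each root-to-leaf path reads as a configuration of conditions associated with the presence or absence of the outcome.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented cases: three binary conditions and a binary outcome.
    conditions = ["training", "funding", "local_partner"]
    X = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0]]
    y = [1, 1, 0, 0, 1, 0]  # outcome present/absent

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Each path from root to leaf is a configuration of conditions,
    # comparable in structure to a QCA solution term.
    print(export_text(tree, feature_names=conditions))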

There are also some challenges facing this use of QCA, and I would like to see how the blog's authors deal with them. In RCTs there need to be both comparable interventions and comparable outcomes, e.g. cash transfers provided to many people in some standardised manner, and a common measure of household poverty status. With QCA (and Decision Tree) analyses comparable outcomes are still needed, but not comparable interventions. These can be many and varied, as can the wider context in which they are provided. The challenge with Raab and Stuppert's work on VAWG is that there will be many and varied outcome measures as well as interventions. They will probably need to do multiple QCA analyses, focusing on sub-sets of evaluations within which there are one or more comparable outcomes. But by focusing in this way, they may end up with too few cases (evaluations) to produce plausible results, given the diversity of (possibly) causal conditions they will be exploring.

There is a much bigger challenge still. On re-reading the blog I realised this is not simply a kind of systematic review of the available evidence using a different method. Instead it is a kind of meta-evaluation, where the focus is on comparing the evaluation methods used across the population of evaluations they manage to amass. The problem of finding comparable outcomes is much bigger here. For example, on what basis will they rate or categorise evaluations as successful (e.g. valid and/or useful)? There seems to be a chicken-and-egg problem lurking here. Help!

PS1: I should add that this work is being funded by DFID, but the types of evaluations being reviewed are not limited to evaluations of DFID projects.

PS2 2013 11 07: I now see from the team's latest blog posting that the common outcome of interest will be the usefulness of the evaluation. I would be interested to see how they assess usefulness in some way that is reasonably reliable.

PS3 2014 01 07: I continue to be impressed by the team’s efforts to publicly document the progress of their work. Their Scoping Report is now available online, along with a blog commentary on progress to date (2013 01 06)

PS4 2014 03 27: The Inception Report is now available on the VAWG blog. It is well worth reading, especially the sections explaining the methodology and the evaluation team's response to comments by the Specialised Evaluation and Quality Assurance Service (SEQUAS, 4 March 2014) on pages 56-62, some of which are quite tough.

Some related/relevant reading: