A review of evaluations of interventions related to violence against women and girls – using QCA and process tracing

In this posting I am drawing attention to a blog by Michaela Raab and Wolf Stuppert, which is exceptional (or at least unusual) in a number of respects. The blog can be found at http://www.evawreview.de/

Firstly, the blog is not just about the results of a review but, more importantly, about the review process itself, written as that process proceeds. (I have not seen many blogs of this kind, but if you know of any others please let me know.)

Secondly, the blog is about the use of QCA (Qualitative Comparative Analysis) and process tracing. There have been a number of articles about QCA in the journal Evaluation, but generally speaking relatively few evaluators working with development projects know much about QCA or process tracing.

Thirdly, the blog is about the use of QCA and process tracing as a means of reviewing the findings of past evaluations of interventions related to violence against women and girls. In other words, it is another approach to undertaking a kind of systematic review, notably one which does not require throwing out 95% of the available studies because their contents do not fit the methodology being used to do the review.

Fourthly, it is about combining the use of QCA and process tracing, i.e. combining cross-case comparisons with within-case analyses. QCA can help identify causal configurations of conditions associated with specific outcomes. But once found, these associations need to be examined in depth to ensure there are plausible causal mechanisms at work. That is where process tracing comes into play.
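To make the cross-case step more concrete, here is a minimal sketch of the kind of truth table that crisp-set QCA builds: cases are grouped by their configuration of binary conditions, and each configuration is scored for how consistently it is associated with the outcome. The condition names and data below are entirely hypothetical, not taken from the review.

```python
# A minimal sketch of the cross-case step in crisp-set QCA.
# Condition names and data are hypothetical illustrations.
from collections import defaultdict

conditions = ["community_engagement", "legal_reform", "long_duration"]

# Each case: binary conditions (1 = present) plus a binary outcome.
cases = [
    {"community_engagement": 1, "legal_reform": 0, "long_duration": 1, "outcome": 1},
    {"community_engagement": 1, "legal_reform": 1, "long_duration": 1, "outcome": 1},
    {"community_engagement": 0, "legal_reform": 1, "long_duration": 0, "outcome": 0},
    {"community_engagement": 1, "legal_reform": 0, "long_duration": 1, "outcome": 1},
    {"community_engagement": 0, "legal_reform": 0, "long_duration": 1, "outcome": 0},
]

# Group cases by their configuration of conditions (one truth-table row each).
rows = defaultdict(list)
for case in cases:
    config = tuple(case[c] for c in conditions)
    rows[config].append(case["outcome"])

# Consistency: the share of cases with this configuration that show the outcome.
for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(dict(zip(conditions, config)), f"n={len(outcomes)}", f"consistency={consistency:.2f}")
```

Configurations that show high consistency with the outcome would then be the candidates for in-depth, within-case examination via process tracing.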

I have two hopes for the EVAWG Review blog. One is that it will provide a sufficiently transparent account of the use of QCA to enable new potential users to understand how it works, along with an appreciation of its potentials and difficulties. The other is that the dataset used in the QCA analysis will be made publicly available, ideally via the blog itself. One of the merits of QCA analyses published so far is that the datasets often appear as part of the articles themselves, which means others can re-analyse the same data, perhaps from a different perspective. For example, I would like to test the results of the QCA analyses by using another method for generating results with a comparable structure (i.e. descriptions of one or more configurations of conditions associated with the presence or absence of expected outcomes). I have described this method elsewhere (Decision Tree algorithms, as used in data mining).
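For readers unfamiliar with the Decision Tree alternative, the sketch below shows the general idea, using scikit-learn on the same kind of hypothetical data as above: the learned tree reads as nested if/else rules, i.e. configurations of conditions associated with the presence or absence of the outcome. This is only an illustration of the technique, not the review team's method.

```python
# A sketch of the Decision Tree approach mentioned above: learn rules
# (configurations of conditions) that predict presence/absence of the outcome.
# Condition names and data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

conditions = ["community_engagement", "legal_reform", "long_duration"]
X = [
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
]
y = [1, 1, 0, 1, 0]  # 1 = outcome present, 0 = absent

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# The printed tree reads as nested if/else rules, structurally comparable
# to QCA's configurations of conditions.
print(export_text(tree, feature_names=conditions))
```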

There are also some challenges facing this use of QCA, and I would like to see how the blog's authors deal with them. In RCTs there need to be both comparable interventions and comparable outcomes, e.g. cash transfers provided to many people in some standardised manner, and a common measure of household poverty status. With QCA (and Decision Tree) analyses comparable outcomes are still needed, but not comparable interventions. These can be many and varied, as can the wider contexts in which they are provided. The challenge with Raab and Stuppert's work on VAWG is that there will be many and varied outcome measures as well as interventions. They will probably need to do multiple QCA analyses, focusing on sub-sets of evaluations within which there are one or more comparable outcomes. But by focusing in this way, they may end up with too few cases (evaluations) to produce plausible results, given the diversity of (possibly) causal conditions they will be exploring.
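The "too few cases" risk can be made concrete with some simple arithmetic: with k binary conditions there are 2^k possible configurations, so a subset of N evaluations can cover at most N of them, and the rest remain unobserved (QCA's "logical remainders"). The numbers below are purely illustrative.

```python
# With k binary conditions there are 2**k possible configurations,
# so N cases can cover at most N of them.
for k in range(3, 9):
    print(f"{k} conditions -> {2 ** k} possible configurations")

# e.g. with 6 conditions and only 20 comparable evaluations, at least
# 2**6 - 20 = 44 configurations are necessarily unobserved.
```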

There is a much bigger challenge still. On re-reading the blog I realised this is not simply a kind of systematic review of the available evidence using a different method. Instead it is a kind of meta-evaluation, where the focus is on comparing the evaluation methods used across the population of evaluations they manage to amass. The problem of finding comparable outcomes is much bigger here. For example, on what basis will they rate or categorise evaluations as successful (e.g. valid and/or useful)? There seems to be a chicken-and-egg problem lurking here. Help!

PS1: I should add that this work is being funded by DFID, but the types of evaluations being reviewed are not limited to evaluations of DFID projects.

PS2 2013 11 07: I now see from the team's latest blog posting that the common outcome of interest will be the usefulness of the evaluation. I would be interested to see how they assess usefulness in a way that is reasonably reliable.

PS3 2014 01 07: I continue to be impressed by the team's efforts to publicly document the progress of their work. Their Scoping Report is now available online, along with a blog commentary on progress to date (2014 01 06).

PS4 2014 03 27: The Inception Report is now available on the VAWG blog. It is well worth reading, especially the sections explaining the methodology and the evaluation team's response to comments by the Specialised Evaluation and Quality Assurance Service (SEQUAS, 4 March 2014) on pages 56-62, some of which are quite tough.

M&E blogs: A List

  • EvalThoughts, by Amy Germuth, President of EvalWorks, LLC, a woman-owned small evaluation and survey research consulting business in Durham, NC.
  • Evaluation and Benchmarking. “This weblog is an on-line workspace for the whole of Victorian government Benchmarking Community of Practice.”
  • M&E Blog, by…?
  • Aid on the Edge of Chaos, by Ben Ramalingam
  • Design, Monitoring and Evaluation, by Larry Dershem – Tbilisi, Georgia
  • Managing for Impact: About “Strengthening Management for Impact” for MFIs
  • Genuine Evaluation: “Patricia J Rogers and E Jane Davidson blog about real, genuine, authentic, practical evaluation”
  • Practical Evaluation, by Samuel Norgah
  • AID/IT M&E Blog: “…is written by Paul Crawford, and is part of a wider AID/IT website”
  • Evaluateca: Spanish-language evaluation blog maintained by Rafael Monterde Diaz. Information, news, views and critical comments on evaluation.
  • Empowerment Evaluation Blog: “This is a place for exchanges and discussions about empowerment evaluation practice, theory, and current debates in the literature.” Run by Dr. David Fetterman.
  • E-valuation: “constructing a good life through the exploration of value and valuing” by Sandra Mathison, Professor, Faculty of Education, University of British Columbia
  • Intelligent Measurement. This blog is created by Richard Gaunt in London and Glenn O’Neil in Geneva and focuses on evaluation and measurement in communications, training, management and other fields.
  • Managing for Impact: Let’s talk about MandE! “Welcome to the dedicated SMIP ERIL blog on M&E for managing for impact! An IFAD-funded Regional Programme, SMIP (Strengthening Management for Impact) is working with pro-poor initiatives in eastern & southern Africa to build capacities to better manage towards impact. It does so through training courses for individuals, technical support to projects & programmes, generating knowledge, providing opportunities for on-the-job training, and policy dialogue.”
  • MCA Monitor Blog “…is a part of CGD’s MCA Monitor Initiative, which tracks the effectiveness of the US Millennium Challenge Account. Sheila Herrling, Steve Radelet and Amy Crone, key members of CGD’s MCA Monitor team, contribute regularly to the blog. We encourage you to join the discussion by commenting on any post”
  • OutcomesBlog.Org: “Dr Paul Duignan on real world strategy, outcomes, evaluation & monitoring.” Dr Paul Duignan is a specialist in outcomes, performance management, strategic decision making, evaluation and assessing research and evidence as the basis for decision making. He has developed the area of outcomes theory and its application in Systematic Outcomes Analysis, the outcomes software DoView, and a simplified approach to his work, Easy Outcomes. He works at an individual, organizational and societal level to develop ways of identifying and measuring outcomes which facilitate effective action.
  • Rick on the Road: “Reflections on the monitoring and evaluation of development aid projects, programmes and policies, and development of organisations’ capacity to do the same. This blog also functions as the Editorial section of the MandE NEWS website.”
  • The Usable Blog: “Thoughts, ideas and resources for non-profit organizations and funders about the independent sector in general and program evaluation in particular,” by Eric Graig.
  • The MSC Translations blog is maintained by Rick Davies, and is part of the MandE NEWS website. The purpose of this blog is: 1. To make available translations of the MSC Guide in languages other than English. 2. To solicit and share comments on the quality of these translations, so they can be improved. The original English version can be found here: The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use.
  • Zen and the art of monitoring & evaluation “This blog is some of the rambling thoughts of Paul Crawford, a monitoring & evaluation (M&E) consultant for international aid organisations” Paul is based in Australia.

And other lists of M&E blogs
