Two useful papers on the practicalities of doing Realist Evaluation

1. Punton, M., Vogel, I., & Lloyd, R. (2016). Reflections from a Realist Evaluation in Progress: Scaling Ladders and Stitching Theory. Brighton: IDS.

2. Manzano, A. (2016). The craft of interviewing in realist evaluation. Evaluation, 22(3), 342–360.

Rick Davies comment: I have listed these two papers here because I think they both make useful contributions towards enabling people (myself and others) to understand how to actually do a Realist Evaluation. My previous reading of comments that Realist Evaluation (RE) is “an approach” or “a way of thinking” rather than “a method” has not been encouraging. Both of these papers provide practically relevant detail. The Punton et al. paper includes comments on the difficulties encountered and on where and why the team deviated from current or suggested practice, which I found refreshing.

I have listed some issues of interest to me below, with reflections on the contributions of the two papers.

Interviews as sources of knowledge

Interviews with stakeholders about whether, how and why a program works are a key resource in most REs (Punton et al.). Respondents’ views are both sources of theories and sources of evidence for and against those theories, and there seems to be potential for mixing these up in a way that makes the process of theory elicitation and testing less explicit than it should be. Punton et al. have partially addressed this by coding the status of views about reported outcomes as “observed”, “anticipated” or “implied”. The same approach could be taken when recording respondents’ views on the contexts and mechanisms involved.
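As a minimal sketch of how such coding might be recorded, here is one possible data structure. The status labels are the three reported by Punton et al.; the class itself, and the extension of those labels to context and mechanism claims, are my own illustrative assumptions, not the coding frame either paper actually used.

```python
from dataclasses import dataclass

# Status labels reported by Punton et al. for views about outcomes;
# extending them to context and mechanism claims is the suggestion above.
STATUSES = {"observed", "anticipated", "implied"}
ELEMENTS = {"context", "mechanism", "outcome"}

@dataclass
class CodedExcerpt:
    respondent_id: str
    element: str   # which CMO element the excerpt speaks to
    status: str    # epistemic status of the respondent's claim
    text: str

    def __post_init__(self):
        if self.element not in ELEMENTS:
            raise ValueError(f"unknown element: {self.element}")
        if self.status not in STATUSES:
            raise ValueError(f"unknown status: {self.status}")

# Example: an outcome the respondent saw directly, versus a mechanism they inferred.
excerpts = [
    CodedExcerpt("R01", "outcome", "observed", "Staff now cite evidence in briefings."),
    CodedExcerpt("R01", "mechanism", "implied", "Probably because the training built confidence."),
]
```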

Manzano makes a number of useful distinctions between RE and constructivist interview approaches. But one distinction seems unrealistic, so to speak: “…data collected through qualitative interviews are not considered constructions. Data are instead considered ‘evidence for real phenomena and processes’”. Yet respondents themselves, as shown in some of the quotes in the paper, will indicate that on some issues they are not sure, they have forgotten, or they are guessing. What is real here is that respondents are making their best efforts to construct some sense out of a situation. So careful coding of the status of respondents’ views is important: are they theories or observations, and if observations, what standing do they have?

How many people to interview

According to Manzano there is no simple answer to this question, but it is clear that in the early stages of an RE the emphasis is on capturing a diversity of stakeholder views, in such a way that the diversity of possible context-mechanism-outcome (CMO) configurations might be identified. So I was worried that the Punton et al. paper referred to interviews being conducted in only 5 of the 11 countries where the BCURE program was operating. If some contextual differences are more influential than others, then I would guess that cross-country differences would be one such type. I know that in all evaluations resources are limited and choices need to be made. But this one puzzled me.

[Later edit] I think part of the problem here is the lack of what could be called an explicit search strategy. The number of useful CMOs that could be identified is potentially equal to the number of people affected by a program, or perhaps even a multiple of that if they encountered the program on multiple occasions. Do you try to identify all of these, or do you stop when the number of new CMOs per x additional interviewees starts to drop off (a simple stopping rule of this kind is sketched below)? Each of these is a kind of search strategy. One pragmatic way of limiting the number of possible CMOs to investigate might be to decide in advance just how disaggregated an analysis of “what works for whom in what circumstances” should be. To do this one would need to be clear about what the unit of analysis should be. I partly agree and partly disagree with Manzano’s point that “the unit of analysis is not the person, but the events and processes around them; every unique program participant uncovers a collection of micro-events and processes, each of which can be explored in multiple ways to test theories”. From my point of view, the person, especially the intended beneficiaries, should be the central focus, and selected events and processes are relevant insofar as they impinge on those people’s lives. I would re-edit the phrase above to put the emphasis on the “whom”: “what works for WHOM in what circumstances”.
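Here is a minimal sketch of the second search strategy, stopping when recent interviews stop yielding new CMOs. The window size, threshold and CMO labels are invented for illustration; neither paper specifies such a rule.

```python
def reached_saturation(cmos_per_interview, window=5, threshold=1):
    """Stop when the last `window` interviews together yielded fewer
    than `threshold` previously unseen CMOs.

    cmos_per_interview: list of sets, one set of CMO labels per interview,
    in the order the interviews were conducted.
    """
    if len(cmos_per_interview) < window:
        return False  # too early to judge
    seen_before = set().union(*cmos_per_interview[:-window])
    recent_new = set().union(*cmos_per_interview[-window:]) - seen_before
    return len(recent_new) < threshold

# Illustrative run: from interview 4 onwards nothing new turns up.
history = [{"C1M1O1"}, {"C1M1O1", "C2M1O1"}, {"C2M2O1"}, {"C1M1O1"},
           {"C2M1O1"}, {"C1M1O1"}, {"C2M2O1"}, {"C1M1O1"}]
print(reached_saturation(history))  # True: recent interviews added no new CMOs
```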

If the unit of analysis is some category of persons, then my guess is that the smallest unit of analysis would be a group of people, probably defined by a combination of geographic dimensions (e.g. administrative units) and demographic dimensions (e.g. gender, religion, ethnicity of the people to be affected). The minimum number of potential differences between these units of analysis seems to be N-1 (where N = the number of identifiable groups), as shown in the fictional example below, where each green node is a point of difference between groups of people. Each of these points of difference could be explained by a particular CMO.
[Figure: fictional CMO tree, with green nodes marking points of difference between groups]
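The N-1 count can be checked with a small sketch. The dimensions and group labels below are invented; the point is simply that a tree splitting N leaf groups one binary division at a time has exactly N-1 branch points.

```python
from itertools import product

# Hypothetical dimensions; real groupings would come from the program design.
regions = ["North", "South"]
genders = ["women", "men"]
groups = list(product(regions, genders))  # N = 4 leaf groups

n = len(groups)
print(f"{n} groups -> at most {n - 1} points of difference")

# A tree that splits first by region, then by gender within each region,
# has 1 + 2 = 3 = N - 1 internal nodes. Each internal node could, in
# principle, correspond to a distinct CMO explaining that difference.
```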

I have one reservation about this approach. It requires some form of prior knowledge about the groupings that matter. That is not unreasonable when evaluating a program that had an explicit goal of reaching particular people. But I am wondering if there is also a more inductive search option. [To be continued…perhaps]

How to interview

Both papers offer a lot of useful advice on how to interview from an RE perspective, primarily oriented towards theory elicitation and clarification.

How to conceptualise CMOs

Both papers noted difficulties in operationalising the idea of CMOs, but also offered useful advice in this area. Manzano broke the concept of Context down into sub-constructs, such as characteristics of the patients, staff and infrastructure in the setting she was examining. Punton et al. introduced a new category of Intervention, alongside Context and Mechanism. In a development aid context this makes a lot of sense to me. Both authors used interviewing methods that avoided any reference to “CMOs” as a technical term.
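One way to picture these two refinements together is as a record structure. The field names and example content below are my own invention, not taken from either paper: Intervention is added as a fourth element following Punton et al., and Context is held as a set of sub-constructs following Manzano.

```python
from dataclasses import dataclass

@dataclass
class ICMO:
    """Hypothetical 'ICMO' configuration: Intervention added alongside
    Context, Mechanism and Outcome, with Context broken into sub-constructs."""
    intervention: str
    context: dict   # sub-constructs, e.g. staff, infrastructure
    mechanism: str
    outcome: str

example = ICMO(
    intervention="Evidence-use training for civil servants",
    context={"staff": "high turnover", "infrastructure": "limited internet access"},
    mechanism="Increased confidence in appraising evidence",
    outcome="Evidence cited more often in policy briefs",
)
```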

Consolidating the theories

After exploring what could be an endless variety of CMOs, an RE process needs to enter a consolidation phase. Manzano points out: “In summary, this phase gives more detailed consideration to a smaller number of CMOs which belong to many families of CMOs”. Punton et al. refer to a process of abstraction that leads to more general explanations “which encompass findings from across different respondents and country settings”. This process sounds very similar in principle to the process of minimization used in QCA, which takes a more algorithm-based approach. To my surprise, the Punton et al. paper highlights differences between QCA and RE rather than potential synergies. A good point about their paper is that it explains this stage in more detail than Manzano’s, which focuses more specifically on interview processes.
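For readers unfamiliar with QCA, the core of its minimization step can be shown in a toy sketch. This is Quine-McCluskey-style pairwise reduction, not a full QCA implementation, and the conditions and values are invented: if two configurations produce the same outcome and differ in exactly one condition, that condition is shown to be irrelevant and can be dropped.

```python
def reduce_pair(a, b):
    """If configurations a and b (dicts of condition -> True/False/None)
    differ in exactly one condition, return the merged configuration with
    that condition dropped (set to None); otherwise return None."""
    diffs = [k for k in a if a[k] != b[k]]
    if len(diffs) != 1:
        return None
    merged = dict(a)
    merged[diffs[0]] = None  # condition shown to be irrelevant
    return merged

# Toy example: training-plus-mentoring works whether or not the
# country is high income, so the income condition drops out.
c1 = {"training": True, "mentoring": True, "high_income": True}
c2 = {"training": True, "mentoring": True, "high_income": False}
print(reduce_pair(c1, c2))
# {'training': True, 'mentoring': True, 'high_income': None}
```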

Testing the theories

The Punton et al. paper does not go into this territory, because of the early stage of the work it describes. Manzano makes more reference to this process, but mainly in the context of interviews that elicit people’s theories. This is the territory where more light needs to be shone in future, hopefully by follow-up papers from Punton et al. My continuing impression is that theory elicitation and testing are so bound up together that the process of testing is effectively not transparent, and thus difficult to verify or replicate. But readers could point me to other papers where this view might be corrected… :-)

 

One thought on “Two useful papers on the practicalities of doing Realist Evaluation”

  1. I have lots of thoughts to offer on this topic, but for now I will restrict myself to the following, regarding how many people to interview. My view is that when using qualitative interviews as the instrument for data collection, it is nevertheless important to use statistically robust methods for determining how many, and whom, to interview. Begin by identifying the complete pool of potential interviewees. If logistics permit, you can draw a random sample of adequate size from the pool. If individuals in the pool are widely separated geographically, first create geographic classes, then select classes at random, and then select interviewees at random from each selected class (a sketch of this two-stage procedure appears below).

    It is really important to avoid opportunistic selection of interviewees, which runs a high risk of embedding interviewer bias in the sampling procedure. When doing qualitative interviewing, it is really tempting to fudge a little – “Oh, I will just add in so-and-so because I know this person will have something interesting to say.” Don’t do that.

    I am not familiar with the literature on Realist Evaluation, so I will read up on that before commenting further. I do think, though, that qualitative interview techniques, properly used, could be appropriate for achieving what this methodological approach seems to be promising. I do also think that it is both possible and desirable to form some kind of theoretical hypothesis BEFORE commencing the interviews, and use the interview process to test it.
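    [Editor's note] A minimal sketch of the two-stage cluster sampling the comment describes. The district names and pool are invented, and the fixed seed is simply an assumption that makes the draw reproducible and auditable.

```python
import random

def two_stage_sample(pool_by_class, n_classes, n_per_class, seed=42):
    """Two-stage cluster sample: first draw geographic classes at random,
    then draw interviewees at random within each selected class.

    pool_by_class: dict mapping class name -> list of potential interviewees.
    """
    rng = random.Random(seed)  # fixed seed keeps the selection auditable
    classes = rng.sample(sorted(pool_by_class), k=n_classes)
    return {c: rng.sample(pool_by_class[c], k=min(n_per_class, len(pool_by_class[c])))
            for c in classes}

# Hypothetical pool of interviewees grouped by district.
pool = {"District A": ["a1", "a2", "a3"],
        "District B": ["b1", "b2"],
        "District C": ["c1", "c2", "c3", "c4"]}
print(two_stage_sample(pool, n_classes=2, n_per_class=2))
```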


