1. Punton, M., Vogel, I., & Lloyd, R. (2016, April). Reflections from a Realist Evaluation in Progress: Scaling Ladders and Stitching Theory. IDS.
Interviews as sources of knowledge
Interviews with stakeholders about whether, how and why a program works are a key resource in most REs (Punton et al). Respondents’ views are both sources of theories and sources of evidence for and against those theories, and there seems to be potential for mixing these up in a way that makes the process of theory elicitation and testing less explicit than it should be. Punton et al have partially addressed this by coding the status of views about reported outcomes as “observed”, “anticipated” or “implied”. The same approach could be taken when recording respondents’ views on the contexts and mechanisms involved.
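To show how such status coding might be made explicit during analysis, here is a minimal sketch. The three status labels come from Punton et al; the data structure, field names, and example statement are my own invention, purely for illustration.

```python
from dataclasses import dataclass

# Status labels from Punton et al; everything else here is illustrative.
STATUSES = {"observed", "anticipated", "implied"}

@dataclass
class CodedStatement:
    respondent: str   # e.g. an anonymised respondent ID
    element: str      # "context", "mechanism" or "outcome"
    text: str         # the respondent's claim, quoted or paraphrased
    status: str       # evidential status: observed / anticipated / implied

    def __post_init__(self):
        if self.status not in STATUSES:
            raise ValueError(f"unknown status: {self.status}")

# Hypothetical example, extending the same coding to a mechanism claim:
claim = CodedStatement(
    respondent="R12",
    element="mechanism",
    text="Staff felt more accountable once results were published",
    status="implied",
)
```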
Manzano makes a number of useful distinctions between RE and constructivist interview approaches. But one distinction seems unrealistic, so to speak: “…data collected through qualitative interviews are not considered constructions. Data are instead considered ‘evidence for real phenomena and processes’”. Yet respondents themselves, as shown in some quotes in the paper, will indicate that on some issues they are not sure, they have forgotten, or they are guessing. What is real here is that respondents are making their best efforts to construct some sense out of a situation. So careful coding of the status of respondents’ views is important: are they theories or observations, and if observations, what evidential status do they have?
How many people to interview
How to interview
Both papers offer plenty of useful advice on how to interview from an RE perspective, primarily with a view to eliciting and clarifying theory.
How to conceptualise CMOs
Both papers noted difficulties in operationalising the idea of CMOs, but also had useful advice in this area. Manzano broke the concept of Context down into sub-constructs, such as characteristics of the patients, staff and infrastructure in the setting she was examining. Punton et al introduced a new category of Intervention, alongside Context and Mechanism; in a development aid context this makes a lot of sense to me. Both authors used interviewing methods that avoided any reference to “CMOs” as a technical term.
Consolidating the theories
After exploring what could be an endless variety of CMOs, an RE process needs to enter a consolidation phase. Manzano points out: “In summary, this phase gives more detailed consideration to a smaller number of CMOs which belong to many families of CMOs”. Punton et al refer to a process of abstraction that leads to more general explanations “which encompass findings from across different respondents and country settings”. This process sounds very similar in principle to the process of minimization used in QCA, which takes a more algorithm-based approach. To my surprise, the Punton et al paper highlights differences between QCA and RE rather than potential synergies. A strength of their paper is that it explains this stage in more detail than Manzano’s, which is focused more specifically on interview processes.
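For readers unfamiliar with QCA, its minimization step works roughly as follows: two configurations of conditions linked to the same outcome that differ in exactly one condition are merged, and that condition is dropped as irrelevant. Here is a toy sketch of that idea (Quine-McCluskey style); the condition names and case data are invented, and real QCA software does considerably more, such as checking configurations where the outcome is absent and handling limited diversity.

```python
import itertools

def merge(a, b):
    """Merge two configurations if they differ in exactly one position."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) == 1:
        merged = list(a)
        merged[diffs[0]] = "-"   # "-" marks the condition as irrelevant
        return tuple(merged)
    return None

def minimise(configs, n_conditions):
    current = set(configs)
    for _ in range(n_conditions):        # each pass can add at most one "-"
        next_round, used = set(), set()
        for a, b in itertools.permutations(current, 2):
            m = merge(a, b)
            if m is not None and m not in (a, b):
                next_round.add(m)
                used.update({a, b})
        if not next_round:
            break
        current = next_round | (current - used)
    return current

# Invented conditions per case: (training, supervision, incentives),
# all cases having the outcome present.
cases = [("1", "1", "0"), ("1", "1", "1"), ("0", "1", "1")]
print(minimise(cases, 3))
# -> {('1', '1', '-'), ('-', '1', '1')}: supervision figures in every
#    solution term, a more general explanation spanning the cases.
```

The parallel with RE consolidation is that both move from many case-specific configurations to a smaller set of more abstract explanations; the difference is that QCA makes the reduction rule explicit and mechanical.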
Testing the theories
The Punton et al paper does not go into this territory, because of the early stage of the work it describes. Manzano makes more reference to this process, but mainly in the context of interviews that elicit people’s theories. This is the territory where more light needs to be shone in future, hopefully by follow-up papers from Punton et al. My continuing impression is that theory elicitation and testing are so bound up together that the process of testing is effectively not transparent, and thus difficult to verify or replicate. But readers could point me to other papers where this view could be corrected… :-)
I have lots of thoughts to offer on this topic, but for now I will restrict myself to the following, regarding how many people to interview. My view is that when using qualitative interviews as the instrument for data collection, it is nevertheless important to use statistically robust methods for determining how many people to interview, and whom. Begin by identifying the complete pool of potential interviewees. If logistics permit, draw a random sample from the pool large enough to support statistical inference. If individuals in the pool are widely separated geographically, first group them into geographic classes, then select classes at random, and then select interviewees at random from each selected class.
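As a concrete illustration of that two-stage selection, here is a minimal sketch. The pool, class labels, and sample sizes are all invented; in practice the numbers should come from a proper sample-size calculation.

```python
import random

random.seed(42)  # a fixed seed makes the draw reproducible and auditable

# Invented pool of potential interviewees, grouped by geographic class.
pool = {
    "north": ["N01", "N02", "N03", "N04", "N05"],
    "south": ["S01", "S02", "S03", "S04"],
    "east":  ["E01", "E02", "E03"],
    "west":  ["W01", "W02", "W03", "W04"],
}

n_classes = 2   # geographic classes to visit
per_class = 2   # interviewees per selected class

# Stage 1: select classes at random; Stage 2: select people within them.
selected_classes = random.sample(sorted(pool), n_classes)
sample = {cls: random.sample(pool[cls], per_class) for cls in selected_classes}
print(sample)   # two classes, two randomly chosen interviewees in each
```

The point of doing it this way, rather than hand-picking, is that every selection step is random and documented, which removes the opportunity for the interviewer bias discussed next.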
It is really important to avoid opportunistic selection of interviewees, which runs a high risk of embedding interviewer bias in the sampling procedure. When doing qualitative interviewing, it is really tempting to fudge a little – “Oh, I will just add in so-and-so because I know this person will have something interesting to say.” Don’t do that.
I am not familiar with the literature on Realist Evaluation, so I will read up on that before commenting further. I do think, though, that qualitative interview techniques, properly used, could be appropriate for achieving what this methodological approach seems to promise. I also think it is both possible and desirable to form some kind of theoretical hypothesis BEFORE commencing the interviews, and to use the interview process to test it.