Evaluating simple interventions that turn out to be not so simple

Conditional Cash Transfer (CCT) programs have been cited in the past as examples of projects that are suitable for testing via randomised controlled trials. They are relatively simple interventions that can be delivered in a standardised manner. Or so it seemed.

Last year Lant Pritchett, Salimah Samji and Jeffrey Hammer wrote an interesting (if at times difficult to read) paper, “It’s All About MeE: Using Structured Experiential Learning (‘e’) to Crawl the Design Space” (the abstract is reproduced below). In the course of that paper they argued that CCT programs are not as simple as they might seem. Looking at three real-life examples, they identified at least 10 different characteristics of CCTs that need to be specified correctly in order for them to work as expected. Some of these involve binary choices (whether to do x or y) and some involve tuning a numerical variable. Even if all 10 were simple binary choices, that means there were at least 2 to the power of 10, i.e. 1024, different possible designs, and more once the numerical variables are taken into account. They also pointed out that while changes to some of these characteristics make only a small difference to the results achieved, others, including some binary choices, can make quite major differences. In other words, overall it may well be a rugged rather than a smooth design space. The question then arises: how well are RCTs suited to exploring such spaces?
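
To get a feel for what this means in practice, here is a minimal sketch (my own illustration; the characteristic names and effect sizes are invented, not taken from the paper) of how quickly a set of binary design choices multiplies into a large design space, and how a single binary choice can dominate outcomes in a rugged one:

```python
from itertools import product

# Ten hypothetical binary design choices for a CCT programme.
# Names are invented for illustration; they are not the characteristics
# listed by Pritchett et al.
CHARACTERISTICS = [
    "condition_on_school_attendance", "condition_on_health_visits",
    "pay_to_mother", "pay_monthly", "enforce_conditions_strictly",
    "means_test_eligibility", "publicise_conditions", "use_mobile_payments",
    "graduated_benefit_levels", "include_supply_side_support",
]

# Every combination of the ten binary choices.
designs = list(product([0, 1], repeat=len(CHARACTERISTICS)))
print(len(designs))  # 1024 designs, before tuning any numerical variables


def toy_outcome(design):
    """A made-up 'rugged' outcome: most choices nudge the result a little,
    while one binary choice shifts it a lot."""
    smooth_part = 0.5 * sum(design)                 # small, additive effects
    rugged_part = 10.0 if design[4] == 1 else 0.0   # one choice dominates
    return smooth_part + rugged_part


# Flipping most bits changes the outcome gradually; flipping bit 4 jumps it.
print(toy_outcome((0,) * 10), toy_outcome((0, 0, 0, 0, 1, 0, 0, 0, 0, 0)))
```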

Today the World Bank Development Blog posted an interesting confirmation of the point made in the Pritchett et al. paper, in a blog posting titled: Defining Conditional Cash Transfer Programs: An Unconditional Mess. In effect they are pointing out that the design space is even more complicated than Pritchett et al. describe. They conclude:

So, if you’re a donor or a policymaker, it is important not to frame your question to be about the relative effectiveness of “conditional” vs. “unconditional” cash transfer programs: the line between these concepts is too blurry. It turns out that your question needs to be much more precise than that. It is better to define the feasible range of options available to you first (politically, ethically, etc.), and then go after evidence of relative effectiveness of design options along the continuum from a pure UCT to a heavy-handed CCT. Alas, that evidence is the subject of another post…

So stay tuned for their next installment. Of course you could quibble that even this conclusion is a bit optimistic, in that it talks about a continuum of design options, when in fact it is a multi-dimensional space with both smooth and rugged bits.

PS: Here is the abstract of the Pritchett paper:

“There is an inherent tension between implementing organizations—which have specific objectives and narrow missions and mandates—and executive organizations—which provide resources to multiple implementing organizations. Ministries of finance/planning/budgeting allocate across ministries and projects/programmes within ministries, development organizations allocate across sectors (and countries), foundations or philanthropies allocate across programmes/grantees. Implementing organizations typically try to do the best they can with the funds they have and attract more resources, while executive organizations have to decide what and who to fund. Monitoring and Evaluation (M&E) has always been an element of the accountability of implementing organizations to their funders. There has been a recent trend towards much greater rigor in evaluations to isolate causal impacts of projects and programmes and more ‘evidence base’ approaches to accountability and budget allocations. Here we extend the basic idea of rigorous impact evaluation—the use of a valid counter-factual to make judgments about causality—to emphasize that the techniques of impact evaluation can be directly useful to implementing organizations (as opposed to impact evaluation being seen by implementing organizations as only an external threat to their funding). We introduce structured experiential learning (which we add to M&E to get MeE) which allows implementing agencies to actively and rigorously search across alternative project designs using the monitoring data that provides real time performance information with direct feedback into the decision loops of project design and implementation. Our argument is that within-project variations in design can serve as their own counter-factual and this dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies. The right combination of M, e, and E provides the right space for innovation and organizational capability building while at the same time providing accountability and an evidence base for funding agencies.” Paper available as pdf

I especially like this point about within-project variation (which I have argued for in the past): “Our argument is that within-project variations in design can serve as their own counter-factual and this dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies.”
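
To make that idea concrete, here is a minimal sketch (my own illustration; the site names, variants and outcome values are invented, not data or code from the paper) of the kind of within-project comparison the authors have in mind: if routine monitoring data records which design variant each site implemented, the variants can be compared against each other without any external control group.

```python
import statistics

# Hypothetical monitoring records: (site, design variant implemented, outcome).
# All values are invented for illustration.
monitoring_data = [
    ("site_01", "strict_conditions", 0.62),
    ("site_02", "strict_conditions", 0.58),
    ("site_03", "soft_conditions", 0.41),
    ("site_04", "soft_conditions", 0.47),
    ("site_05", "soft_conditions", 0.44),
]

# Group the monitored outcomes by the design variant each site implemented.
by_variant = {}
for _site, variant, outcome in monitoring_data:
    by_variant.setdefault(variant, []).append(outcome)

# A crude within-project comparison: each variant's mean outcome acts as a
# comparison point for the others, with no external control group needed.
means = {variant: statistics.mean(values) for variant, values in by_variant.items()}
print(means)
print("difference:", round(means["strict_conditions"] - means["soft_conditions"], 3))
```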

 

One thought on “Evaluating simple interventions that turn out to be not so simple”

  1. Dear Rick,

    A difficult read indeed. Like you I found the paper useful in how it highlighted the significance and value of understanding intra-project differences. However, I do not agree fully with how the authors relate it to a type of counterfactual. The requirements for comparative analysis are about understanding who among the intended beneficiaries are responding to the support on offer and how this varies (by type of support, place, type of household etc.). Collectively these define the decision uncertainties that Lawrence Salmen (1989) and Dennis Casley (1987) wrote about.

    In sum, I am not sure how the “little e” will fly, or what added value it brings to whom, when the authors mistakenly understand the key difference between M&E to be located on the continuum of a results chain. All they have concluded is that good monitoring is not just about cataloguing spend, effort and throughput: it is about listening to beneficiaries’ stories about how they have responded to, and what they think about, the support on offer, and how this varies…

Comments?

