Learning about Measuring Advocacy and Policy Change: Are Baselines always Feasible and Desirable?

by Chris Barnett. IDS Practice Paper in Brief, July 2013. Available as a pdf.

Summary: This paper captures some recent challenges that emerged from establishing a baseline for an empowerment and accountability fund. It is widely accepted that producing a baseline is logical and largely uncontested, and the recent increase in investment in baselines is largely to be welcomed. This paper is therefore not a challenge to convention, but rather a note of caution: where adaptive programming is necessary and there are multiple pathways to success, the ‘baseline-endline’ survey tradition has its limitations. This is particularly so for interventions that seek to alter complex political-economic dynamics, such as those between citizens and those in power.

Concluding paragraph: It is not that baselines are impossible, but that where programmes are flexible, demand-led, and working on areas of change that cannot be fully specified from the outset, process tracking and ex post assessments may be necessary to capture the full extent of the results and impacts. Developing greater robustness around methodologies to evaluate the work of civil society – particularly E&A initiatives that seek to advocate and influence policy change – should therefore not be limited to simple baseline (plus end-line) survey traditions.

Rick Davies’ comment: This is a welcome discussion of something that can too easily be taken for granted as a “good thing”. Years ago I was reviewing a maternal and child health project being implemented in multiple districts in Indonesia. There was baseline data for the year before the project started, and data on the same key indicators for the following four years when the project intervention took place. The problem was that the values of the indicators during the project period varied substantially from year to year, raising serious doubts in my mind about how reliable the baseline was as a measure of pre-intervention status. I suspect the pre-intervention values also varied substantially from year to year. So to be useful at all, a baseline in these circumstances would probably be better expressed as a moving average of the previous x years – which would only be doable if the necessary data could be found!
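To illustrate the moving-average idea, here is a minimal sketch (the indicator values are made up and the helper function is hypothetical, purely for illustration): averaging several pre-intervention years gives a steadier reference point than a single ‘year zero’ reading when values bounce around from year to year.

```python
# Minimal sketch of a moving-average baseline.
# The figures and the function name are illustrative only, not from the paper.

def moving_average_baseline(values, window):
    """Average of the last `window` pre-intervention annual values."""
    if len(values) < window:
        raise ValueError("not enough pre-intervention years for this window")
    recent = values[-window:]
    return sum(recent) / len(recent)

# Hypothetical annual values of one indicator for the five years before the project
pre_intervention = [42.0, 55.0, 38.0, 61.0, 47.0]

print(moving_average_baseline(pre_intervention, window=3))  # 48.67 – smooths year-to-year variation
print(pre_intervention[-1])                                 # 47.0 – a single 'year zero' reading
```

Of course, as noted above, this only works if indicator data for several pre-intervention years can actually be found.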

Reading Chris Barnett’s paper, I also recognised (in hindsight) another problem. His Assumption 1 – that the baseline is ‘year zero’ – probably did not hold in a number of districts, where the same agency had already been working beforehand (as he suggests is often the case).

Comments?

