LineUp: Visual Analysis of Multi-Attribute Rankings

Posted on 3 February, 2014 – 11:54 AM

Gratzl, S., A. Lex, N. Gehlenborg, H. Pfister, and M. Streit. 2013. “LineUp: Visual Analysis of Multi-Attribute Rankings.” IEEE Transactions on Visualization and Computer Graphics 19 (12): 2277–86. doi:10.1109/TVCG.2013.173.

“Abstract—Rankings are a popular and universal approach to structuring otherwise unorganized collections of items by computing a rank for each item based on the value of one or more of its attributes. This allows us, for example, to prioritize tasks or to evaluate the performance of products relative to each other. While the visualization of a ranking itself is straightforward, its interpretation is not, because the rank of an item represents only a summary of a potentially complicated relationship between its attributes and those of the other items. It is also common that alternative rankings exist which need to be compared and analyzed to gain insight into how multiple heterogeneous attributes affect the rankings. Advanced visual exploration tools are needed to make this process efficient. In this paper we present a comprehensive analysis of requirements for the visualization of multi-attribute rankings. Based on these considerations, we propose LineUp – a novel and scalable visualization technique that uses bar charts. This interactive technique supports the ranking of items based on multiple heterogeneous attributes with different scales and semantics. It enables users to interactively combine attributes and flexibly refine parameters to explore the effect of changes in the attribute combination. This process can be employed to derive actionable insights as to which attributes of an item need to be modified in order for its rank to change. Additionally, through integration of slope graphs, LineUp can also be used to compare multiple alternative rankings on the same set of items, for example, over time or across different attribute combinations. We evaluate the effectiveness of the proposed multi-attribute visualization technique in a qualitative study. The study shows that users are able to successfully solve complex ranking tasks in a short period of time.”

“In this paper we propose a new technique that addresses the limitations of existing methods and is motivated by a comprehensive analysis of requirements of multi-attribute rankings considering various domains, which is the first contribution of this paper. Based on this analysis, we present our second contribution, the design and implementation of LineUp, a visual analysis technique for creating, refining, and exploring rankings based on complex combinations of attributes. We demonstrate the application of LineUp in two use cases in which we explore and analyze university rankings and nutrition data. We evaluate LineUp in a qualitative study that demonstrates the utility of our approach. The evaluation shows that users are able to solve complex ranking tasks in a short period of time.”

Rick Davies comment: I have been a long-time advocate of the usefulness of ranking measures in evaluation, because they can combine subjective judgements with numerical values. This tool is focused on ways of visualising and manipulating existing data rather than on the elicitation of the ranking data (a separate and important issue of its own). It includes a lot of options for weighting different attributes to produce overall ranking scores.
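The underlying scoring step is simple enough to sketch. Below is a minimal, illustrative example of the general weighted-sum approach to multi-attribute ranking that LineUp supports interactively; it is not LineUp's own code, and the items, attributes and weights are invented.

```python
# Minimal sketch of a weighted multi-attribute ranking (illustrative only,
# not LineUp's implementation). Each attribute is min-max normalised so that
# different scales become comparable, then combined using user-chosen weights.

items = {
    # hypothetical items with two attributes on very different scales
    "Uni A": {"citations": 1200, "staff_student_ratio": 0.08},
    "Uni B": {"citations": 300,  "staff_student_ratio": 0.15},
    "Uni C": {"citations": 800,  "staff_student_ratio": 0.11},
}
weights = {"citations": 0.6, "staff_student_ratio": 0.4}

def normalised(attribute):
    """Min-max normalise one attribute across all items to the range [0, 1]."""
    values = [attrs[attribute] for attrs in items.values()]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return {name: (attrs[attribute] - lo) / span for name, attrs in items.items()}

norm = {a: normalised(a) for a in weights}
scores = {name: sum(w * norm[a][name] for a, w in weights.items()) for name in items}

# Rank items by combined score; changing the weights re-orders the ranking,
# which is the interactive exploration LineUp is designed to make visible.
for rank, (name, score) in enumerate(
        sorted(scores.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(rank, name, round(score, 3))
```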

Free open source software, instructions, example data sets, introductory videos and more are available here.


Qualitative Comparative Analysis (QCA): An application to compare national REDD+ policy processes

Posted on 31 January, 2014 – 11:35 AM

 

Sehring, Jenniver, Kaisa Korhonen-Kurki, and Maria Brockhaus. 2013. “Qualitative Comparative Analysis (QCA): An Application to Compare National REDD+ Policy Processes”. CIFOR. http://www.cifor.org/publications/pdf_files/WPapers/WP121Sehring.pdf.

“This working paper gives an overview of Qualitative Comparative Analysis (QCA), a method that enables systematic cross-case comparison of an intermediate number of case studies. It presents an overview of QCA and detailed descriptions of different versions of the method. Based on the experience applying QCA to CIFOR’s Global Comparative Study on REDD+, the paper shows how QCA can help produce parsimonious and stringent research results from a multitude of in-depth case studies developed by numerous researchers. QCA can be used as a structuring tool that allows researchers to share understanding and produce coherent data, as well as a tool for making inferences usable for policy advice.

REDD+ is still a young policy domain, and it is a very dynamic one. Currently, the benefits of QCA result mainly from the fact that it helps researchers to organize the evidence generated. However, with further and more differentiated case knowledge, and more countries achieving desired outcomes, QCA has the potential to deliver robust analysis that allows the provision of information, guidance and recommendations to ensure carbon-effective, cost-efficient and equitable REDD+ policy design and implementation.”

Rick Davies comment: I like this paper because it provides a good how-to-do-it overview of different forms of QCA, illustrated in a step-by-step fashion with one practical case example. It may not be quite enough to enable one to do a QCA from the very start, but it provides a very good starting point.
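For readers unfamiliar with the mechanics, the first step of a crisp-set QCA, grouping dichotomously coded cases into a truth table and checking how consistently each configuration is linked to the outcome, can be sketched in a few lines. This is a toy illustration only; the conditions, cases and codings below are invented and are not taken from the CIFOR study.

```python
# Toy sketch of the truth-table step in crisp-set QCA (illustrative only;
# the conditions, cases and codings are invented, not from the CIFOR study).
from collections import defaultdict

# Each case is coded 1/0 on a set of conditions plus the outcome of interest.
cases = {
    "Country1": {"OWNERSHIP": 1, "PRESSURE": 1, "COALITION": 0, "OUTCOME": 1},
    "Country2": {"OWNERSHIP": 1, "PRESSURE": 1, "COALITION": 1, "OUTCOME": 1},
    "Country3": {"OWNERSHIP": 0, "PRESSURE": 1, "COALITION": 0, "OUTCOME": 0},
    "Country4": {"OWNERSHIP": 0, "PRESSURE": 0, "COALITION": 1, "OUTCOME": 0},
    "Country5": {"OWNERSHIP": 1, "PRESSURE": 1, "COALITION": 0, "OUTCOME": 1},
}
conditions = ["OWNERSHIP", "PRESSURE", "COALITION"]

# Group cases by their configuration of conditions (one truth-table row each).
rows = defaultdict(list)
for name, coding in cases.items():
    config = tuple(coding[c] for c in conditions)
    rows[config].append(coding["OUTCOME"])

# Print each configuration with its number of cases and its consistency,
# i.e. the share of cases in that row that show the outcome.
print(conditions, "n", "consistency")
for config, outcomes in sorted(rows.items(), reverse=True):
    print(config, len(outcomes), round(sum(outcomes) / len(outcomes), 2))
```

The subsequent Boolean minimisation of the consistent rows is what dedicated QCA software does; the point here is only to show how case-level codings become comparable configurations.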


The Science of Evaluation: A Realist Manifesto

Posted on 30 January, 2014 – 11:29 PM

Pawson, Ray. 2013. The Science of Evaluation: A Realist Manifesto. UK: Sage Publications. http://www.uk.sagepub.com

Chapter 1 is available as a pdf. Hopefully other chapters will also become available this way, because this 240-page book is expensive.

Contents

Preface: The Armchair Methodologist and the Jobbing Researcher
PART ONE: PRECURSORS AND PRINCIPLES
Precursors: From the Library of Ray Pawson
First Principles: A Realist Diagnostic Workshop
PART TWO: THE CHALLENGE OF COMPLEXITY – DROWNING OR WAVING?
A Complexity Checklist
Contested Complexity
Informed Guesswork: The Realist Response to Complexity
PART THREE: TOWARDS EVALUATION SCIENCE
Invisible Mechanisms I: The Long Road to Behavioural Change
Invisible Mechanisms II: Clinical Interventions as Social Interventions
Synthesis as Science: The Bumpy Road to Legislative Change
Conclusion: A Mutually Monitoring, Disputatious Community of Truth Seekers

Reviews


Twelve reasons why climate change adaptation M&E is challenging

Posted on 21 January, 2014 – 10:34 AM

Bours, Dennis, Colleen McGinn, and Patrick Pringle. 2014. “Guidance Note 1: Twelve Reasons Why Climate Change Adaptation M&E Is Challenging.” SeaChange & UKCIP. Available as a pdf

“Introduction: Climate change adaptation (CCA) refers to how people and systems adjust to the actual or expected effects of climate change. It is often presented as a cyclical process developed in response to climate change impacts or their social, political, and economic consequences. There has been a recent upsurge of interest in CCA among international development agencies resulting in stand-alone adaptation programs as well as efforts to mainstream CCA into existing development strategies. The scaling up of adaptation efforts and the iterative nature of the adaptation process means that Monitoring and Evaluation (M&E) will play a critical role in informing and improving adaptation policies and activities. Although many CCA programmes may look similar to other development interventions, they do have specific and distinct characteristics that set them apart. These stem from the complex nature of adaptation itself. CCA is a dynamic process that cuts across scales and sectors of intervention, and extends long past any normal project cycle. It is also inherently uncertain: we cannot be entirely sure about the course of climate change consequences, as these will be shaped by societal decisions taken in the future. How then should we define, measure, and assess the achievements of an adaptation programme? The complexities inherent in climate adaptation programming call for a nuanced approach to M&E research. This is not, however, always being realised in practice. CCA poses a range of thorny challenges for evaluators. In this Guidance Note, we identify twelve challenges that make M&E of CCA programmes difficult, and highlight strategies to address each. While most are not unique to CCA, together they present a distinctive package of dilemmas that need to be addressed.”

See also: Bours, Dennis, Colleen McGinn, and Patrick Pringle. 2013. Monitoring and evaluation for climate change adaptation: A synthesis of tools, frameworks and approaches, UKCIP & SeaChange, pdf version (3.4 MB)

See also: Dennis Bours, Colleen McGinn, Patrick Pringle, 2014, “Guidance Note 2: Selecting indicators for climate change adaptation programming” SEA Change CoP, UKCIP

“This second Guidance Note follows on from that discussion with a narrower question: how does one go about choosing appropriate indicators? We begin with a brief review of approaches to CCA programme design, monitoring, and evaluation (DME). We then go on to discuss how to identify appropriate indicators. We demonstrate that CCA does not necessarily call for a separate set of indicators; rather, the key is to select a medley that appropriately frames progress towards adaptation and resilience. To this end, we highlight the importance of process indicators, and conclude with remarks about how to use indicators thoughtfully and well.”


Monitoring and evaluating civil society partnerships

Posted on 9 January, 2014 – 5:24 PM

A GSDRC Help Desk response

Request: Please identify approaches and methods used by civil society organisations (international NGOs and others) to monitor and evaluate the quality of their relationships with partner (including southern) NGOs. Please also provide a short comparative analysis.

Helpdesk response

Key findings: This report lists and describes tools used by NGOs to monitor the quality of their relationships with partner organisations. It begins with a brief analysis of the types of tools and their approaches, then describes each tool. This paper focuses on tools which monitor the partnership relationship itself, rather than the impact or outcomes of the partnership. While there is substantial general literature on partnerships, there is less literature on this particular aspect.

Within the development literature, ‘partnership’ is most often used to refer to international or high-income country NGOs partnering with low-income country NGOs, which may be grassroots or small-scale. Much of a ‘north-south’ partnership arrangement centres around funding, meaning accountability arrangements are often reporting and audit requirements (Brehm, 2001). As a result, much of the literature and analysis is heavily biased towards funding and financial accountability. There is a commonly noted power imbalance in the literature, with northern partners controlling the relationship and requiring southern partners to report to them on use of funds. Most partnerships are weak on ensuring Northern accountability to Southern organisations (Brehm, 2001). Most monitoring tools are aimed at bilateral partnerships.

The tools explored in the report are those which evaluate the nature of the partnership, rather than the broader issue of partnership impact. The ‘quality’ of relationships is best described by BOND, in which the highest quality of partnership is characterised by joint working, adequate time and resources allocated specifically to partnership working, and improved overall effectiveness. Most of the tools use qualitative, perception-based methods including interviewing staff from both partner organisations and discussing relevant findings. There are not many specific tools available, as most organisations rely on generic internal feedback and consultation sessions, rather than comprehensive monitoring and evaluation of relationships. As a result, this report presents only six tools, as these were the ones most referred to by experts.

Full response: http://www.gsdrc.org/docs/open/HDQ1024.pdf


DCED Global Seminar on Results Measurement 24-26 March 2014, Bangkok

Posted on 6 January, 2014 – 11:30 PM

Full text available here: http://www.enterprise-development.org/page/seminar2014

“Following popular demand, the DCED is organising the second Global Seminar on results measurement in the field of private sector development (PSD), 24-26 March 2014 in Bangkok, Thailand. The Seminar is being organised in cooperation with the ILO and with financial support from the Swiss State Secretariat for Economic Affairs (SECO). It will have a similar format to the DCED Global Seminar in 2012, which was attended by 100 participants from 54 different organisations, field programmes and governments.

Since 2012, programmes and agencies have been adopting the DCED Standard for results measurement in increasing numbers; recently, several have published the reports of their DCED audit. This Seminar will explore what is currently known, and what we need to know; specifically, the 2014 Seminar is likely to be structured as follows:

  • An introduction to the DCED, its Results Measurement Working Group, the DCED Standard for results measurement and the Standard audit system
  • Insights from 10 programmes experienced with the Standard, based in Bangladesh, Cambodia, Fiji, Georgia, Kenya, Nepal, Nigeria and elsewhere (further details to come)
  • Perspectives from development agencies on results measurement
  • Cross cutting issues, such as the interface between the Standard and evaluation, measuring systemic change, and using results in decision-making
  • A review of the next steps in learning, guidance and experience around the Standard
  • Further opportunities for participants to meet each other, learn about each other’s programmes and make contacts for later follow-up

You are invited to join the Seminar as a participant. Download the registration form here, and send to Admin@Enterprise-Development.org. There is a fee of $600 for those accepted for participation, and all participants must pay their own travel, accommodation and insurance costs. Early registration is advised.”



The Availability of Research Data Declines Rapidly with Article Age

Posted on 22 December, 2013 – 2:16 PM

Summarised on SciDevNet, as “Most research data lost as scientists switch storage tech” from this source:

Current Biology, 19 December 2013
doi:10.1016/j.cub.2013.11.014


Highlights

  • We examined the availability of data from 516 studies between 2 and 22 years old
  • The odds of a data set being reported as extant fell by 17% per year
  • Broken e-mails and obsolete storage devices were the main obstacles to data sharing
  • Policies mandating data archiving at publication are clearly needed

Summary

“Policies ensuring that research data are available on public archives are increasingly being implemented at the government [1], funding agency [2,3,4], and journal [5,6] level. These policies are predicated on the idea that authors are poor stewards of their data, particularly over the long term [7], and indeed many studies have found that authors are often unable or unwilling to share their data [8,9,10,11]. However, there are no systematic estimates of how the availability of research data changes with time since publication. We therefore requested data sets from a relatively homogenous set of 516 articles published between 2 and 22 years ago, and found that availability of the data was strongly affected by article age. For papers where the authors gave the status of their data, the odds of a data set being extant fell by 17% per year. In addition, the odds that we could find a working e-mail address for the first, last, or corresponding author fell by 7% per year. Our results reinforce the notion that, in the long term, research data cannot be reliably preserved by individual researchers, and further demonstrate the urgent need for policies mandating data sharing via public archives.”
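To make the headline figure concrete: odds falling by 17% per year means the odds are multiplied by 0.83 for every additional year since publication. A quick back-of-the-envelope sketch follows; the starting odds of 1.0 (a 50% chance) are an assumption made purely for illustration, since the paper reports the rate of decline rather than a baseline level.

```python
# Back-of-the-envelope illustration of a 17% annual decline in the odds of a
# data set being extant. The starting odds of 1.0 (a 50% chance) are assumed
# here for illustration only; the paper reports the rate of decline, not a baseline.
start_odds = 1.0
for years in (0, 5, 10, 20):
    odds = start_odds * 0.83 ** years
    probability = odds / (1 + odds)
    print(f"{years:>2} years after publication: odds {odds:.2f}, probability {probability:.0%}")
```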

Rick Davies comment: I suspect the situation with data generated by development aid projects (and their evaluations) is much, much worse. I have been unable to get access to data generated within the last 12 months by one DFID co-funded project in Africa. I am now trying to see if data used in a recent analysis of the (DFID funded) Chars Livelihoods Programme is available.

I am also making my own episodic attempts to make data sets publicly available that have been generated by my own work in the past. One is a large set of household survey data from Mogadishu in 1986, and another is household survey data from Vietnam generated in 1996 (baseline) and 2006 (follow up). One of the challenges is finding a place on the internet that specialises in making such data available (especially development project data). Any ideas?

PS 2014 01 07: Missing raw data is not the only problem. Lack of contact information about the evaluators/researchers who were associated with the data collection is another. In their exemplary blog about their use of QCA, Raab and Stuppert comment on their search for evaluation reports:

“Most of the 74 evaluation reports in our first coding round do not display the evaluator’s or the commissioner’s contact details. In some cases, the evaluators remain anonymous; in other cases, the only e-mail address available in the report is a generic info@xyz.org. This has surprised us – in our own evaluation practice, we always include our e-mail addresses so that our counterparts can get in touch with us in case, say, they wish to work with us again”

PS 2014 02 01: Here is another interesting article about missing data and missing policies about making data available: Troves of Personal Data, Forbidden to Researchers (NYT, May 21, 2012)

“At leading social science journals, there are few clear guidelines on data sharing. “The American Journal of Sociology does not at present have a formal position on proprietary data,” its editor, Andrew Abbott, a sociologist at the University of Chicago, wrote in an e-mail. “Nor does it at present have formal policies enforcing the sharing of data.”

 The problem is not limited to the social sciences. A recent review found that 44 of 50 leading scientific journals instructed their authors on sharing data but that fewer than 30 percent of the papers they published fully adhered to the instructions. A 2008 review of sharing requirements for genetics data found that 40 of 70 journals surveyed had policies, and that 17 of those were “weak.””


Aid on the Edge of Chaos…

Posted on 12 December, 2013 – 1:27 PM

… Rethinking International Cooperation in a Complex World

by Ben Ramalingam, Oxford University Press, 2013. Viewable in part via Google Books (and fully searchable with key words)

Publisher’s summary:

A groundbreaking book on the state of the aid business, bridging policy, practice and science. Gets inside the black box of aid to highlight critical flaws in the ways agencies learn, strategise, organise, and evaluate themselves. Shows how ideas from the cutting edge of complex systems science have been used to address social, economic and political issues, and how they can contribute to the transformation of aid. An open, accessible style with cartoons by a leading illustrator. Draws on workshops, conferences, over five years of research, and hundreds of interviews.

Rick Davies comments (but not a review): Where to start…? This is a big book, in size and ambition. But also in the breadth of the author’s knowledge and contacts in the field. There have been many reviews of the book, so I will simply link to some here, to start with: Duncan Green (Oxfam), Tom Kirk (LSE), Nick Perkins (AllAfrica), Paul van Gardingen and Andrée Carter (SciDevnet), Melissa Leach (Steps Centre), Owen Barder, Philip Ball, IRIN, New Scientist and Lucy Noonan (Clear Horizon). See also Ben’s own Aid on the Edge of Chaos blog.

Evaluation issues are discussed in two sections: Watching the Watchman (pages 101-122), and Performance Dynamics, Dynamic Performance (pages 351-356). That is about 7% of the book as a whole, which is a bigger percentage than most development projects spend on evaluation! Of course there is a lot more to Ben’s book that relates to evaluation outside of these sections.

One view of the idea of systems being on the edge of chaos is that it is about organisations (biological and social) evolving to a point where they find a viable balance between sensitivity to new information and retention of past information (as embedded in existing structures and processes), i.e. learning strategies. That said, what strikes me the most about aid organisations, as a sector, is how stable they are. Perhaps way too stable. Mortality rates are very low compared to private sector enterprises. Does this suggest that, as a set, aid organisations are not as effective at learning as they could be?

I also wondered to what extent the idea of being on the edge of chaos (i.e. a certain level of complexity) could be operationalised/measured, and thus developed into something that was more than a metaphor. However, Ben and other authors (Melanie Mitchell) have highlighted the limitations of various attempts to measure complexity. In fact the very attempt to do so, at least as a single (i.e. one-dimensional) measure, seems somewhat ironic. But perhaps degrees of complexity could be mapped in a space defined by multiple measures? For example: (a) diversity of agents, (b) density of connections between them, (c) the degrees of freedom or agency each agent has. …a speculation.
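Purely as a thought experiment, here is how those three measures could be computed from a network representation of an aid system. Everything below is invented for illustration (the toy graph, the agent types and the ‘agency’ scores); it is a speculative operationalisation, not anything proposed in the book.

```python
# Speculative sketch: placing a system in a "complexity space" defined by
# (a) diversity of agent types, (b) density of connections, (c) mean agency.
# The toy graph, agent types and agency scores are invented for illustration.
import math
import networkx as nx

G = nx.Graph()
G.add_nodes_from([
    ("NGO-1",  {"type": "NGO",        "agency": 0.7}),
    ("NGO-2",  {"type": "NGO",        "agency": 0.6}),
    ("Donor",  {"type": "donor",      "agency": 0.9}),
    ("Gov",    {"type": "government", "agency": 0.8}),
    ("Comm-1", {"type": "community",  "agency": 0.3}),
])
G.add_edges_from([("Donor", "NGO-1"), ("Donor", "NGO-2"),
                  ("NGO-1", "Comm-1"), ("Gov", "NGO-1")])

types = [d["type"] for _, d in G.nodes(data=True)]
shares = [types.count(t) / len(types) for t in set(types)]
diversity = -sum(p * math.log(p) for p in shares)   # Shannon diversity of agent types
density = nx.density(G)                             # share of possible links actually present
agency = sum(d["agency"] for _, d in G.nodes(data=True)) / G.number_of_nodes()

print(f"diversity={diversity:.2f}, density={density:.2f}, mean agency={agency:.2f}")
```

Whether positions in such a space say anything useful about being ‘on the edge of chaos’ is exactly the open question raised above.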

Ben has been kind enough to quote some of my views on complexity issues, including those on the representation of complexity (page 351). The limitations of linear Theories of Change (ToC) are discussed at various points in the book, and alternatives are explored, including network models and agent based simulation models. While I am sympathetic to their wider use, I do continue to be surprised at how little complexity aid agency staff can actually cope with when presented with a ToC that has to be a working part of a Monitoring and Evaluation Framework for a project. And I have a background concern that the whole enthusiasm for ToCs these days still betrays a deep desire for plan-ability that in reality is at odds with the real world within which aid agencies work.

In his chapter on Dynamic Change Ben describes an initiative called Artificial Intelligence for Development and the attempt to use quantitative approaches and “big data” sources to understand more about the dynamics of development (e.g. market movements, migration, and more) as they occur, or at least shortly afterwards. Mobile phone usage is one of the data sets that are becoming more available in many locations around the world. I think this is fascinating stuff, but it is in stark contrast with my experience of the average development project, where there is little in the way of readily available big data that is or could be used for project management and wider lesson learning. Where there is survey data it is rarely publicly available, although the open data and transparency movements are starting to have some effect.

On the more positive side, where data is available, there are new “big data” approaches that agencies can use and adapt. There is now an array of data mining methods that can be used to inductively find patterns (clusters and associations) in data sets, some of which are free and open source (see RapidMiner). While these searches can be informed by prior theories, they are not necessarily locked in by them – they are open to discovery of unexpected patterns and surprise. Whereas the average ToC is a relatively small and linear construct, data mining software can quickly and systematically explore relationships within much larger sets of attributes/measures describing the interventions, their targets and their wider context.
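As a small, concrete illustration of the kind of inductive pattern-finding meant here, a few lines of scikit-learn (a free, open-source toolkit; RapidMiner offers the same via a graphical interface) can cluster projects on a handful of attributes. The project data below is invented for illustration.

```python
# Small illustration of inductive pattern-finding on project attributes using
# free, open-source tools (scikit-learn here; RapidMiner does the same via its
# GUI). The project attribute values below are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: budget (USD millions), duration (years), share of budget spent on M&E
projects = np.array([
    [1.2, 3, 0.02],
    [0.8, 2, 0.03],
    [5.0, 5, 0.07],
    [4.5, 4, 0.08],
    [1.0, 3, 0.01],
    [6.2, 6, 0.06],
])

# Standardise so no single attribute dominates the distance metric,
# then look for two clusters of broadly similar projects.
X = StandardScaler().fit_transform(projects)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster membership of each project; inspect clusters for shared patterns
```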

Some of the complexity science concepts described in the book provide limited added value, in my view. For example, the idea of a fitness landscape, which comes from evolutionary theory. Some of its proposed use, as in chapter 17, is almost a self-caricature: “Implementers first need to establish the overall space of possibilities for a given project, programme or policy, then ‘dynamically crawl the design space’ by simultaneously trying out design alternatives and then adapting the project sequentially based on the results” (Pritchett et al.). On the other hand, there were some ideas I would definitely like to follow up on, most notably agent-based modelling, especially participatory modelling (pages 175-80, 283-95). Simulations are evaluable in two ways: by analysis of their fit with historic data and by the accuracy of their predictions of future data points. But they do require data, and that perhaps is an issue that could be explored a bit more. When facing uncertain futures, and when using a portfolio of strategies to cope with that uncertainty, a lot more data is needed than when pursuing a single intervention in a more stable and predictable environment. [end of ramble :-) ]

 


Multiple Pathways to Policy Impact: Testing an Uptake Theory with QCA

Posted on 10 December, 2013 – 3:20 PM

by Barbara Befani, IDS Centre for Development Impact, Practice Paper No. 05, October 2013. Available as pdf

Abstract: Policy impact is a complex process influenced by multiple factors. An intermediate step in this process is policy uptake, or the adoption of measures by policymakers that reflect research findings and recommendations. The path to policy uptake often involves activism, lobbying and advocacy work by civil society organisations, so an earlier intermediate step could be termed ‘advocacy uptake’; which would be the use of research findings and recommendations by Civil Society Organisations (CSOs) in their efforts to influence government policy. This CDI Practice Paper by Barbara Befani proposes a ‘broad-brush’ theory of policy uptake (more precisely of ‘advocacy uptake’) and then tests it using two methods: (1) a type of statistical analysis and (2) a variant of Qualitative Comparative Analysis (QCA). The pros and cons of both families of methods are discussed in this paper, which shows that QCA offers the power of generalisation whilst also capturing some of the complexity of middle-range explanation. A limited number of pathways to uptake are identified, which are at the same time moderately sophisticated (considering combinations of causal factors rather than additions) and cover a medium number of cases (40), allowing a moderate degree of generalisation. – See more at: http://www.ids.ac.uk/publication/multiple-pathways-to-policy-impact-testing-an-uptake-theory-with-qca

Rick Davies comment: What I like about this paper is the way it shows, quite simply, how measurements of the contribution of different possible causal conditions in terms of averages, and correlations between these, can be uninformative and even misleading. In contrast, a QCA analysis of the different configurations of causal conditions can be much more enlightening and easier to relate to what are often complex realities on the ground.

I have taken the liberty of re-analysing the fictional data set provided in the annex, using Decision Tree software (within RapidMiner). This is a means of triangulating the results of QCA analyses. It uses the same kind of data set and produces results which are comparable in structure, but the method of analysis is different. Shown below is a Decision Tree representing seven configurations of conditions that can be found in Befani’s data set of 40 cases. It makes use of four of the five conditions described in the paper. These are shown as nodes in the tree diagram.

[Decision Tree diagram: Befani 2013 data set. Click on the image to enlarge for a clearer view.]

The 0 and 1 values on the various branches indicate whether the condition immediately above is present or not. The first configuration on the left says that if there is no ACCESS then research UPTAKE (12 cases at the red leaf) does not take place. This is a statement of a sufficient cause. The branch on the right represents a configuration of three conditions, which says that where ACCESS to research is present, and recommendations are consistent with measures previously (PREV) recommended by the organisation, and where the research findings are disseminated within the organisation by a local ‘champion’ (CHAMP), then research UPTAKE (8 cases at the blue leaf) does take place.

Overall the findings shown in the Decision Tree model are consistent with the QCA analyses in terms of the number of configurations (seven) and the configurations that are associated with the largest number of cases (i.e. their coverage). However, there were small differences in the descriptions of two sets of cases where there was no uptake (red leaves). In the third branch (configuration) from the left above, the QCA analysis indicated that it was the presence of INTERNAL CONFLICT (different approaches to the same policy problem within the organisation) that played a role, rather than the presence of a (perhaps ineffectual) CHAMPION. In the third branch (configuration) from the right, the QCA analysis proposed a fourth necessary condition (QUALITY), in addition to the three shown in the Decision Tree. Here the Decision Tree seems the more parsimonious solution. However, in both sets of cases where differences in findings have occurred it would make most sense to then proceed with within-case investigations of the causal processes at work.

PS: Here is the dataset, in case anyone wants to play with it.
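For anyone who does want to play with it, the same kind of Decision Tree triangulation can be reproduced with free tools outside RapidMiner. Below is a minimal sketch using scikit-learn; the rows are invented placeholders in the same 1/0 format as Befani’s annex table, so substitute the real data set to reproduce the analysis above.

```python
# Minimal sketch of triangulating a QCA analysis with a Decision Tree, using
# scikit-learn rather than RapidMiner. The rows below are invented placeholders
# in the same 1/0 format as Befani's annex table; substitute the real data set.
from sklearn.tree import DecisionTreeClassifier, export_text

conditions = ["ACCESS", "PREV", "CHAMP", "CONFLICT", "QUALITY"]
X = [  # one row per case, 1 = condition present, 0 = absent (placeholder values)
    [0, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
]
y = [0, 1, 0, 0, 0]  # UPTAKE outcome for each case (placeholder values)

tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=1).fit(X, y)
# Each root-to-leaf path is a configuration of conditions, directly comparable
# with the configurations produced by a QCA minimisation.
print(export_text(tree, feature_names=conditions))
```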


Learning about Measuring Advocacy and Policy Change: Are Baselines always Feasible and Desirable?

Posted on 6 December, 2013 – 2:03 PM

by Chris Barnett, an IDS Practice Paper in Brief, July 2013. Available as pdf

Summary: This paper captures some recent challenges that emerged from establishing a baseline for an empowerment and accountability fund. It is widely accepted that producing a baseline is logical and largely uncontested – with the recent increased investment in baselines being largely something to be welcomed. This paper is therefore not a challenge to convention, but rather a note of caution: where adaptive programming is necessary, and there are multiple pathways to success, then the ‘baseline-endline’ survey tradition has its limitations. This is particularly so for interventions which seek to alter complex political-economic dynamics, such as between citizens and those in power.

Concluding paragraph: It is not that baselines are impossible, but that in such cases process tracking and ex post assessments may be necessary to capture the full extent of the results and impacts where programmes are flexible, demand-led, and working on change areas that cannot be fully specified from the outset. Developing greater robustness around methodologies to evaluate the work of civil society – particularly E&A initiatives that seek to advocate and influence policy change – should therefore not be limited to simple baseline (plus end-line) survey traditions.

Rick Davies’ comment: This is a welcome discussion of something that can too easily be taken for granted as a “good thing”. Years ago I was reviewing a maternal and child health project being implemented in multiple districts in Indonesia. There was baseline data for the year before the project started, and data on the same key indicators for the following four years when the project intervention took place. The problem was that the values on the indicators during the project period varied substantially from year to year, raising a big doubt in my mind as to how reliable the baseline measure was as a measure of pre-intervention status. I suspect the pre-intervention values also varied substantially from year to year. So to be useful at all, a baseline in these circumstances would probably be better in the form of a moving average of x previous years – which would only be doable if the necessary data could be found!
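The moving-average suggestion is straightforward to operationalise where the historical data can be found. A minimal sketch, assuming (purely for illustration) some invented yearly indicator values from routine district records:

```python
# Minimal sketch of a moving-average baseline, as suggested above. The yearly
# indicator values are invented for illustration; in practice they would come
# from routine district records for the years before the intervention started.
pre_intervention = {2004: 62, 2005: 48, 2006: 71, 2007: 55, 2008: 66}  # e.g. cases per 10,000

window = 3  # average over the x most recent pre-intervention years
recent_years = sorted(pre_intervention)[-window:]
baseline = sum(pre_intervention[year] for year in recent_years) / window
print(f"baseline (mean of {recent_years}): {baseline:.1f}")
```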

Reading Chris Barnett’s paper I also recognised (in hindsight) another problem. Their Assumption 1 (the baseline is ‘year zero’) probably did not hold (as he suggests it often does not) in a number of districts, where the same agency had already been working beforehand.
