Reflections on research processes in a development NGO: FIVDB’s survey in 2013 of the change in household conditions and of the effect of livelihood trainings

Received from Aldo Benini:

“Development NGOs are under increasing pressure to demonstrate impact. The methodological rigor of impact studies can challenge those with small research staffs and/or insufficient capacity to engage with outside researchers. “Reflections on research processes in a development NGO: Friends In Village Development Bangladesh’s (FIVDB) survey in 2013 of the change in household conditions and of the effect of livelihood trainings” (2013, with several others) grapples with some related dilemmas. On one side, it is a detailed and careful account of how a qualitative methodology known as “Community-based Change Ranking” and data from previous baseline surveys were combined to derive an estimate of the livelihood training effect distinct from highly diverse changes in household conditions. In the process, over 9,000 specific verbal change statements were condensed into a succinct household typology. On the other side, the report discusses challenges that regularly arise from the study design to the dissemination of findings. The choice of an intuitive impact metric (as opposed to one that may seem the best in the eyes of the analyst) and the communication of uncertainty in the findings are particularly critical.”

Produced by Aldo Benini, Wasima Samad Chowdhury, Arif Azad Khan, Rakshit Bhattacharjee, Friends In Village Development Bangladesh (FIVDB), 12 November 2013

PS: See also...

“Personal skills and social action” (2013, together with several others) is a sociological history of the 35-year effort, by Friends In Village Development Bangladesh (FIVDB), to create and amplify adult literacy training when major donors and leading NGOs had opted out of this sector. It is written from Amartya Sen’s perspective that

 “Illiteracy and innumeracy are forms of insecurity in themselves. Not to be able to read or write or count or communicate is itself a terrible deprivation. And if a person is thus reduced by illiteracy and innumeracy, we can not only see that the person is insecure to whom something terrible could happen, but more immediately, that to him or her, something terrible has actually happened”.

The study leads the reader from theories of literacy and human development through adult literacy in Bangladesh and the expert role of FIVDB to the learners’ experience and a concept of communicative competency that opens doors of opportunity. Apart from organizational history, the empirical research relied on biographic interviews with former learners and trainers, proportional piling to self-evaluate relevance and ability, analysis of test scores as well as village development budget simulations conducted with 33 Community Learning Center committees. A beautifully illustrated printed version is available from FIVDB.

 

Meta-evaluation of USAID’s Evaluations: 2009-2012

Author(s):Molly Hageboeck, Micah Frumkin, and Stephanie Monschein
Date Published:November 25, 2013

Report available as a pdf (a big file). See also the video and PowerPoint presentations (worth viewing!)

Context and Purpose

This evaluation of evaluations, or meta-evaluation, was undertaken to assess the quality of USAID’s evaluation reports. The study builds on USAID’s practice of periodically examining evaluation quality to identify opportunities for improvement. It covers USAID evaluations completed between January 2009 and December 2012. During this four-year period, USAID launched an ambitious effort called USAID Forward, which aims to integrate all aspects of the Agency’s programming approach, including program and project evaluations, into a modern, evidence-based system for realizing development results. A key element of this initiative is USAID’s Evaluation Policy, released in January 2011.

Meta-Evaluation Questions

The meta-evaluation on which this volume reports systematically examined 340 randomly selected evaluations and gathered qualitative data from USAID staff and evaluators to address three questions:

1. To what degree have quality aspects of USAID’s evaluation reports, and underlying practices, changed over time?

2. At this point in time, on which evaluation quality aspects or factors do USAID’s evaluation reports excel and where are they falling short?

3. What can be determined about the overall quality of USAID evaluation reports and where do the greatest opportunities for improvement lie?

 Meta-Evaluation Methodology and Study Limitations

The framework for this study recognizes that undertaking an evaluation involves a partnership between the client for an evaluation (USAID) and the evaluation team. Each party plays an important role in ensuring overall quality. Information on basic characteristics and quality aspects of 340 randomly selected USAID evaluation reports was a primary source for this study. Quality aspects of these evaluations were assessed using a 37-element checklist. Conclusions reached by the meta-evaluation also drew from results of four small-group interviews with staff from USAID’s technical and regional bureaus in Washington, 15 organizations that carry out evaluations for USAID, and a survey of 25 team leaders of recent USAID evaluations. MSI used chi-square and t-tests to analyze rating data. Qualitative data were analyzed using content analyses. No specific study limitation unduly hampered MSI’s ability to obtain or analyze data needed to address the three meta-evaluation questions. Nonetheless, the study would have benefited from reliable data on the cost and duration of evaluations, survey or conference call interviews with USAID Mission staff, and the consistent inclusion of the names of evaluation team leaders in evaluation reports.
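For readers unfamiliar with the tests mentioned, the sketch below shows the kind of comparison a chi-square test supports in this context: whether the share of reports meeting a given checklist element differs between reports completed before and after the 2011 Evaluation Policy. It is a hedged illustration only; the counts are invented and this is not MSI’s actual analysis.

```python
# Hedged illustration (invented counts, not MSI's analysis): a chi-square
# test of whether the proportion of reports meeting a checklist element
# changed after the 2011 Evaluation Policy.
from scipy.stats import chi2_contingency

#            met element  did not meet
table = [[60, 90],    # reports completed 2009-2010
         [130, 60]]   # reports completed 2011-2012

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.4f}")  # a small p suggests the proportions differ
```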

Rick Davies comment: Where is the dataset? 340 evaluations were scored against a 37-point checklist, and ten of the 37 checklist items were used to create an overall “score”. This data could be analysed in N different ways by many more people, if it was made readily available. Responses please, from anyone.

 

LineUp: Visual Analysis of Multi-Attribute Rankings

Gratzl, S., A. Lex, N. Gehlenborg, H. Pfister, and M. Streit. 2013. “LineUp: Visual Analysis of Multi-Attribute Rankings.” IEEE Transactions on Visualization and Computer Graphics 19 (12): 2277–86. doi:10.1109/TVCG.2013.173.

“Abstract—Rankings are a popular and universal approach to structuring otherwise unorganized collections of items by computing a rank for each item based on the value of one or more of its attributes. This allows us, for example, to prioritize tasks or to evaluate the performance of products relative to each other. While the visualization of a ranking itself is straightforward, its interpretation is not, because the rank of an item represents only a summary of a potentially complicated relationship between its attributes and those of the other items. It is also common that alternative rankings exist which need to be compared and analyzed to gain insight into how multiple heterogeneous attributes affect the rankings. Advanced visual exploration tools are needed to make this process efficient. In this paper we present a comprehensive analysis of requirements for the visualization of multi-attribute rankings. Based on these considerations, we propose LineUp – a novel and scalable visualization technique that uses bar charts. This interactive technique supports the ranking of items based on multiple heterogeneous attributes with different scales and semantics. It enables users to interactively combine attributes and flexibly refine parameters to explore the effect of changes in the attribute combination. This process can be employed to derive actionable insights as to which attributes of an item need to be modified in order for its rank to change. Additionally, through integration of slope graphs, LineUp can also be used to compare multiple alternative rankings on the same set of items, for example, over time or across different attribute combinations. We evaluate the effectiveness of the proposed multi-attribute visualization technique in a qualitative study. The study shows that users are able to successfully solve complex ranking tasks in a short period of time.”

“In this paper we propose a new technique that addresses the limitations of existing methods and is motivated by a comprehensive analysis of requirements of multi-attribute rankings considering various domains, which is the first contribution of this paper. Based on this analysis, we present our second contribution, the design and implementation of LineUp, a visual analysis technique for creating, refining, and exploring rankings based on complex combinations of attributes. We demonstrate the application of LineUp in two use cases in which we explore and analyze university rankings and nutrition data. We evaluate LineUp in a qualitative study that demonstrates the utility of our approach. The evaluation shows that users are able to solve complex ranking tasks in a short period of time.”

Rick Davies comment: I have been a long-time advocate of the usefulness of ranking measures in evaluation, because they can combine subjective judgements with numerical values. This tool is focused on ways of visualising and manipulating existing data rather than elicitation of the ranking data (a separate and important issue of its own). It includes a lot of options for weighting different attributes to produce overall ranking scores.
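The core operation is easy to sketch in code. The snippet below is not LineUp itself, only a minimal illustration of the idea it makes interactive: normalise attributes measured on different scales, combine them with user-chosen weights, and rank the items by the combined score. The data, column names and weights are invented for the example.

```python
# Minimal sketch of weighted multi-attribute ranking (not the LineUp
# implementation): normalise heterogeneous attributes, combine them with
# user-chosen weights, and rank items by the combined score.
# Data, column names and weights are invented for illustration.
import pandas as pd

items = pd.DataFrame({
    "teaching":    [72, 88, 65],     # 0-100 scale
    "citations":   [0.9, 0.7, 0.8],  # already on a 0-1 scale
    "income_musd": [120, 340, 60],   # a very different scale
}, index=["Uni A", "Uni B", "Uni C"])

weights = {"teaching": 0.5, "citations": 0.3, "income_musd": 0.2}

# Rescale each attribute to [0, 1] so different units become comparable
normalised = (items - items.min()) / (items.max() - items.min())

score = sum(normalised[col] * w for col, w in weights.items())
rank = score.rank(ascending=False).astype(int)   # 1 = top of the ranking
print(pd.DataFrame({"score": score.round(2), "rank": rank}))
```

Changing the weights re-orders the ranking immediately, which is exactly the kind of exploration LineUp supports visually (and extends with slope graphs for comparing alternative rankings).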

Free open source software, instructions, example data sets, introductory videos and more available here

Qualitative Comparative Analysis (QCA): An application to compare national REDD+ policy processes

 

Sehring, Jenniver, Kaisa Korhonen-Kurki, and Maria Brockhaus. 2013. “Qualitative Comparative Analysis (QCA) An Application to Compare National REDD+ Policy Processes”. CIFOR. http://www.cifor.org/publications/pdf_files/WPapers/WP121Sehring.pdf.

“This working paper gives an overview of Qualitative Comparative Analysis (QCA), a method that enables systematic cross-case comparison of an intermediate number of case studies. It presents an overview of QCA and detailed descriptions of different versions of the method. Based on the experience applying QCA to CIFOR’s Global Comparative Study on REDD+, the paper shows how QCA can help produce parsimonious and stringent research results from a multitude of in-depth case studies developed by numerous researchers. QCA can be used as a structuring tool that allows researchers to share understanding and produce coherent data, as well as a tool for making inferences usable for policy advice.

REDD+ is still a young policy domain, and it is a very dynamic one. Currently, the benefits of QCA result mainly from the fact that it helps researchers to organize the evidence generated. However, with further and more differentiated case knowledge, and more countries achieving desired outcomes, QCA has the potential to deliver robust analysis that allows the provision of information, guidance and recommendations to ensure carbon-effective, cost-efficient and equitable REDD+ policy design and implementation.”

Rick Davies comment: I like this paper because it provides a good how-to-do-it overview of different forms of QCA, illustrated in a step-by-step fashion with one practical case example. It may not be quite enough to enable one to do a QCA from the very start, but it provides a very good starting point.
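For readers new to the method, the sketch below illustrates the first mechanical step of crisp-set QCA: arranging binary case codings into a truth table and checking how consistently each configuration of conditions is associated with the outcome. The cases, condition names and codings are invented; this is not the CIFOR data.

```python
# Toy truth-table step of crisp-set QCA (invented cases and conditions,
# not the CIFOR study): group cases by their configuration of binary
# conditions and compute each configuration's consistency with the outcome.
from itertools import groupby

# (case, ownership, pressure, outcome) coded 1/0 by the researcher
cases = [
    ("Country A", 1, 1, 1),
    ("Country B", 1, 1, 1),
    ("Country C", 1, 0, 0),
    ("Country D", 0, 1, 0),
    ("Country E", 0, 0, 0),
]

config_of = lambda c: (c[1], c[2])   # the combination of conditions
for config, group in groupby(sorted(cases, key=config_of), key=config_of):
    group = list(group)
    consistency = sum(c[3] for c in group) / len(group)
    print(f"ownership={config[0]} pressure={config[1]} "
          f"n={len(group)} consistency={consistency:.2f}")
```

Dedicated QCA software (for example the R `QCA` package or fsQCA) then applies Boolean minimisation to these configurations; the sketch only shows the shape of the data the method works on.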

The Science of Evaluation: A Realist Manifesto

Pawson, Ray. 2013. The Science of Evaluation: A Realist Manifesto. UK: Sage Publications. http://www.uk.sagepub.com

Chapter 1 is available as a pdf. Hopefully other chapters will also become available this way, because this 240 page book is expensive.

Contents

Preface: The Armchair Methodologist and the Jobbing Researcher
PART ONE: PRECURSORS AND PRINCIPLES
Precursors: From the Library of Ray Pawson
First Principles: A Realist Diagnostic Workshop
PART TWO: THE CHALLENGE OF COMPLEXITY – DROWNING OR WAVING?
A Complexity Checklist
Contested Complexity
Informed Guesswork: The Realist Response to Complexity
PART THREE: TOWARDS EVALUATION SCIENCE
Invisible Mechanisms I: The Long Road to Behavioural Change
Invisible Mechanisms II: Clinical Interventions as Social Interventions
Synthesis as Science: The Bumpy Road to Legislative Change
Conclusion: A Mutually Monitoring, Disputatious Community of Truth Seekers


Twelve reasons why climate change adaptation M&E is challenging

Bours, Dennis, Colleen McGinn, and Patrick Pringle. 2014. “Guidance Note 1: Twelve Reasons Why Climate Change Adaptation M&E Is Challenging.” SEA Change & UKCIP. Available as a pdf

“Introduction: Climate change adaptation (CCA) refers to how people and systems adjust to the actual or expected effects of climate change. It is often presented as a cyclical process developed in response to climate change impacts or their social, political, and economic consequences. There has been a recent upsurge of interest in CCA among international development agencies resulting in stand-alone adaptation programs as well as efforts to mainstream CCA into existing development strategies. The scaling up of adaptation efforts and the iterative nature of the adaptation process means that Monitoring and Evaluation (M&E) will play a critical role in informing and improving adaptation policies and activities. Although many CCA programmes may look similar to other development interventions, they do have specific and distinct characteristics that set them apart. These stem from the complex nature of adaptation itself. CCA is a dynamic process that cuts across scales and sectors of intervention, and extends long past any normal project cycle. It is also inherently uncertain: we cannot be entirely sure about the course of climate change consequences, as these will be shaped by societal decisions taken in the future. How then should we define, measure, and assess the achievements of an adaptation programme? The complexities inherent in climate adaptation programming call for a nuanced approach to M&E research. This is not, however, always being realised in practice. CCA poses a range of thorny challenges for evaluators. In this Guidance Note, we identify twelve challenges that make M&E of CCA programmes difficult, and highlight strategies to address each. While most are not unique to CCA, together they present a distinctive package of dilemmas that need to be addressed.”

See also: Bours, Dennis, Colleen McGinn, and Patrick Pringle. 2013. Monitoring and evaluation for climate change adaptation: A synthesis of tools, frameworks and approaches, UKCIP & SeaChange, pdf version (3.4 MB)

See also:  Dennis Bours, Colleen McGinn, Patrick Pringle, 2014, “Guidance Note 2: Selecting indicators for climate change adaptation programming” SEA Change CoP, UKCIP

“This second Guidance Note follows on from that discussion with a narrower question: how does one go about choosing appropriate indicators? We begin with a brief review of approaches to CCA programme design, monitoring, and evaluation (DME). We then go on to discuss how to identify appropriate indicators. We demonstrate that CCA does not necessarily call for a separate set of indicators; rather, the key is to select a medley that appropriately frames progress towards adaptation and resilience. To this end, we highlight the importance of process indicators, and conclude with remarks about how to use indicators thoughtfully and well.”

Monitoring and evaluating civil society partnerships

A GSDRC Help Desk response

Request: Please identify approaches and methods used by civil society organisations (international NGOs and others) to monitor and evaluate the quality of their relationships with partner (including southern) NGOs. Please also provide a short comparative analysis.

Helpdesk response

Key findings: This report lists and describes tools used by NGOs to monitor the quality of their relationships with partner organisations. It begins with a brief analysis of the types of tools and their approaches, then describes each tool. This paper focuses on tools which monitor the partnership relationship itself, rather than the impact or outcomes of the partnership. While there is substantial general literature on partnerships, there is less literature on this particular aspect.

Within the development literature, ‘partnership’ is most often used to refer to international or high-income country NGOs partnering with low-income country NGOs, which may be grassroots or small-scale. Much of a ‘north-south’ partnership arrangement centres around funding, meaning accountability arrangements are often reporting and audit requirements (Brehm, 2001). As a result, much of the literature and analysis is heavily biased towards funding and financial accountability. There is a commonly noted power imbalance in the literature, with northern partners controlling the relationship and requiring southern partners to report to them on use of funds. Most partnerships are weak on ensuring Northern accountability to Southern organisations (Brehm, 2001). Most monitoring tools are aimed at bilateral partnerships.

The tools explored in the report are those which evaluate the nature of the partnership, rather than the broader issue of partnership impact. The ‘quality’ of relationships is best described by BOND, in which the highest quality of partnership is described as joint working, adequate time and resources allocated specifically to partnership working, and improved overall effectiveness. Most of the tools use qualitative, perception-based methods including interviewing staff from both partner organisations and discussing relevant findings. There are not many specific tools available, as most organisations rely on generic internal feedback and consultation sessions, rather than comprehensive monitoring and evaluation of relationships. As a result, this report presents only six tools, as these were the ones most referred to by experts.

Full response: http://www.gsdrc.org/docs/open/HDQ1024.pdf

DCED Global Seminar on Results Measurement 24-26 March 2014, Bangkok

Full text available here: http://www.enterprise-development.org/page/seminar2014

“Following popular demand, the DCED is organising the second Global Seminar on results measurement in the field of private sector development (PSD), 24-26 March 2014 in Bangkok, Thailand. The Seminar is being organised in cooperation with the ILO and with financial support from the Swiss State Secretariat for Economic Affairs (SECO). It will have a similar format to the DCED Global Seminar in 2012, which was attended by 100 participants from 54 different organisations, field programmes and governments.

Since 2012, programmes and agencies have been adopting the DCED Standard for results measurement in increasing numbers; recently, several have published the reports of their DCED audit. This Seminar will explore what is currently known, and what we need to know; specifically, the 2014 Seminar is likely to be structured as follows:

  • An introduction to the DCED, its Results Measurement Working Group, the DCED Standard for results measurement and the Standard audit system
  • Insights from 10 programmes experienced with the Standard, based in Bangladesh, Cambodia, Fiji, Georgia, Kenya, Nepal, Nigeria and elsewhere (further details to come)
  • Perspectives from development agencies on results measurement
  • Cross cutting issues, such as the interface between the Standard and evaluation, measuring systemic change, and using results in decision-making
  • A review of the next steps in learning, guidance and experience around the Standard
  • Further opportunities for participants to meet each other, learn about each other’s programmes and make contacts for later follow-up

You are invited to join the Seminar as a participant. Download the registration form here, and send to Admin@Enterprise-Development.org. There is a fee of $600 for those accepted for participation, and all participants must pay their own travel, accommodation and insurance costs. Early registration is advised.”

The Availability of Research Data Declines Rapidly with Article Age

Summarised on SciDevNet, as “Most research data lost as scientists switch storage tech” from this source:

Current Biology, 19 December 2013
doi:10.1016/j.cub.2013.11.014

Highlights

  • We examined the availability of data from 516 studies between 2 and 22 years old
  • The odds of a data set being reported as extant fell by 17% per year
  • Broken e-mails and obsolete storage devices were the main obstacles to data sharing
  • Policies mandating data archiving at publication are clearly needed

Summary

“Policies ensuring that research data are available on public archives are increasingly being implemented at the government [1], funding agency [2,3,4], and journal [5,6] level. These policies are predicated on the idea that authors are poor stewards of their data, particularly over the long term [7], and indeed many studies have found that authors are often unable or unwilling to share their data [8,9,10,11]. However, there are no systematic estimates of how the availability of research data changes with time since publication. We therefore requested data sets from a relatively homogenous set of 516 articles published between 2 and 22 years ago, and found that availability of the data was strongly affected by article age. For papers where the authors gave the status of their data, the odds of a data set being extant fell by 17% per year. In addition, the odds that we could find a working e-mail address for the first, last, or corresponding author fell by 7% per year. Our results reinforce the notion that, in the long term, research data cannot be reliably preserved by individual researchers, and further demonstrate the urgent need for policies mandating data sharing via public archives.”
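As a rough guide to what the headline figure implies (my arithmetic, not the authors’ code): if the odds fall by 17% per year, they are multiplied by 0.83 for each additional year of article age.

```python
# Back-of-envelope reading of "odds fell by 17% per year" (not the
# authors' analysis): the odds of a data set being extant are multiplied
# by 0.83 for each year of article age.
for years in (2, 10, 22):
    odds_multiplier = 0.83 ** years
    print(f"after {years:2d} years, odds are {odds_multiplier:.2f}x the baseline odds")
# after  2 years -> ~0.69x; after 10 years -> ~0.16x; after 22 years -> ~0.02x
```

Note that this is a decline in odds, not in the simple proportion of available data sets, but either way the fall over two decades is steep.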

Rick Davies comment: I suspect the situation with data generated by development aid projects (and their evaluations) is much, much worse. I have been unable to get access to data generated within the last 12 months by one DFID co-funded project in Africa. I am now trying to see if data used in a recent analysis of the (DFID funded) Chars Livelihoods Programme is available.

I am also making my own episodic attempts to make data sets publicly available that have been generated by my own work in the past. One is a large set of household survey data from Mogadishu in 1986, and another is household survey data from Vietnam generated in 1996 (baseline) and 2006 (follow-up). One of the challenges is finding a place on the internet that specialises in making such data available (especially development project data). Any ideas?

PS 2014 01 07: Missing raw data is not the only problem. Lack of contact information about the evaluators/researchers who were associated with the data collection is another one. In their exemplary blog about their use of QCA, Raab and Stuppert comment about their search for evaluation reports:

“Most of the 74 evaluation reports in our first coding round do not display the evaluator’s or the commissioner’s contact details. In some cases, the evaluators remain anonymous; in other cases, the only e-mail address available in the report is a generic info@xyz.org. This has surprised us – in our own evaluation practice, we always include our e-mail addresses so that our counterparts can get in touch with us in case, say, they wish to work with us again.”

PS 2014 02 01: Here is another interesting article about missing data and missing policies about making data available: Troves of Personal Data, Forbidden to Researchers (NYT, May 21, 2012)

“At leading social science journals, there are few clear guidelines on data sharing. “The American Journal of Sociology does not at present have a formal position on proprietary data,” its editor, Andrew Abbott, a sociologist at the University of Chicago, wrote in an e-mail. “Nor does it at present have formal policies enforcing the sharing of data.”

 The problem is not limited to the social sciences. A recent review found that 44 of 50 leading scientific journals instructed their authors on sharing data but that fewer than 30 percent of the papers they published fully adhered to the instructions. A 2008 review of sharing requirements for genetics data found that 40 of 70 journals surveyed had policies, and that 17 of those were “weak.””

Aid on the Edge of Chaos…

… Rethinking International Cooperation in a Complex World

by Ben Ramalingam, Oxford University Press, 2013. Viewable in part via Google Books (and fully searchable with key words)

Publisher’s summary:

A groundbreaking book on the state of the aid business, bridging policy, practice and science. Gets inside the black box of aid to highlight critical flaws in the ways agencies learn, strategise, organise, and evaluate themselves. Shows how ideas from the cutting edge of complex systems science have been used to address social, economic and political issues, and how they can contribute to the transformation of aid. An open, accessible style with cartoons by a leading illustrator. Draws on workshops, conferences, over five years of research, and hundreds of interviews.

Rick Davies comments (but not a review): Where to start…? This is a big book, in size and ambition, but also in the breadth of the author’s knowledge and contacts in the field. There have been many reviews of the book, so I will simply link to some here, to start with: Duncan Green (Oxfam), Tom Kirk (LSE), Nick Perkins (AllAfrica), Paul van Gardingen and Andrée Carter (SciDevnet), Melissa Leach (Steps Centre), Owen Barder, Philip Ball, IRIN, New Scientist and Lucy Noonan (Clear Horizon). See also Ben’s own Aid on the Edge of Chaos blog.

Evaluation issues are discussed in two sections: Watching the Watchman (pages 101-122), and Performance Dynamics, Dynamic Performance (pages 351-356). That is about 7% of the book as a whole, which is a bigger percentage than most development projects spend on evaluation! Of course there is a lot more to Ben’s book that relates to evaluation outside of these sections.

One view of the idea of systems being on the edge of chaos is that it is about organisations (biological and social) evolving to a point where they find a viable balance between sensitivity to new information and retention of past information (as embedded in existing structures and processes), i.e. their learning strategies. That said, what strikes me the most about aid organisations, as a sector, is how stable they are. Perhaps way too stable. Mortality rates are very low compared to private sector enterprises. Does this suggest that, as a set, aid organisations are not as effective at learning as they could be?

I also wondered to what extent the idea of being on the edge of chaos (i.e. a certain level of complexity) could be operationalised/measured, and thus developed into something that was more than a metaphor. However, Ben and other authors (Melanie Mitchell) have highlighted the limitations of various attempts to measure complexity. In fact the very attempt to do so, at least as a single (i.e. one-dimensional) measure, seems somewhat ironic. But perhaps degrees of complexity could be mapped in a space defined by multiple measures? For example: (a) diversity of agents, (b) density of connections between them, (c) the degrees of freedom or agency each agent has. …a speculation, sketched in toy form below.
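To make the speculation concrete, here is a toy operationalisation (my assumptions throughout, not an established complexity metric): place a system in a three-dimensional space defined by the three measures just listed, computed over a small hypothetical network of aid actors.

```python
# Toy sketch of mapping "degree of complexity" onto three measures
# (my own crude operationalisation, for illustration only):
# (a) diversity of agent types, (b) density of connections,
# (c) average "agency", proxied here by mean out-degree.
import networkx as nx

links = {"donor": ["NGO1", "NGO2"], "NGO1": ["village_committee"],
         "NGO2": ["village_committee", "local_gov"], "local_gov": []}
types = {"donor": "funder", "NGO1": "ngo", "NGO2": "ngo",
         "village_committee": "community", "local_gov": "government"}

g = nx.DiGraph()
g.add_nodes_from(types)                       # include agents with no outgoing links
for src, targets in links.items():
    g.add_edges_from((src, dst) for dst in targets)

diversity = len(set(types.values()))                                  # (a)
density = nx.density(g)                                               # (b)
agency = sum(d for _, d in g.out_degree()) / g.number_of_nodes()      # (c)
print(f"diversity={diversity}, density={density:.2f}, agency={agency:.2f}")
```

Two systems could then be compared by where they sit in this three-dimensional space, rather than by a single complexity score.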

Ben has been kind enough to quote some of my views on complexity issues, including those on the representation of complexity (page 351). The limitations of linear Theories of Change (ToC) are discussed at various points in the book, and alternatives are explored, including network models and agent based simulation models. While I am sympathetic to their wider use, I do continue to be surprised at how little complexity aid agency staff can actually cope with when presented with a ToC that has to be a working part of a Monitoring and Evaluation Framework for a project. And I have a background concern that the whole enthusiasm for ToCs these days still betrays a deep desire for plan-ability that in reality is at odds with the real world within which aid agencies work.

In his chapter on Dynamic Change Ben describes an initiative called Artificial Intelligence for Development and the attempt to use quantitative approaches and “big data” sources to understand more about the dynamics of development (e.g. market movements, migration, and more) as they occur, or at least shortly afterwards. Mobile phone usage is one of the data sets becoming more available in many locations around the world. I think this is fascinating stuff, but it is in stark contrast with my experience of the average development project, where there is little in the way of readily available big data that is or could be used for project management and wider lesson learning. Where there is survey data it is rarely publicly available, although the open data and transparency movements are starting to have some effect.

On the more positive side, where data is available, there are new “big data” approaches that agencies can use and adapt. There is now an array of data mining methods that can be used to inductively find patterns (clusters and associations) in data sets, some of which are free and open source (See Rapid Miner). While these searches can be informed by prior theories, they are not necessarily locked in by them – they are open to discovery of unexpected patterns and surprise. Whereas the average ToC is a relatively small and linear construct, data mining software can quickly and systematically explore relationships within much larger sets of attributes/measures describing the interventions, their targets and their wider context.
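As a minimal illustration of the kind of inductive pattern-finding described above, the sketch below clusters a handful of project records on several attributes, using scikit-learn as a free, open-source stand-in (the paragraph mentions RapidMiner; this is not its API). The attribute names and values are invented.

```python
# Sketch of inductive pattern discovery on project records (invented
# data; scikit-learn used as a free open-source stand-in, not RapidMiner).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per project: [budget_usd_k, duration_months, n_partners, remoteness_index]
projects = np.array([
    [120, 12, 2, 0.3],
    [900, 36, 6, 0.7],
    [150, 18, 3, 0.2],
    [850, 30, 5, 0.8],
    [200, 12, 2, 0.4],
])

X = StandardScaler().fit_transform(projects)        # put attributes on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. [0 1 0 1 0] -- two candidate project "types" found in the data
```

The clusters are found without a prior theory; they can then be inspected against outcome data to see whether the different project types tend to have different results, which is the sense in which such searches are open to surprise.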

Some of the complexity science concepts described in the book provide limited added value, in my view. For example, the idea of a fitness landscape, which comes from evolutionary theory. Some of its proposed use, as in chapter 17, is almost a self-caricature: “Implementers first need to establish the overall space of possibilities for a given project, programme or policy, then ‘dynamically crawl the design space’ by simultaneously trying out design alternatives and then adapting the project sequentially based on the results” (Pritchett et al.). On the other hand, there were some ideas I would definitely like to follow up on, most notably agent based modelling, especially participatory modelling (pages 175-80, 283-95). Simulations are evaluable, in two ways: by analysis of fit with historic data and by accuracy of predictions of future data points. But they do require data, and that perhaps is an issue that could be explored a bit more. When facing uncertain futures, and when using a portfolio of strategies to cope with that uncertainty, a lot more data is needed than when pursuing a single intervention in a more stable and predictable environment. [end of ramble :-)]
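On the point that simulations are evaluable against data, here is a very small illustration (not a model from the book; the “simulation” is a one-line placeholder) of the first route, scoring candidate simulations by how well they fit historic data points:

```python
# Tiny illustration of evaluating a simulation by fit with historic data
# (placeholder model and invented data, for illustration only).
observed = [100, 112, 125, 140, 158]               # e.g. households adopting a practice

def simulate(growth_rate, start=100, steps=5):     # stand-in for an agent-based model
    return [start * (1 + growth_rate) ** t for t in range(steps)]

for growth_rate in (0.05, 0.10, 0.12):
    simulated = simulate(growth_rate)
    rmse = (sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed)) ** 0.5
    print(f"growth={growth_rate:.2f}  RMSE vs history={rmse:6.1f}")
```

The second route, checking the accuracy of predictions of data points not yet observed, uses the same machinery but holds the later observations back until after the model is fitted.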

 
