Rapid needs assessments: Severity and priority measures

by Aldo Benini (received 8th October 2013), abenini@starpower.net, http://aldo-benini.org/

“Rapid assessments after disasters gauge the intensity of unmet needs across various spheres of life, commonly referred to as “sectors”. Sometimes two different measures of needs are used concurrently – a “severity score” independently given in each sector and a “priority score”, a relative measure comparing levels of needs to those of other sectors. Needs in every assessed locality are thus scored twice.

“Severity and priority – Their measurement in rapid needs assessments” clarifies the conceptual relationship. Aldo Benini wrote this note for the Assessment Capacities Project (ACAPS) in Geneva following the Second Joint Rapid Assessment of Northern Syria (J-RANS II) in May 2013. It investigates the construction and functioning of severity and priority scales, using data from Syria as well as from an earlier assessment in Yemen. In both assessments, the severity scales differentiated poorly. Therefore an artificial dataset was created to simulate what associations can realistically be expected between severity and priority measures. The note discusses several alternative measurement formulations and the logic of comparisons among sectors and among affected locations.

Readers can find the note, as well as the files needed to replicate the simulation, here; the author welcomes new ideas for the measurement of the severity and priority of needs in general and improvements to the simulation code in particular.”
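The note's own replication files define the actual simulation, so the following is only a minimal sketch of the idea in Python. Everything in it is an assumption made for illustration, not taken from the note: six sectors, 200 localities, a 1-5 severity scale, and a noise model in which sector scores share a common "crisis intensity" component.

# Illustrative sketch only - the parameters (6 sectors, a 1-5 severity
# scale, a normal noise model) are assumptions, not those of the ACAPS note.
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(1)
n_localities, n_sectors = 200, 6

# Severity: an absolute score (1-5) given independently in each sector,
# simulated as a shared "crisis intensity" plus sector-specific noise.
intensity = rng.normal(size=(n_localities, 1))
latent = intensity + rng.normal(size=(n_localities, n_sectors))
severity = np.clip(np.round(3 + latent), 1, 5)

# Priority: a relative measure - within each locality the sectors are
# ranked against one another (rank 1 = the sector judged most in need).
priority = np.array([rankdata(-row, method="average") for row in severity])

# Sector-by-sector association between the two measures. Rho is negative
# because rank 1 marks the highest need; sector-specific noise and the
# coarse 1-5 scale keep the association well short of perfect.
for s in range(n_sectors):
    rho, _ = spearmanr(severity[:, s], priority[:, s])
    print(f"sector {s + 1}: Spearman rho(severity, priority rank) = {rho:.2f}")

Runs of this kind show a clearly negative but far-from-perfect rank correlation, which is the sort of severity-priority association the note sets out to examine.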

Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations


Report of a study commissioned by the Department for International Development
DFID Working Paper No. 40. By Rick Davies, August 2013. Available as pdf
See also the DFID website: https://www.gov.uk/government/publications/planning-evaluability-assessments

[From the Executive Summary] “The purpose of this synthesis paper is to produce a short practically oriented report that summarises the literature on Evaluability Assessments, and highlights the main issues for consideration in planning an Evaluability Assessment. The paper was commissioned by the Evaluation Department of the UK Department for International Development (DFID) but intended for use both within and beyond DFID.

The synthesis process began with an online literature search, carried out in November 2012. The search generated a bibliography of 133 documents including journal articles, books, reports and web pages, published from 1979 onwards. Approximately half (44%) of the documents were produced by international development agencies. The main focus of the synthesis is on the experience of international agencies and on recommendations relevant to their field of work.

Amongst those agencies the following OECD DAC definition of evaluability is widely accepted and has been applied within this report: “The extent to which an activity or project can be evaluated in a reliable and credible fashion”.

Eighteen recommendations about the use of Evaluability Assessments are presented here [in the Executive Summary], based on the synthesis of the literature in the main body of the report. The report is supported by annexes, which include an outline structure for Terms of Reference for an Evaluability Assessment.”

The full bibliography referred to in the study can be found online here: http://mande.co.uk/wp-content/uploads/2013/02/Zotero-report.htm


Postscript: A relevant xkcd perspective?


Evaluation in Violently Divided Societies: Politics, Ethics and Methods

Journal of Peacebuilding & Development
Volume 8, Issue 2 (2013)
Guest Editors: Kenneth Bush and Colleen Duggan

“Those who work in support of peacebuilding and development initiatives are acutely aware that conflict-affected environments are volatile, unpredictable and fast-changing. In light of this reality, evaluation and research in the service of peacebuilding and development is a complex enterprise. Theories of change and assumptions about how peace and development work are often unarticulated or untested. While much work continues to be done on the theories, methodologies and praxis of peacebuilding, we suggest that the international aid community, researchers and practitioners need to think more deeply and systematically about the role of evaluation in increasing the efficacy of projects and programmes in violently divided societies (VDS).

Core questions that underpin and motivate the articles contained in this special issue include:

• How does the particular context of conflict affect our approaches to, and conduct of, research and evaluation?

• Specifically, how do politics — be they local, national, international, geopolitical — interact with evaluation practice in ways that enhance or inhibit prospects for peace and sustainable development?

• What can we learn from current research and evaluation practice in the global North and South about their impacts in VDS?

• Which tools are most effective and appropriate for assessing the role of context? Should there be generic or global assessment frameworks, criteria and indicators to guide evaluation in VDS, and, if so, what do they look like? Or do the fluidity and heterogeneity of different conflict zones inhibit such developments?

• How can evaluation, in its own right, catalyse positive political and societal change? What theories of peacebuilding and social change should best guide evaluation research and practice in ways that promote peace and sustainable development?”

How to do a rigorous, evidence-focused literature review in international development


A Guidance Note by
Jessica Hagen-Zanker and Richard Mallett
ODI Working Paper, September 2013
Available as pdf

Abstract: Building on previous reflections on the utility of systematic reviews in international development research, this paper describes an approach to carrying out a literature review that adheres to some of the core principles of ‘full’ systematic reviews, but that also contains space within the process for innovation and reflexivity. We discuss all stages of the review process, but pay particular attention to the retrieval phase, which, we argue, should consist of three interrelated tracks important for navigating difficult ‘information architecture’. We end by clarifying what it is in particular that sets this approach apart from fuller systematic reviews, as well as with some broader thoughts on the nature of ‘the literature review’ within international development and the social sciences more generally. The paper should thus be seen as sitting somewhere between a practical toolkit for those wishing to undertake a rigorous, evidence-focused review and a series of reflections on the role, purpose and application of literature reviews in policy research.

How should we understand “clinical equipoise” when doing RCTs in development?

World Bank Blogs

Submitted by David McKenzie on 2013/09/02

While the blog was on break over the last month, a couple of posts caught my attention by discussing whether it is ethical to do experiments on programs that we think we know will make people better off. First up, Paul Farmer on the Lancet Global Health blog writes:

“What happens when people who previously did not have access are provided with the kind of health care that most of The Lancet’s readership takes for granted? Not very surprisingly, health outcomes are improved: fewer children die when they are vaccinated against preventable diseases; HIV-infected patients survive longer when they are treated with antiretroviral therapy (ART); maternal deaths decline when prenatal care is linked to caesarean sections and anti-haemorrhagic agents to address obstructed labour and its complications; and fewer malaria deaths occur, and drug-resistant strains are slower to emerge, when potent anti-malarials are used in combination rather than as monotherapy.

It has long been the case that randomized clinical trials have been held up as the gold standard of clinical research… This kind of study can only be carried out ethically if the intervention being assessed is in equipoise, meaning that the medical community is in genuine doubt about its clinical merits. It is troubling, then, that clinical trials have so dominated outcomes research when observational studies of interventions like those cited above, which are clearly not in equipoise, are discredited to the point that they are difficult to publish.”

This was followed by a post by Eric Djimeu on the 3ie blog asking what else development economics should be learning from clinical trials.

Impact evaluation of natural resource management research programs: a broader view


by John Mayne and Elliot Stern
ACIAR IMPACT ASSESSMENT SERIES 84, 2013
Available as pdf

Foreword

Natural resource management research (NRMR) has a key role in improving food security and reducing poverty and malnutrition. NRMR programs seek to modify natural systems in a sustainable way in order to benefit the lives of those who live and work within these natural systems—especially in rural communities in the developing world.

Evaluating the effectiveness of NRMR through the usual avenues of impact evaluation has posed distinct challenges. Many impact assessments focus on estimating net economic benefits from a project or program, and often are aimed at providing evidence to investors that their funds have been well spent. They have tended to focus on a specific causal evaluation issue: to what extent can a specific (net) impact be attributed to the intervention?

While many evaluations of NRMR programs and their projects will continue to use an impact assessment perspective, this report lays out a complementary approach to NRMR program evaluation. The approach focuses more on helping NRMR managers and stakeholders to learn about their interventions and to understand why and how outcomes and impacts have been realised (or, in some cases, have not). Thus, a key aim here is to position NRMR impact evaluation as a learning process undertaken to improve the delivery and effectiveness of NRMR programs by developing a new framework for thinking about and designing useful and practical evaluations.

The emphasis on learning follows from the view of NRMR as operating under dynamic, emergent, complex and often unpredictable human and ecological conditions. In such a setting, adaptive management informed by careful responses to new information and understanding is essential for building and managing more-effective programs and interventions. This is highlighted by examining some specific examples: the CGIAR Research Program on Aquatic Agricultural Systems (led by Worldfish), CGIAR’s Ganges Basin Development Challenge, and CSIRO–AusAID’s African Food Security Initiative.

The alternative approach presented here is another tool to use in the search for understanding of how and why impacts occur in a research, development and extension environment. We hope that the learning-orientated evaluation described will help elucidate more soundly based explanations that will guide researchers in replicating, scaling up and improving future programs.

The Impact and Effectiveness of Transparency and Accountability Initiatives

Development Policy Review, July 2013. Special open access issue
Volume 31, Issue Supplement, pages s3–s124

1. The Impact of Transparency and Accountability Initiatives (pages s3–s28) John Gaventa and Rosemary McGee

2. Do They Work? Assessing the Impact of Transparency and Accountability Initiatives in Service Delivery (pages s29–s48) Anuradha Joshi

3. Improving Transparency and Accountability in the Budget Process: An Assessment of Recent Initiatives (pages s49–s67) Ruth Carlitz

4. The Impact and Effectiveness of Transparency and Accountability Initiatives: Freedom of Information (pages s69–s87) Richard Calland and Kristina Bentley

5. The Impact and Effectiveness of Accountability and Transparency Initiatives: The Governance of Natural Resources (pages s89–s105) Andrés Mejía Acosta

6. Aid Transparency and Accountability: ‘Build It and They’ll Come’? (pages s107–s124) Rosemary McGee

How Feedback Loops Can Improve Aid (and Maybe Governance)

Center for Global Development Essay (available as pdf)
Dennis Whittle
August 2013

Abstract
“If private markets can produce the iPhone, why can’t aid organizations create and implement development initiatives that are equally innovative and sought after by people around the world? The key difference is feedback loops. Well-functioning private markets excel at providing consumers with a constantly improving stream of high-quality products and services. Why? Because consumers give companies constant feedback on what they like and what they don’t. Companies that listen to their consumers by modifying existing products and launching new ones have a chance of increasing their revenues and profits; companies that don’t are at risk of going out of business. Is it possible to create analogous mechanisms that require aid organizations to listen to what regular citizens want—and then act on what they hear? This essay provides a set of principles that aid practitioners can use to design feedback loops with a higher probability of success.”

Rick Davies comment: A few quotes that interested me, within a paper that was interesting as a whole:

  • “Anyone who has managed aid projects realizes that there is a huge number of design and implementation parameters—and that it is maddeningly difficult to know which of these makes the difference between success and failure. In the preparation phase, we tend to give a lot of weight to the salience of certain factors, such as eligibility criteria, prices, technical features, and so on. But during implementation, we realize that a thousand different factors affect outcomes—the personality of the project director, internal dynamics within the project team, political changes in the local administration, how well the project is explained to local people, and even bad weather can have major effects.” This presents major challenges to any effort to successfully transfer the findings of an impact evaluation to other contexts, aka the problem of limited external validity.
  • “The good news is that recent technological breakthroughs are enabling us to dramatically increase our ability to find out what people like the Indonesian rubber farmer really want—and whether they are getting it.” Groundhog Day? I suspect the same optimistic thoughts went through the minds of early developers and users of PRA (participatory rural appraisal) in the 1980s and early 1990s :-) The same themes of experts versus the people, but this time with more of a focus on technology rather than participatory processes.
  • The paper ends with a list of five useful research questions, at least four of which would have been well posed to, and probably by, PRA practitioners decades ago:
    • How do we provide incentives for broad-based feedback?
    • How do we know that feedback is representative of the entire population?
    • How do we combine the wisdom of the crowds with the broad perspective and experience of experts?
    • How do we ensure there are strong incentives for aid providers, governments, and implementing agencies to adopt and act on feedback mechanisms?
    • What is the relationship between effective feedback loops in aid and democratic governance?
  • It would be good if the author could include some reflection on how these recent developments improve on what was done in the past with participatory methods. Otherwise I will be inclined to feel the article actually reflects our lack of progress over the past decades.

The Mixed Methods Approach to Evaluation

Michael Bamberger, Social Impact Concept Note Series No.1, June 2013

Available as pdf

Executive summary
“Over the past decade there has been an increased demand for mixed-methods evaluations to better understand the complexity of international development interventions and in recognition of the fact that no single evaluation methodology can fully capture and measure the multiple processes and outcomes that every development program involves. At the same time, no consensus has been reached by policy makers and evaluation practitioners as to what exactly constitutes a mixed-methods approach.
This SI Concept Note aims at helping that discussion by defining mixed-methods as evaluation approaches that systematically integrate quantitative and qualitative research methodologies at all stages of an evaluation. The paper further discusses the most important strengths and weaknesses of mixed-methods approaches compared to quantitative-only and qualitative-only evaluations, and lists a number of implementation challenges and ways to address them that may be useful to both producers and consumers of performance and impact evaluations.”


Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB)

“The ILAC Initiative of the CGIAR has been working in partnership with the CGIAR Research Program on Roots, Tubers and Bananas (RTB) on a study that mapped the RTB research network.

The study aimed to design and test a monitoring system to characterize the research networks through which research program activities are conducted. This information is an important tool for the adaptive management of the CGIAR Research Programs and a complement to the CGIAR management system. With a few adaptations, the monitoring system can be useful for a wide range of organizations, including donors, development agencies and NGOs.

The next activity of the RTB-ILAC partnership will be the development of procedures to monitor how the research networks change over time.

ILAC has produced a full report of the study, and also a Brief with more condensed information.

• Full report: Ekboir, J., Canto, G.B. and Sette, C. (2013) Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB). Series on Monitoring Research Networks No. 01. Rome: Institutional Learning and Change (ILAC) Initiative.

• Brief: Ekboir, J., Canto, G.B. and Sette, C. (2013) Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB). ILAC Brief No. 27. Rome: Institutional Learning and Change (ILAC) Initiative.”
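Purely as a hypothetical illustration of what characterising a research network can involve computationally (this is not the procedure used in the ILAC study; the edges and the "NARS-1" and "NGO-A" organisations below are invented, while CIP, Bioversity, IITA and CIAT are real RTB partner centres used only as node names), a monitoring system of this kind might recompute a few composition and structure statistics at successive points in time:

# Hypothetical sketch - invented collaboration data, not the ILAC study's method.
import networkx as nx
from collections import Counter

# Each edge links two organisations assumed to collaborate in an RTB activity.
G = nx.Graph()
G.add_edges_from([
    ("CIP", "NARS-1"), ("CIP", "Bioversity"), ("Bioversity", "NGO-A"),
    ("IITA", "NARS-1"), ("IITA", "NGO-A"), ("CIAT", "CIP"),
])
org_type = {"CIP": "CGIAR", "Bioversity": "CGIAR", "IITA": "CGIAR",
            "CIAT": "CGIAR", "NARS-1": "national", "NGO-A": "NGO"}

# Composition: which kinds of organisation take part in the network.
print(Counter(org_type[n] for n in G.nodes))

# Structure: summary statistics that could be tracked between monitoring
# rounds to see how the network evolves.
print(f"density = {nx.density(G):.2f}")
print("degree centrality =", nx.degree_centrality(G))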
