How to do a rigorous, evidence-focused literature review in international development

 

A Guidance Note, by
Jessica Hagen-Zanker and Richard Mallett
ODI Working Paper, September 2013
Available as pdf

Abstract: Building on previous reflections on the utility of systematic reviews in international development research, this paper describes an approach to carrying out a literature review that adheres to some of the core principles of ‘full’ systematic reviews, but that also contains space within the process for innovation and reflexivity. We discuss all stages of the review process, but pay particular attention to the retrieval phase, which, we argue, should consist of three interrelated tracks important for navigating difficult ‘information architecture’. We end by clarifying what it is in particular that sets this approach apart from fuller systematic reviews, as well as with some broader thoughts on the nature of ‘the literature review’ within international development and the social sciences more generally. The paper should thus be seen as sitting somewhere between a practical toolkit for those wishing to undertake a rigorous, evidence-focused review and a series of reflections on the role, purpose and application of literature reviews in policy research.

Data set: 1986 Surveys of disadvantaged areas and groups in Mogadishu

Why: This post is expected to be the beginning of a series of posts that will make a number of data sets that I have accumulated over the years publicly available, and thus open to wider analysis and use. It would be useful if this data set could find a home within a website that specialises in Open Data for development purposes and which includes data generated by NGOs, not just big organisations such as the World Bank.

Background on this data set

In 1986 Oxfam UK, CCFD France and UNICEF funded BOCD Somalia (now known as Progressio) to carry out an extensive survey of disadvantaged areas and groups within Mogadishu, Somalia.

The findings of that survey and related research were documented in a report published in 1988, which is now available as a pdf.

The survey results were digitised in 1986 and have been preserved since then. Although the original software package that was used (Trajectories) is no longer in public use (and was never very easy to use), the original data files can be read by Excel and, with some effort, re-formatted to make them easy to analyse.

The process of converting the 1986 files into usable Excel files is now underway. Progress will be reported here.
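The conversion described above can be sketched in outline. The following is a minimal, hypothetical example of the kind of clean-up step involved: the delimiter, field names and sample values below are assumptions for illustration only, not taken from the actual Trajectories files.

```python
# Hypothetical sketch: tidying a legacy delimited export into a clean,
# Excel-ready table. Delimiter, column labels and values are illustrative
# assumptions, not details of the original 1986 Trajectories files.
import csv
import io

def clean_rows(raw_text, delimiter=";"):
    """Read a delimited legacy export, strip cell padding, drop blank records."""
    reader = csv.reader(io.StringIO(raw_text), delimiter=delimiter)
    rows = []
    for row in reader:
        cells = [c.strip() for c in row]
        if any(cells):  # skip fully blank records
            rows.append(cells)
    return rows

# A small inline sample standing in for a legacy file
sample = "HH_ID; DISTRICT; HH_SIZE\n001 ; Waaberi ; 6\n\n002; Waaberi ;4\n"
table = clean_rows(sample)
header, records = table[0], table[1:]
```

A table cleaned this way can then be pasted or written into an Excel workbook alongside the question codes and supporting documentation described below.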

  • Waaberi. This district, shown here on Google Maps, was chosen as a comparison group, based on the middle-income status of its households, as identified by a prior household survey. The data set contains information on 36 fields for 218 households. The sampling process is described on page 363 of the above report. The Waaberi Excel file contains the following information:
    • Raw data
    • Questions used and answer codes. The original Somali-language survey form, used in the six area-focused surveys, is available here in pdf format
    • References to previous analysis
    • Terms and Conditions
    • Citation and attribution requirements
  • Gubadley. This was a squatter area to the NE of the city, now shown as Degmada Heliwaa on Google Maps. The data set contains information on 82 fields for 218 households. The sampling process is described on page 363 of the above report. The Gubadley Excel file contains the same supporting information as in the Waaberi file above.
  • Yet to come
    • Areas
      • Yaqshiid (Heegan obbosibo)
      • Karaan (Beesha Shukri obbosibo)
      • Wadajir (Damme Yasin obbosibo)
      • Wadajir (Halane obbosibo)
      • Cabdul Casiis
    • Groups
      • Market women
      • Street children
      • Working children
      • Beggars
      • Prostitutes

Inquiries about these data sets should be directed to rick.davies@gmail.com


How should we understand “clinical equipoise” when doing RCTs in development?

World Bank Blogs

 Submitted by David McKenzie on 2013/09/02

While the blog was on break over the last month, a couple of posts caught my attention, discussing whether it is ethical to run experiments on programs that we think we know will make people better off. First up, Paul Farmer on the Lancet Global Health blog writes:

“What happens when people who previously did not have access are provided with the kind of health care that most of The Lancet’s readership takes for granted? Not very surprisingly, health outcomes are improved: fewer children die when they are vaccinated against preventable diseases; HIV-infected patients survive longer when they are treated with antiretroviral therapy (ART); maternal deaths decline when prenatal care is linked to caesarean sections and anti-haemorrhagic agents to address obstructed labour and its complications; and fewer malaria deaths occur, and drug-resistant strains are slower to emerge, when potent anti-malarials are used in combination rather than as monotherapy.

It has long been the case that randomized clinical trials have been held up as the gold standard of clinical research… This kind of study can only be carried out ethically if the intervention being assessed is in equipoise, meaning that the medical community is in genuine doubt about its clinical merits. It is troubling, then, that clinical trials have so dominated outcomes research when observational studies of interventions like those cited above, which are clearly not in equipoise, are discredited to the point that they are difficult to publish.”

This was followed by a post by Eric Djimeu on the 3ie blog, asking what else development economics should be learning from clinical trials.

Impact evaluation of natural resource management research programs: a broader view

 

by John Mayne and Elliot Stern
ACIAR IMPACT ASSESSMENT SERIES 84, 2013
Available as pdf

Foreword

Natural resource management research (NRMR) has a key role in improving food security and reducing poverty and malnutrition. NRMR programs seek to modify natural systems in a sustainable way in order to benefit the lives of those who live and work within these natural systems—especially in rural communities in the developing world.

Evaluating the effectiveness of NRMR through the usual avenues of impact evaluation has posed distinct challenges. Many impact assessments focus on estimating net economic benefits from a project or program, and often are aimed at providing evidence to investors that their funds have been well spent. They have tended to focus on a specific causal evaluation issue: to what extent can a specific (net) impact be attributed to the intervention?

While many evaluations of NRMR programs and their projects will continue to use an impact assessment perspective, this report lays out a complementary approach to NRMR program evaluation. The approach focuses more on helping NRMR managers and stakeholders to learn about their interventions and to understand why and how outcomes and impacts have been realised (or, in some cases, have not). Thus, a key aim here is to position NRMR impact evaluation as a learning process undertaken to improve the delivery and effectiveness of NRMR programs by developing a new framework for thinking about and designing useful and practical evaluations.

The emphasis on learning follows from the view of NRMR as operating under dynamic, emergent, complex and often unpredictable human and ecological conditions. In such a setting, adaptive management informed by careful responses to new information and understanding is essential for building and managing more-effective programs and interventions. This is highlighted by examining some specific examples: the CGIAR Research Program on Aquatic Agricultural Systems (led by Worldfish), CGIAR’s Ganges Basin Development Challenge, and CSIRO–AusAID’s African Food Security Initiative.

The alternative approach presented here is another tool to use in the search for understanding of how and why impacts occur in a research, development and extension environment. We hope that the learning-orientated evaluation described will help elucidate more soundly based explanations that will guide researchers in replicating, scaling up and improving future programs.

The Impact and Effectiveness of Transparency and Accountability Initiatives

Development Policy Review, July 2013. Special open access issue
Volume 31, Issue Supplement. Pages 3–124

  1. The Impact of Transparency and Accountability Initiatives (pages s3–s28) John Gaventa and Rosemary McGee

  2. Do They Work? Assessing the Impact of Transparency and Accountability Initiatives in Service Delivery (pages s29–s48) Anuradha Joshi

  3. Improving Transparency and Accountability in the Budget Process: An Assessment of Recent Initiatives (pages s49–s67) Ruth Carlitz

  4. The Impact and Effectiveness of Transparency and Accountability Initiatives: Freedom of Information (pages s69–s87) Richard Calland and Kristina Bentley

  5. The Impact and Effectiveness of Accountability and Transparency Initiatives: The Governance of Natural Resources (pages s89–s105) Andrés Mejía Acosta

  6. Aid Transparency and Accountability: ‘Build It and They’ll Come’? (pages s107–s124) Rosemary McGee

How Feedback Loops Can Improve Aid (and Maybe Governance)

Center for Global Development Essay (available as pdf)
Dennis Whittle
August 2013

Abstract
“If private markets can produce the iPhone, why can’t aid organizations create and implement development initiatives that are equally innovative and sought after by people around the world? The key difference is feedback loops. Well-functioning private markets excel at providing consumers with a constantly improving stream of high-quality products and services. Why? Because consumers give companies constant feedback on what they like and what they don’t. Companies that listen to their consumers by modifying existing products and launching new ones have a chance of increasing their revenues and profits; companies that don’t are at risk of going out of business. Is it possible to create analogous mechanisms that require aid organizations to listen to what regular citizens want—and then act on what they hear?

This essay provides a set of principles that aid practitioners can use to design feedback loops with a higher probability of success.”

Rick Davies comment: A few quotes that interested me, within a paper that was interesting as a whole:

  • “Anyone who has managed aid projects realizes that there is a huge number of design and implementation parameters—and that it is maddeningly difficult to know which of these makes the difference between success and failure. In the preparation phase, we tend to give a lot of weight to the salience of certain factors, such as eligibility criteria, prices, technical features, and so on. But during implementation, we realize that a thousand different factors affect outcomes—the personality of the project director, internal dynamics within the project team, political changes in the local administration, how well the project is explained to local people, and even bad weather can have major effects.” This presents major challenges to any effort to transfer the findings of an impact evaluation to other contexts – aka the problem of limited external validity.
  • “The good news is that recent technological breakthroughs are enabling us to dramatically increase our ability to find out what people like the Indonesian rubber farmer really want—and whether they are getting it.” Groundhog Day? I suspect the same optimistic thoughts went through the minds of early developers and users of PRA (participatory rural appraisal) in the 1980s and early 1990s :-) The same themes of experts versus the people, but this time with more of a focus on technology rather than participatory processes.
  • The paper ends with a list of five useful research questions, at least four of which would have been well posed to, and probably by, PRA practitioners decades ago:
    • How do we provide incentives for broad-based feedback?
    • How do we know that feedback is representative of the entire population?
    • How do we combine the wisdom of the crowds with the broad perspective and experience of experts?
    • How do we ensure there are strong incentives for aid providers, governments, and implementing agencies to adopt and act on feedback mechanisms?
    • What is the relationship between effective feedback loops in aid and democratic governance?
  • It would be good if the author could include some reflection on how these recent developments improve on what was done in the past with participatory methods. Otherwise I will be inclined to feel the article actually reflects our lack of progress over the past decades.

INTRAC M&E Workshop: Practical Responses to Current Monitoring and Evaluation Debates

A one-day workshop for M&E practitioners, civil society organisations and development agencies to debate and share their experiences. There is increased pressure on NGOs to improve their M&E systems, and often to move out of their methodological comfort zone to meet new requirements from donors and stakeholders. This event will examine the challenges faced by NGOs and their responses around four themes:

  • Designing and using baselines for complex programmes
  • Using Information and Communications Technology (ICT) in M&E
  • Experimental and quasi-experimental methods in M&E, including randomised control trials
  • M&E of advocacy

Download the overview paper.

Call for M&E case studies

We are looking for short case studies focusing on one or more of the four event themes (see above). The case studies will be shared and will form the basis of the discussions at the workshop.

*Deadline for abstracts (max. 500 words): Friday 13 September 2013*

Please email abstracts to research@intrac.org

Event bookings

Event cost: £80 (£60 early bird booking before 19 October 2013)

Please return the booking form to zwilkinson@intrac.org

 

The Mixed Methods Approach to Evaluation

Michael Bamberger, Social Impact Concept Note Series No.1, June 2013

Available as pdf

Executive summary
“Over the past decade there has been an increased demand for mixed-methods evaluations to better understand the complexity of international development interventions and in recognition of the fact that no single evaluation methodology can fully capture and measure the multiple processes and outcomes that every development program involves. At the same time, no consensus has been reached by policy makers and evaluation practitioners as to what exactly constitutes a mixed-methods approach.
This SI Concept Note aims at helping that discussion by defining mixed-methods as evaluation approaches that systematically integrate quantitative and qualitative research methodologies at all stages of an evaluation. The paper further discusses the most important strengths and weaknesses of mixed-methods approaches compared to quantitative and qualitative only evaluations and lists a number of implementation challenges and ways to address them that may be useful to both producers and consumers of performance and impact evaluations.”

 

Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB)

“The ILAC Initiative of the CGIAR has been working in partnership with the CGIAR Research Program on Roots, Tubers and Bananas (RTB) on a study that mapped RTB research network.

The study aimed to design and test a monitoring system to characterize the research networks through which research program activities are conducted. This information is an important tool for the adaptive management of the CGIAR Research Programs and a complement to the CGIAR management system. With few adaptations, the monitoring system can be useful for a wide range of organizations, including donors, development agencies and NGOs.

The next activity of the RTB – ILAC partnership will be the development of procedures to monitor how the research networks change over time.

ILAC has produced a full report of the study, and also a Brief, with more condensed information.

  • Full report: Ekboir, J., Canto, G.B. and Sette, C. (2013) Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB). Series on Monitoring Research Networks No. 01. Rome, Institutional Learning and Change (ILAC) Initiative

  • Brief: Ekboir, J., Canto, G.B. and Sette, C. (2013) Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB). ILAC Brief No. 27. Rome, Institutional Learning and Change (ILAC) Initiative”

Participatory Impact Assessment (1998)

Participatory Impact Assessment: A Report on a DFID Funded ActionAid Research Project on Methods and Indicators for Measuring the Impact of Poverty Reduction. Goyder, Hugh, Rick Davies, Winkie Williamson, and ActionAid (Organization). 1998. ActionAid.

This does not seem to be available online anywhere, so I have scanned a copy to make it available as a pdf.

It is posted here as an item of historical interest, not as a reflection of what could or should be cutting edge practice in 2013!

BEWARE: IT IS A VERY LARGE FILE

Contents pages:

This 1998 report was based on a set of documents produced by Rick Davies and Winkie Williamson, following their field visits to Bangladesh, India, Ghana and Uganda. The two main summary documents were:
  • Summary Report: Methods and Indicators for Measuring the Impact of Poverty Reduction: An ODA Funded ActionAid Research Project. Rick Davies and Winkie Williamson, 1997

The four country specific reports are available on request
