A Review of Umbrella Fund Evaluation – Focusing on Challenge Funds

This is a Specialist Evaluation and Quality Assurance Service – Service Request Report, authored by Lydia Richardson with David Smith and Charlotte Blundy of TripleLine, in October 2015. A pdf copy is available

As this report points out, Challenge Funds are a common means of funding development aid projects, but they have not received the evaluation attention they deserve. In this TripleLine study the authors collated information on 56 such funds. “One of the key findings was that of the 56 funds, only 11 (19.6%) had a document entitled ‘impact assessment’; of these 7 have been published. Looking through these, only one (Chars Livelihood Programme) appears to be close to DFID’s definition of impact evaluation, although this programme is not considered to be a true challenge fund according to the definition outlined in the introduction. The others assess impact but do not necessarily fit DFID’s 2015 definition of impact evaluation”

Also noted later in the text: “An email request for information on evaluation of challenge funds was sent to fund and evaluation managers. This resulted in just two responses from 11 different organisations. This verifies the finding that there is very little evaluation of challenge funds available in the public domain” … “Evaluation was in most cases not incorporated into the fund’s design”.

“This brief report focuses on the extent to which challenge funds are evaluable. It unpacks definitions of the core terms used and provides some analysis and guidance for those commissioning evaluations. The guidance is also relevant for those involved in designing and managing challenge funds to make them more evaluable”

Contents:
1. Introduction
2. Methods used
2.1 Limitations of the review
3. Summary of findings of the scoping phase
3.1 Understanding evaluability
3.2 Typology for DFID Evaluations
4. Understanding the challenge fund aid modality
4.1 Understanding the roles and responsibilities in the challenge fund model
4.2 Understanding the audiences and purpose of the evaluation
4.3 Aligning the design of the evaluation to the design of the challenge fund
5. What evaluation questions are applicable?
5.1 Relevance
5.2 Efficiency
5.3 Effectiveness
5.4 Impact
5.5 Sustainability
6. The rigour and appropriateness of challenge fund evaluations
6.1 The use of theory of change
6.2 Is a theory based evaluation relevant and possible?
6.3 Measuring the counterfactual and assessing attribution
6.4 The evaluation process and institutional arrangements
6.5 Multi-donor funds
6.6 Who is involved?
7. How data can be aggregated
8. Working in fragile and conflict affected states
9. Trends
10. Gaps
11. Conclusions

Rick Davies Comment: While projects funded by Challenge Funds are often evaluated, sometimes as a requirement of their funding, it seems that the project selection and funding process itself is not given the same level of scrutiny. By this I mean the process whereby candidate projects are screened, assessed and then chosen, or not, for funding. This process involves consideration of multiple criteria, including adherence to legal requirements, strategic alignment and grantees’ capacity to implement activities and achieve objectives. This is where the Challenge Fund Theory of Change actually gets operationalised. It should be possible to test the (tacit and explicit) theory being implemented at this point by gathering data on the subsequent performance of the funded projects. There should be some form of consistent association between attributes of highly rated project proposals (versus lowly rated proposals) and the scale of their achievements when implemented. If there is not, then it suggests that the proposal screening process is not delivering value and that random choice would be cheaper and just as effective. One experience I have had of this kind of analysis was not very encouraging. We could not find any form of consistent association between project attributes noted during project selection and the scale of subsequent achievement. But perhaps with more comprehensive recording of data collected at the project assessment stage the analysis might have delivered more encouraging results…
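
To make the suggested test concrete, here is a minimal sketch (my illustration, not drawn from the TripleLine report) of checking whether appraisal scores given at selection are associated with a later measure of achievement. The file and column names are hypothetical; the point is simply that, if the screening theory is working, the rank correlation should be clearly positive.

```python
# A minimal sketch, assuming a CSV with one row per funded project and two
# hypothetical columns: 'appraisal_score' (rating at selection) and
# 'achievement' (a later measure of the scale of results).
import pandas as pd
from scipy.stats import spearmanr

projects = pd.read_csv("funded_projects.csv")  # hypothetical file

# Rank correlation between selection ratings and subsequent achievement.
rho, p_value = spearmanr(projects["appraisal_score"], projects["achievement"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# A rho near zero would suggest the screening process adds little predictive
# value over random selection; a clearly positive rho supports the tacit
# theory of change behind the appraisal criteria.
```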

PS: This report was done using 14 person days, which is a tight budget given the time needed to collate data, let alone analyse it. A good report, especially considering these time constraints.

USAID: A GUIDE TO THE MODIFIED BASIC NECESSITIES SURVEY: WHY AND HOW TO CONDUCT BNS IN CONSERVATION LANDSCAPES

Published June 2015. The principal authors of this guide are Dr. David Wilkie, Dr. Michelle Wieland and Diane Detoeuf of WCS, with thanks to Dr. Rick Davies for many useful discussions and comments about adding the value of owned assets to the BNS (the modification). Available as pdf

“This manual is offered as a practical guide to implementing the Basic Necessities Survey (BNS) that was originally developed by Rick Davies (http://mande.co.uk/special-issues/the-basic-necessities-survey/), and was recently modified and then field tested by WCS. The modified Basic Necessities Survey is imperfect, in that it does not attempt to answer all questions that could be asked about the impact of conservation (or development) actions on people’s well-being. But it is the perfect core to a livelihoods monitoring program, because it provides essential information about people’s well-being from their perspective over time, and implementing a modified BNS is easy enough that it does not preclude gathering additional household information that a conservation project feels they need to adaptively manage their activities”

“This technical manual was developed to offer conservation practitioners with limited budgets and staff a simple, practical, low-cost, quantitative approach to measuring and tracking trends in people’s well-being, and to link these measures where possible to the use and conservation of natural resources.”

“This approach is not based on the assumption that people are doing well if they make more than 1-2 dollars per day, or are in poverty if they make less. Rather, it is based on the understanding that people themselves are best able to decide what constitutes well-being. The approach is based on a United Nations definition of poverty as a lack of basic necessities. More specifically the approach asks communities to define what goods and services are necessary for a family to meet their basic needs. Examples of goods include material items such as: an axe, mobile phone, bed, or cook-stove. Services can include: access to clean drinking water within 15 minutes’ walk, reasonable walking distance to health care, children attending school, women participating in community decision making, or absence of domestic violence, etc. Families who do not own or have access to this basket of goods and services are, by community definition, not meeting a basic, minimum standard of well-being and are thus, according to the community-defined standard, poor (i.e., living below the community-defined poverty line).”

Rick Davies comment: It has been gratifying to see WCS pick up on the value of the BNS and make its potential more widely known via this USAID publication. I would like to highlight two other potentially useful modifications/uses of the BNS. One is how to establish a community-defined poverty line within the distribution of BNS scores collected in a given community, thus enabling a “head count” measure of poverty. This is described on pages 31-37 of this 2007 report for the Ford Foundation in Vietnam. The other is how to extract from BNS data a simple prediction rule that succinctly summarises which survey responses best predict the overall poverty status of a given household. That method is described in this June 2013 issue of the EES Connections newsletter (pages 12-14).
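
For readers curious about the mechanics, the sketch below shows a simplified version of BNS scoring: items endorsed as necessities by more than 50% of respondents are weighted by that proportion, and each household’s score is its weighted share of those necessities. This follows the general logic of the BNS materials rather than reproducing any published code, and the file and column names are hypothetical.

```python
# A simplified sketch of BNS scoring (not taken from the manual itself).
# Assumes a survey table with one row per household and two 0/1 columns per
# item: whether the household says the item is a necessity ('nec_' columns)
# and whether the household has/does it ('has_' columns).
import pandas as pd

df = pd.read_csv("bns_survey.csv")  # hypothetical file and column names

items = [c[len("nec_"):] for c in df.columns if c.startswith("nec_")]

# An item counts as a basic necessity if more than 50% of respondents say so.
necessities = [i for i in items if df[f"nec_{i}"].mean() > 0.5]

# Each necessity is weighted by the proportion of respondents endorsing it.
weights = {i: df[f"nec_{i}"].mean() for i in necessities}

# Household BNS score = weighted share of the necessities the household has.
max_score = sum(weights.values())
df["bns_score"] = sum(df[f"has_{i}"] * w for i, w in weights.items()) / max_score

print(df["bns_score"].describe())
```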

 

Power calculation for causal inference in social science: Sample size and minimum detectable effect determination

Eric W Djimeu, Deo-Gracias Houndolo, 3ie Working Paper 26, March 2016. Available as pdf

Contents
1. Introduction
2. Basic statistics concepts: statistical logic
3. Power calculation: concept and applications
3.1. Parameters required to run power calculations
3.2. Statistical power and sample size determination
3.3. How to run power calculation: single treatment or multiple treatments?
4. Rules of thumb for power calculation
5. Common pitfalls in power calculation
6. Power calculations in the presence of multiple outcome variables
7. Experimental design
7.1. Individual-level randomisation
7.2. Cluster-level randomisation

1. Introduction

Since the 1990s, researchers have increasingly used experimental and quasi-experimental
primary studies – collectively known as impact evaluations – to measure the effects of
interventions, programmes and policies in low- and middle-income countries. However, we are
not always able to learn as much from these studies as we would like. One common problem is
when evaluation studies use sample sizes that are inappropriate for detecting whether
meaningful effects have occurred or not. To overcome this problem, it is necessary to conduct
power analysis during the study design phase to determine the sample size required to detect
the effects of interest. Two main concerns support the need to perform power calculations in
social science and international development impact evaluations: sample sizes can be too small
and sample sizes can be too large.

In the first case, power calculation helps to avoid the consequences of having a sample that is
too small to detect the smallest magnitude of interest in the outcome variable. Having a sample
size smaller than statistically required increases the likelihood of researchers concluding that
the evaluated intervention has no impact when the intervention does, indeed, cause a significant
change relative to a counterfactual scenario. Such a finding might wrongly lead policymakers to
cancel a development programme, or make counterproductive or even harmful changes in
public policies. Given this risk, it is not acceptable to conclude that an intervention has no
impact when the sample size used for the research is not sufficient to detect a meaningful
difference between the treatment group and the control group.

In the second case, evaluation researchers must be good stewards of resources. Data collection is expensive and any extra unit of observation comes at a cost. Therefore, for cost-efficiency and value for money it is important to ensure that an evaluation research design does not use a larger sample size than is required to detect the minimum detectable effect (MDE) of interest. Researchers and funders should therefore use power calculations to determine the appropriate budget for an impact evaluation study.

Sample size determination and power calculation can be challenging, even for researchers
aware of the problems of small sample sizes and insufficient power. 3ie developed this resource
to help researchers with their search for the optimal sample size required to detect an MDE in
the interventions they evaluate.

The manual provides straightforward guidance and explains the process of performing power
calculations in different situations. To do so, it draws extensively on existing materials to
calculate statistical power for individual and cluster randomised controlled trials. More
specifically, this manual relies on Hayes and Bennett (1999) for cluster randomised controlled
trials and documentation from Optimal Design software version 3.0 for individual randomised
controlled trials.
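
As a rough illustration of the kind of calculation the manual walks through (this sketch is mine, not from the manual, which points readers to Optimal Design and the Hayes and Bennett formulas), the snippet below uses the statsmodels library to find the sample size per arm needed to detect an assumed minimum detectable effect in an individually randomised trial, then inflates it with the standard design effect for cluster randomisation. The MDE, power, significance level, cluster size and intra-cluster correlation values are illustrative assumptions only.

```python
# Minimal sketch: sample size for an individual-level RCT, then a cluster
# adjustment via the design effect. Parameter values are illustrative only.
from statsmodels.stats.power import TTestIndPower

mde = 0.25        # minimum detectable effect, in standard deviation units
alpha = 0.05      # significance level (two-sided)
power = 0.80      # desired statistical power

# Sample size per arm for a two-arm, individually randomised design.
n_per_arm = TTestIndPower().solve_power(effect_size=mde, alpha=alpha,
                                        power=power, ratio=1.0,
                                        alternative="two-sided")
print(f"Individual randomisation: ~{n_per_arm:.0f} units per arm")

# For cluster randomisation, inflate by the design effect
# DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC is the
# intra-cluster correlation (both assumed here).
m, icc = 20, 0.05
deff = 1 + (m - 1) * icc
n_cluster_per_arm = n_per_arm * deff
print(f"Cluster randomisation (m={m}, ICC={icc}): "
      f"~{n_cluster_per_arm:.0f} units per arm, "
      f"i.e. about {n_cluster_per_arm / m:.0f} clusters per arm")
```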

Evaluating the impact of flexible development interventions

ODI Methods Lab report, March 2016, by Rick Davies. Available as pdf

“Evaluating the impact of projects that aim to be flexible and responsive is a challenge. One of the criteria for good impact evaluation is rigour – which, broadly translated, means having a transparent, defensible and replicable process of data collection and analysis. And its debatable apotheosis is the use of randomised control trials (RCTs). Using RCTs requires careful management throughout the planning, implementation and evaluation cycle of a development intervention. However, these requirements for control are the antithesis of what is needed for responsive and adaptive programming. Less demanding and more common alternatives to RCTs are theory-led evaluations using mixed methods. But these can also be problematic because ideally a good theory contains testable hypotheses about what will happen, which are defined in advance.

Is there a middle way, between relying on pre-defined testable theories of change and abandoning any hope altogether that they can cope with the open-ended nature of development?

Drawing on experiences of the Australia-Mekong NGO Engagement Platform and borrowing from the data-centred approaches of the commercial sector, this paper argues that there is a useful role for ‘loose’ theories of change and that they can be evaluable”

Key messages:

• For some interventions, tight and testable theories of change are not appropriate – for example, in fast moving humanitarian emergencies or participatory development programmes, a more flexible approach is needed.

• However, it is still possible to have a flexible project design and to draw conclusions about causal attribution. This middle path involves ‘loose’ theories of change, where activities and outcomes may be known, but the likely causal links between them are not yet clear.

• In this approach, data is collected ‘after the event’ and analysed across and within cases, developing testable models for ‘what works’. More data will likely be needed than for projects with a ‘tight’ theory of change, as there is a wider range of relationships between interventions and outcomes to analyse. The theory of change still plays an important role, in guiding the selection of data types.

• While loose theories of change are useful to identify long term impacts, this approach can also support short cycle learning about the effectiveness of specific activities being implemented within a project’s lifespan

Learning about Analysing Networks to Support Development Work?

Simon Batchelor, IDS Practice Paper in Brief. July 2011. Available as pdf

“Introduction: Everyone seems to be talking about networks. Networks and the analysis of networks is now big business. However, in the development sector, analysis of networks remains weak.

This paper presents four cases where social network analysis (SNA) was used in a development programme. It focuses not so much on the organisational qualities of networks nor on the virtual networks facilitated by software, but on the analysis of connectivity in real world networks. Most of the cases are unintentional networks. What literature there is on network analysis within the development sector tends to focus on intentional networks and their quality. Our experience suggests there is considerable benefit to examining and understanding the linkages in unintentional networks, and this is a key part of this Practice Paper.

The four cases illustrate how social network analysis can

• Identify investments in training, and enable effective targeting of capacity building.

• Analyse a policy environment for linkages between people, and enable targeted interventions.

• Analyse an emerging policy environment, and stimulate linkages between different converging sectors.

• Look back on and understand the flow of ideas, thereby learning about enabling an environment for innovation.

These cases, while not directly from the intermediary sector, potentially inform our work with the intermediary sector.”
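
As a rough illustration of the kind of connectivity analysis described above (my sketch, not taken from the Practice Paper), the snippet below uses the networkx library to rank actors in an invented network of working relationships by betweenness centrality, the sort of measure that can suggest where targeted training or interventions might have most reach.

```python
# Minimal sketch: who is most central in a (hypothetical) real-world network
# of working relationships, as a guide to targeting training or interventions.
import networkx as nx

# Invented edge list: each pair is a reported working link between two actors.
links = [("NGO_A", "Ministry"), ("NGO_A", "NGO_B"), ("NGO_B", "Donor"),
         ("Ministry", "Donor"), ("NGO_C", "NGO_B"), ("NGO_C", "Researcher")]
G = nx.Graph(links)

# Betweenness centrality: how often an actor sits on paths between others,
# i.e. a possible broker or bottleneck in the flow of ideas and resources.
for name, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{name:12s} betweenness = {score:.2f}")
```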

 

Basic Field Guide to the Positive Deviance Approach

Tufts University, September 2010. 17 pages. Available as pdf

“This basic guide is to orient newcomers to the PD approach and provide the essential tools to get started. It includes a brief description of basic definitions, as well as the guiding principles, steps, and process characteristics. This guide also includes suggestions of when to use the PD approach, facilitation tips, and outlines possible challenges. These elements will help practitioners implement successful PD projects. Please use this guide as a resource to initiate the PD approach. Its brevity and simplicity are meant to invite curious and intrepid implementers who face complex problems requiring behavioral and social change. It is suitable for those who seek solutions that exist today in their community and enables
the practitioner to leverage those solutions for the benefit of all members of the community. PD is best understood through action and is most effective through practice.”

Rick Davies comment: I would be interested to see if anyone has tried to combine MSC with Positive Deviance approaches. MSC can be seen as a scanning process whereas PD seems to involve more in-depth inquiry, and one can imagine that combining both could be especially fruitful.

PS1: Positive Deviants can be found within an existing data set by using predictive modeling to find attributes which are good predictors of the outcome(s) being absent, then examining the False Positives – which will be cases where the outcome occurred despite the contrary conditions.
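A rough sketch of that idea (my illustration, not from the guide): train a simple classifier to predict the absence of the outcome from case attributes, then list the cases it gets wrong in the false positive direction, i.e. those predicted to lack the outcome but which achieved it anyway. These are candidate positive deviants worth in-depth follow-up. The data file and column names are hypothetical.

```python
# Minimal sketch: flag candidate positive deviants as false positives of a
# model trained to predict the ABSENCE of the outcome. Hypothetical data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cases.csv")          # one row per case/household
X = df.drop(columns=["outcome"])       # attribute columns (0/1 or numeric)
y_absent = 1 - df["outcome"]           # 1 = outcome absent, 0 = outcome present

model = LogisticRegression(max_iter=1000).fit(X, y_absent)
predicted_absent = model.predict(X) == 1

# False positives: predicted absent, but the outcome actually occurred.
positive_deviants = df[predicted_absent & (df["outcome"] == 1)]
print(positive_deviants)
```
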

PS2: Whenever you have a great new idea it’s always worth checking to see who else has already been there and done that :-) So, lo and behold, I have just found that others have already been exploring the overlap between prediction modeling (aka predictive analytics) and Positive Deviance. See: Big Data with a Personal Touch: The Convergence of Predictive Analytics and Positive Deviance

More generally, for more information about Positive Deviance as a method of inquiry see:

Participatory Video and the Most Significant Change: a guide for facilitators

by Sara Asadullah & Soledad Muñiz, 2015. Available as pdf via this webpage

“The toolkit is designed to support you in planning and carrying out evaluation using PV with the MSC technique, or PVMSC for short. This is a participatory approach to monitoring, evaluation and learning that amplifies the voices of participants and helps organisations to better understand and improve their programmes”

Rick Davies comment: ‘The advice on handling what can be quite emotional moments when people tell stories that matter to them is well said, and is often not covered in text or training introductions to MSC. The advice on taking care with editing video records of MSC stories is also good, addressing an issue that has always niggled me.’

Contents

INTRODUCTION
PREFACE
GUIDE TO USING THE TOOLKIT
PART ONE:

What is Participatory Monitoring and Evaluation?
What is Participatory Video?
Participatory Video for Monitoring & Evaluation
The Most Significant Change
Participatory Video and the Most Significant Change
PVMSC Process: step-by-step
Additional effects of PVMSC
What’s in a story?
What’s in a video?
What’s in a participatory process?
Case Study: Tell it Again: cycles of reflection
Q&A of operational considerations

KEY STAGES IN PVMSC

Stage 1: Planning and Preparation
Stage 2: Collection, selection and videoing of stories
Case Study: Using Grounded Theory
Stage 3: Participatory editing
Stage 4: Screenings and selection of stories
Stage 5: Participatory analysis and video report
Stage 6: Dissemination
Case Study: From Messenger of War to Peace Messenger
Learning vs. communicating
Facilitation
Choosing an appropriate facilitator
A Local Evaluation Team
Case Study: Using a Local Evaluation Team

PART TWO: TOOLS

Facilitator Guidelines
Case Study: Peer-to-peer evaluation
Consider key things that can go WRONG:
Case Study: Telling sensitive stories

STORY CIRCLE
STORY SELECTION

How to select?
When selection is difficult
Case Study: Stories of violence
How to film safely?

PREPARING THE STORYTELLER
FILMING STORIES OF CHANGE
Case Study: The transformative effect
FILMING EXTRA FOOTAGE
INFORMED CONSENT
PARTICIPATORY EDITING
Dissemination
Case Study: For internal use only
SCREENING & SELECTION OF STORIES

How to divide your audience into groups?
Case Study: Targeted screening events

PARTICIPATORY ANALYSIS

Case Study: Unexpected results
What is Beneficiary Feedback?
Making a video report

VIDEO REPORT

Games & Exercises
PV Games for PVMSC
Selected PVMSC exercises
Selected Participatory Editing Exercises
Selected Screening Exercises
Selected Participatory Analysis Exercises
Selected Video Report Exercises
Energisers
Equipment List

GLOSSARY
RESOURCES

Key Reading
Key Watching
Resources for Facilitators
Theory and Other Examples of Participatory Practice
Theory and Other Examples of Participatory Practice 104

Qualitative Comparative Analysis: A Valuable Approach to Add to the Evaluator’s ‘Toolbox’? Lessons from Recent Applications

Schatz, F. and Welle, K., CDI Practice Paper 13, IDS.
Available as pdf.

[From IDS website] “A heightened focus on demonstrating development results has increased the stakes for evaluating impact (Stern 2015), while the more complex objectives and designs of international aid programmes make it ever more challenging to attribute effects to a particular intervention (Befani, Barnett and Stern 2014).

Qualitative Comparative Analysis (QCA) is part of a new generation of approaches that go beyond the standard counterfactual logic in assessing causality and impact. Based on the lessons from three diverse applications of QCA, this CDI Practice Paper by Florian Schatz and Katharina Welle reflects on the potential of this approach for the impact evaluation toolbox.”

Rick Davies comment: QCA is one part of a wider family of methods that can be labelled as “configurational”. See my video on “Evaluating ‘loose’ Theories of Change” for an outline of the other methods of analysis that fall into the same category. I think they are an important set of alternative methods for three reasons:

(a) they can be applied “after the fact”, if the relevant data is available. They do not require the careful setting up and monitoring that is characteristic of methods such as randomised control trials,

(b) they can use categorical (i.e. nominal) data, not just variable data.

(c) configurational methods are especially suitable for dealing with “complexity” because of the view of causality that underpins them, one that has some correspondence with the complexity of the world we see around us (a rough code sketch illustrating these ideas follows the list below). Configurational methods:

  • see causes as involving both single and multiple (i.e. conjunctural) causal conditions
  • see outcomes as potentially the result of more than one type of conjuncture (or configuration) of conditions at work. This feature is also known as equifinality
  • see causes as being of different types: sufficient, necessary, both, or neither
  • see causes as being asymmetric: the causes of an outcome not occurring may be different from simply the absence of the causes of the outcome
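
To make the sufficiency idea concrete, here is a minimal crisp-set sketch of the consistency and coverage measures used in QCA, applied to one hypothetical configuration (condition A present AND condition B absent). This is my illustration, not code from the Practice Paper, and the case data and condition names are invented.

```python
# Minimal crisp-set QCA-style sketch: consistency and coverage of one
# hypothetical configuration (A present AND B absent) for an outcome.
import pandas as pd

cases = pd.DataFrame({
    "A":       [1, 1, 1, 0, 0, 1, 1, 0],
    "B":       [0, 0, 1, 0, 1, 0, 0, 1],
    "outcome": [1, 1, 0, 0, 0, 1, 0, 0],
})

config = (cases["A"] == 1) & (cases["B"] == 0)   # the configuration A*~B

# Consistency: of the cases showing this configuration, how many show the outcome?
consistency = cases.loc[config, "outcome"].mean()

# Coverage: of the cases showing the outcome, how many have this configuration?
coverage = config[cases["outcome"] == 1].mean()

print(f"Consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```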

IFAD Evaluation manual (2nd ed.)

“The [Dec 2015] Evaluation Manual contains the core methodology that the Independent Office of Evaluation of IFAD (IOE) uses to conduct its evaluations. It has been developed based on the principles set out in the IFAD Evaluation Policy, building on international good evaluation standards and practice.

This second edition incorporates new international evaluative trends and draws from IOE’s experience in implementing the first edition. The Manual also takes into account IFAD’s new strategic priorities and operating model – which have clear implications for evaluation methods and processes – and adopts more rigorous methodological approaches, for example by promoting better impact assessment techniques and by designing and using theories of change.

The Evaluation Manual’s primary function is to ensure consistency, rigour and transparency across independent evaluations, and enhance IOE’s effectiveness and quality of work. It serves to guide staff and consultants engaged in evaluation work at IOE and it is a reference document for other IFAD staff and development partners (such as project management staff and executing agencies of IFAD-supported operations), especially in recipient countries, on how evaluation of development programmes in the agriculture and rural development sector is conducted in IFAD.

The revision of this Manual was undertaken in recognition of the dynamic environment in which IFAD operates, and in response to the evolution in the approaches and methodologies of international development evaluation. It will help ensure that IFAD’s methodological practice remains at the cutting edge.

The Manual has been prepared through a process of engagement with multiple internal and external feedback opportunities from various stakeholders, including peer institutions (African Development Bank, Asian Development Bank, Food and Agriculture Organization of the United Nations, Institute of Development Studies [University of Sussex], Swiss Agency for Development and Cooperation and the World Bank). It was also reviewed by a high-level panel of experts.

Additionally, this second edition contains the core methodology for evaluations that were not contemplated in the first edition, such as corporate-level evaluations, impact evaluations and evaluation synthesis reports.

The manual is available in Arabic, English, French and Spanish to facilitate its use in all regions where IFAD has operations.”

A visual introduction to machine learning


This website explains very clearly, using good visualisations, how a Decision Tree algorithm can make useful predictions about how different attributes of a case, such as a project, relate to the presence or absence of an outcome of interest. Decision tree models are a good alternative to the use of QCA, in that the results are easily communicable and the learning curve is not so steep. For more information, see my blog “Rick on the Road”, where I have written a number of posts on the use of Decision Trees.
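
For readers who want to try this on their own project data, here is a minimal sketch using scikit-learn (my illustration, not code from the website above). It fits a shallow decision tree relating case attributes to the presence or absence of an outcome and prints the resulting rules; the file and column names are hypothetical.

```python
# Minimal sketch: fit and display a small decision tree relating case
# attributes to an outcome. File and column names are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("project_cases.csv")   # one row per project/case
X = df.drop(columns=["outcome"])        # attribute columns
y = df["outcome"]                       # 1 = outcome present, 0 = absent

# A shallow tree keeps the resulting rules easy to communicate.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(export_text(tree, feature_names=list(X.columns)))
print(f"Accuracy on the cases used to build the tree: {tree.score(X, y):.2f}")
```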
