USAID: A GUIDE TO THE MODIFIED BASIC NECESSITIES SURVEY WHY AND HOW TO CONDUCT BNS IN CONSERVATION LANDSCAPES

Posted on 1 May, 2016 – 11:03 AM

Published June 2015. The principal authors of this guide are Dr. David Wilkie, Dr. Michelle Wieland and Diane Detoeuf of WCS, with thanks to Dr. Rick Davies for many useful discussions and comments about adding the value of owned assets to the BNS (the modification). Available as pdf

“This manual is offered as a practical guide to implementing the Basic Necessities Survey (BNS) that was originally developed by Rick Davies (http://mande.co.uk/special-issues/the-basic-necessities-survey/), and was recently modified and then field tested by WCS. The modified Basic Necessities Survey is imperfect, in that it does not attempt to answer all questions that could be asked about the impact of conservation (or development) actions on people’s well-being. But it is the perfect core to a livelihoods monitoring program, because it provides essential information about people’s well-being from their perspective over time, and implementing a modified BNS is easy enough that it does not preclude gathering additional household information that a conservation project feels they need to adaptively manage their activities”

“This technical manual was developed to offer conservation practitioners with limited budgets and staff a simple, practical, low-cost, quantitative approach to measuring and tracking trends in people’s well-being, and to link these measures where possible to the use and conservation of natural resources.”

“This approach is not based on the assumption that people are doing well if they make more than 1-2 dollars per day, or are in poverty if they make less. Rather, it is based on the understanding that people themselves are best able to decide what constitutes well-being. The approach is based on a United Nations definition of poverty as a lack of basic necessities. More specifically, the approach asks communities to define what goods and services are necessary for a family to meet their basic needs. Examples of goods include material items such as an axe, mobile phone, bed, or cook-stove. Services can include: access to clean drinking water within 15 minutes’ walk, reasonable walking distance to health care, children attending school, women participating in community decision making, or absence of domestic violence, etc. Families who do not own or have access to this basket of goods and services are, by community definition, not meeting a basic, minimum standard of well-being and are thus poor (i.e., living below the community-defined poverty line).”

Rick Davies comment: It has been gratifying to see WCS pick up on the value of the BNS and make its potential more widely known via this USAID publication. I would like to highlight two other potentially useful modifications/uses of the BNS. One is how to establish a community-defined poverty line within the distribution of BNS scores collected in a given community, thus enabling a “head count” measure of poverty. This is described on pages 31-37 of this 2007 report for the Ford Foundation in Vietnam. The other is how to extract from BNS data a simple prediction rule that succinctly summarises which survey responses best predict the overall poverty status of a given household. That method is described in this June 2013 issue of the EES Connections newsletter (pages 12-14).
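For readers new to the BNS, the scoring logic itself is simple enough to sketch in a few lines. The following is a hypothetical Python illustration of my understanding of the standard BNS scoring approach (an item counts as a basic necessity if more than 50% of respondents say it is one, and is weighted by that proportion); the item names and figures are invented, not taken from the guide.

```python
# Hypothetical sketch of BNS scoring (items and figures invented).
# An item counts as a basic necessity if >50% of respondents say it is
# one; its weight is the proportion who say so. A household's score is
# the weighted percentage of those necessities it owns or has access to.

necessity_votes = {"bed": 0.90, "mobile phone": 0.60, "axe": 0.55, "radio": 0.40}

# Keep only items a majority define as necessities; weight = vote share
weights = {item: v for item, v in necessity_votes.items() if v > 0.5}

def bns_score(household_items):
    """Weighted % of community-defined necessities the household has."""
    total = sum(weights.values())
    owned = sum(w for item, w in weights.items() if item in household_items)
    return 100 * owned / total

print(round(bns_score({"bed", "axe"}), 1))  # 70.7 with these invented weights
```

A head-count poverty measure of the kind described in the 2007 Vietnam report then follows by counting the households whose scores fall below the community-defined poverty line.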

 


Power calculation for causal inference in social science: Sample size and minimum detectable effect determination

Posted on 9 April, 2016 – 11:18 PM

Eric W Djimeu, Deo-Gracias Houndolo, 3ie Working Paper 26, March 2016. Available as pdf

Contents
1. Introduction
2. Basic statistics concepts: statistical logic
3. Power calculation: concept and applications
3.1. Parameters required to run power calculations
3.2. Statistical power and sample size determination
3.3. How to run power calculation: single treatment or multiple treatments?
4. Rules of thumb for power calculation
5. Common pitfalls in power calculation
6. Power calculations in the presence of multiple outcome variables
7. Experimental design
7.1. Individual-level randomisation
7.2. Cluster-level randomisation

1. Introduction

Since the 1990s, researchers have increasingly used experimental and quasi-experimental
primary studies – collectively known as impact evaluations – to measure the effects of
interventions, programmes and policies in low- and middle-income countries. However, we are
not always able to learn as much from these studies as we would like. One common problem is
when evaluation studies use sample sizes that are inappropriate for detecting whether
meaningful effects have occurred or not. To overcome this problem, it is necessary to conduct
power analysis during the study design phase to determine the sample size required to detect
the effects of interest. Two main concerns support the need to perform power calculations in
social science and international development impact evaluations: sample sizes can be too small
and sample sizes can be too large.

In the first case, power calculation helps to avoid the consequences of having a sample that is
too small to detect the smallest magnitude of interest in the outcome variable. Having a sample
size smaller than statistically required increases the likelihood of researchers concluding that
the evaluated intervention has no impact when the intervention does, indeed, cause a significant
change relative to a counterfactual scenario. Such a finding might wrongly lead policymakers to
cancel a development programme, or make counterproductive or even harmful changes in
public policies. Given this risk, it is not acceptable to conclude that an intervention has no
impact when the sample size used for the research is not sufficient to detect a meaningful
difference between the treatment group and the control group.

In the second case, evaluation researchers must be good stewards of resources. Data
collection is expensive and any extra unit of observation comes at a cost. Therefore, for cost-efficiency and value-for-money it is important to ensure that an evaluation research design does
not use a larger sample size than is required to detect the minimum detectable effect (MDE)
of interest. Researchers and funders should therefore use power calculations to determine the
appropriate budget for an impact evaluation study.

Sample size determination and power calculation can be challenging, even for researchers
aware of the problems of small sample sizes and insufficient power. 3ie developed this resource
to help researchers with their search for the optimal sample size required to detect an MDE in
the interventions they evaluate.

The manual provides straightforward guidance and explains the process of performing power
calculations in different situations. To do so, it draws extensively on existing materials to
calculate statistical power for individual and cluster randomised controlled trials. More
specifically, this manual relies on Hayes and Bennett (1999) for cluster randomised controlled
trials and documentation from Optimal Design software version 3.0 for individual randomised
controlled trials.
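By way of illustration (my addition, not an excerpt from the 3ie manual), the core sample-size arithmetic for a two-arm comparison of means, together with a Hayes and Bennett style design-effect adjustment for cluster randomisation, can be sketched as follows; the MDE, standard deviation, cluster size and ICC values are all invented.

```python
from scipy.stats import norm

def n_per_arm(mde, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sided test of a difference in means:
    n = 2 * sd^2 * (z_(1-alpha/2) + z_power)^2 / mde^2"""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * sd ** 2 * (z_alpha + z_beta) ** 2 / mde ** 2

def cluster_adjusted(n, cluster_size, icc):
    """Inflate an individually randomised n by the design effect
    1 + (m - 1) * ICC used for cluster randomised designs."""
    return n * (1 + (cluster_size - 1) * icc)

n = n_per_arm(mde=0.25, sd=1.0)              # detect a 0.25 SD effect
print(round(n))                              # ~251 individuals per arm
print(round(cluster_adjusted(n, 20, 0.05)))  # ~490 with clusters of 20, ICC = 0.05
```

Note how even modest clustering roughly doubles the required sample in this invented example, which is one reason the manual treats cluster randomised trials separately.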


Evaluating the impact of flexible development interventions

Posted on 31 March, 2016 – 4:48 AM

ODI Methods Lab report, March 2016. Rick Davies. Available as pdf

“Evaluating the impact of projects that aim to be flexible and responsive is a challenge. One of the criteria for good impact evaluation is rigour – which, broadly translated, means having a transparent, defensible and replicable process of data collection and analysis. And its debatable apotheosis is the use of randomised control trials (RCTs). Using RCTs requires careful management throughout the planning, implementation and evaluation cycle of a development intervention. However, these requirements for control are the antithesis of what is needed for responsive and adaptive programming. Less demanding and more common alternatives to RCTs are theory-led evaluations using mixed methods. But these can also be problematic because ideally a good theory contains testable hypotheses about what will happen, which are defined in advance.

Is there a middle way, between relying on pre-defined testable theories of change and abandoning any hope altogether that they can cope with the open-ended nature of development?

Drawing on experiences of the Australia-Mekong NGO Engagement Platform and borrowing from the data-centred approaches of the commercial sector, this paper argues that there is a useful role for ‘loose’ theories of change and that they can be evaluable”

Key messages:

• For some interventions – for example, in fast-moving humanitarian emergencies or participatory development programmes – tight and testable theories of change are not appropriate, and a more flexible approach is needed.

• However, it is still possible to have a flexible project design and to draw conclusions about causal attribution. This middle path involves ‘loose’ theories of change, where activities and outcomes may be known, but the likely causal links between them are not yet clear.

• In this approach, data is collected ‘after the event’ and analysed across and within cases, developing testable models for ‘what works’ (see the sketch after this list). More data will likely be needed than for projects with a ‘tight’ theory of change, as there is a wider range of relationships between interventions and outcomes to analyse. The theory of change still plays an important role in guiding the selection of data types.

• While loose theories of change are useful to identify long-term impacts, this approach can also support short-cycle learning about the effectiveness of specific activities being implemented within a project’s lifespan.
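The report itself does not prescribe software, but as a hypothetical sketch of what ‘after the event’ across-case analysis can look like, one might tabulate how often each activity co-occurs with the outcome across cases and treat strong associations as candidate models for subsequent within-case testing; the case data below are invented.

```python
import pandas as pd

# Invented case records: which activities each case received, and the outcome
cases = pd.DataFrame({
    "training": [1, 1, 0, 1, 0, 0],
    "grant":    [1, 0, 1, 1, 0, 1],
    "outcome":  [1, 1, 0, 1, 0, 0],
})

# For each activity, the share of cases receiving it that achieved the
# outcome -- a crude consistency score for a candidate causal claim
for activity in ["training", "grant"]:
    consistency = cases.loc[cases[activity] == 1, "outcome"].mean()
    print(activity, round(consistency, 2))  # training 1.0, grant 0.5

# High-consistency candidates ("what works") still need within-case checking.
```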


Learning about Analysing Networks to Support Development Work?

Posted on 1 March, 2016 – 11:27 PM

Simon Batchelor, IDS Practice Paper in Brief. July 2011. Available as pdf

“Introduction: Everyone seems to be talking about networks. Networks and the analysis of networks are now big business. However, in the development sector, analysis of networks remains weak.

This paper presents four cases where social network analysis (SNA) was used in a development programme. It focuses not so much on the organisational qualities of networks nor on the virtual networks facilitated by software, but on the analysis of connectivity in real world networks. Most of the cases are unintentional networks. What literature there is on network analysis within the development sector tends to focus on intentional networks and their quality. Our experience suggests there is considerable benefit to examining and understanding the linkages in unintentional networks, and this is a key part of this Practice Paper.

The four cases illustrate how social network analysis can

• Identify investments in training, and enable effective targeting of capacity building.

• Analyse a policy environment for linkages between people, and enable targeted interventions.

• Analyse an emerging policy environment, and stimulate linkages between different converging sectors.

• Look back on and understand the flow of ideas, thereby learning about enabling an environment for innovation.

These cases, while not directly from the intermediary sector, potentially inform our work with the intermediary sector.
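The paper does not tie SNA to any particular software, but as a hypothetical sketch, the kind of connectivity analysis it describes can be done in a few lines with Python’s networkx library; the names and links below are invented.

```python
import networkx as nx

# Invented who-talks-to-whom links, e.g. from interview mapping
edges = [("Ana", "Ben"), ("Ben", "Cai"), ("Cai", "Dev"),
         ("Ben", "Dev"), ("Dev", "Eve"), ("Eve", "Fay")]
G = nx.Graph(edges)

# Degree centrality: who has the most direct links (candidates for
# targeted training and capacity building)
print(nx.degree_centrality(G))

# Betweenness centrality: who brokers between otherwise disconnected
# parts of the network (candidates for targeted interventions)
print(nx.betweenness_centrality(G))
```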

 


Basic Field Guide to the Positive Deviance Approach

Posted on 9 February, 2016 – 3:53 AM

Tufts University, September 2010. 17 pages Available as pdf

“This basic guide is to orient newcomers to the PD approach and provide the essential tools to get started. It includes a brief description of basic definitions, as well as the guiding principles, steps, and process characteristics. This guide also includes suggestions of when to use the PD approach, facilitation tips, and outlines possible challenges. These elements will help practitioners implement successful PD projects. Please use this guide as a resource to initiate the PD approach. Its brevity and simplicity are meant to invite curious and intrepid implementers who face complex problems requiring behavioral and social change. It is suitable for those who seek solutions that exist today in their community and enables
the practitioner to leverage those solutions for the benefit of all members of the community. PD is best understood through action and is most effective through practice.”

Rick Davies comment: I would be interested to see if anyone has tried to combine MSC with Positive Deviance approaches. MSC can be seen as a scanning process whereas PD seems to involve more in-depth inquiry, and one can imagine that combining both could be especially fruitful.

PS1: Positive Deviants can be found within an existing data set by using predictive modeling to find attributes which are good predictors of the outcome(s) being absent, then examining the False Positives – which will be cases where the outcome occurred despite the contrary conditions.
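As a hypothetical sketch of the procedure just described (not from the Tufts guide), one could fit a classifier with scikit-learn to predict the outcome being absent and then list its false positives; all data and column names below are invented.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Invented household data; outcome_absent = 1 means the good outcome
# (e.g. child well-nourished) did NOT occur
df = pd.DataFrame({
    "income_low":     [1, 1, 1, 1, 1, 0, 0, 0],
    "remote":         [1, 1, 0, 0, 0, 0, 1, 0],
    "outcome_absent": [1, 1, 0, 1, 1, 0, 0, 0],
})
X, y = df[["income_low", "remote"]], df["outcome_absent"]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
pred = model.predict(X)

# False positives: predicted "outcome absent" but the outcome actually
# occurred -- candidate positive deviants for in-depth inquiry
print(df[(pred == 1) & (y == 0)])
```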

PS2: Whenever you have a great new idea it’s always worth checking to see who else has already been there and done that :-) So, lo and behold, I have just found that others have already been exploring the overlap between prediction modeling (aka predictive analytics) and Positive Deviance. See: Big Data with a Personal Touch: The Convergence of Predictive Analytics and Positive Deviance

More generally, for more information about Positive Deviance as a method of inquiry see:


Participatory Video and the Most Significant Change: a guide for facilitators

Posted on 9 February, 2016 – 3:34 AM

by Sara Asadullah & Soledad Muñiz, 2015. Available as pdf via this webpage

“The toolkit is designed to support you in planning and carrying out evaluation using PV with the MSC technique, or PVMSC for short. This is a participatory approach to monitoring, evaluation and learning that amplifies the voices of participants and helps organisations to better understand and improve their programmes”

Rick Davies comment: ‘The advice on handling what can be quite emotional moments when people tell stories that matter to them is well said, and is often not covered in text or training introductions to MSC. The advice on taking care with editing video records of MSC stories is also good, addressing an issue that has always niggled me.’

Contents

INTRODUCTION
PREFACE
GUIDE TO USING THE TOOLKIT
PART ONE:

What is Participatory Monitoring and Evaluation?
What is Participatory Video?
Participatory Video for Monitoring & Evaluation
The Most Significant Change
Participatory Video and the Most Significant Change
PVMSC Process: step-by-step
Additional effects of PVMSC
What’s in a story?
What’s in a video?
What’s in a participatory process?
Case Study: Tell it Again: cycles of reflection
Q&A of operational considerations

KEY STAGES IN PVMSC

Stage 1: Planning and Preparation
Stage 2: Collection, selection and videoing of stories
Case Study: Using Grounded Theory
Stage 3: Participatory editing
Stage 4: Screenings and selection of stories
Stage 5: Participatory analysis and video report
Stage 6: Dissemination
Case Study: From Messenger of War to Peace Messenger
Learning vs. communicating
Facilitation
Choosing an appropriate facilitator
A Local Evaluation Team
Case Study: Using a Local Evaluation Team

PART TWO: TOOLS

Facilitator Guidelines
Case Study: Peer-to-peer evaluation
Consider key things that can go WRONG:
Case Study: Telling sensitive stories

STORY CIRCLE
STORY SELECTION

How to select?
When selection is difficult
Case Study: Stories of violence
How to film safely?

PREPARING THE STORYTELLER
FILMING STORIES OF CHANGE
Case Study: The transformative effect
FILMING EXTRA FOOTAGE
INFORMED CONSENT
PARTICIPATORY EDITING
Dissemination
Case Study: For internal use only
SCREENING & SELECTION OF STORIES

How to divide your audience into groups?
Case Study: Targeted screening events

PARTICIPATORY ANALYSIS

Case Study: Unexpected results
What is Beneficiary Feedback?
Making a video report

VIDEO REPORT

Games & Exercises
PV Games for PVMSC
Selected PVMSC exercises
Selected Participatory Editing Exercises
Selected Screening Exercises
Selected Participatory Analysis Exercises
Selected Video Report Exercises
Energisers
Equipment List

GLOSSARY
RESOURCES

Key Reading
Key Watching
Resources for Facilitators
Theory and Other Examples of Participatory Practice


Qualitative Comparative Analysis: A Valuable Approach to Add to the Evaluator’s ‘Toolbox’? Lessons from Recent Applications

Posted on 8 February, 2016 – 12:09 PM
Schatz, F. and Welle, K., CDI Practice Paper 13, IDS. Available as pdf.

[From IDS website] “A heightened focus on demonstrating development results has increased the stakes for evaluating impact (Stern 2015), while the more complex objectives and designs of international aid programmes make it ever more challenging to attribute effects to a particular intervention (Befani, Barnett and Stern 2014).

Qualitative Comparative Analysis (QCA) is part of a new generation of approaches that go beyond the standard counterfactual logic in assessing causality and impact. Based on the lessons from three diverse applications of QCA, this CDI Practice Paper by Florian Schatz and Katharina Welle reflects on the potential of this approach for the impact evaluation toolbox.”

Rick Davies comment: QCA is one part of a wider family of methods that can be labelled as “configurational”. See my video on “Evaluating ‘loose’ Theories of Change” for an outline of the other methods of analysis that fall into the same category. I think they are an important set of alternative methods for three reasons:

(a) they can be applied “after the fact”, if the relevant data is available. They do not require the careful setting up and monitoring that is characteristic of methods such as randomised control trials,

(b) they can use categorical (i.e. nominal) data, not just continuous data.

(c) configurational methods are especially suitable for dealing with “complexity” because the view of causality underlying them has some correspondence with the complexity of the world we see around us. Configurational methods (illustrated in the sketch after the list below):

  • see causes as involving both single and multiple (i.e. conjunctural) causal conditions
  • see outcomes as potentially the result of more than one type of conjuncture (/configuration) of conditions at work. This feature is also known as equifinality
  • see causes as being of different types: Sufficient, Necessary, both and neither
  • see causes as being asymmetric: the causes of an outcome not occurring may be different from simply the absence of the causes of the outcome
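As a minimal illustration of these points (mine, not from the paper), a QCA-style truth table can be built from crisp-set case data with pandas; the cases below are invented, and are set up to show equifinality, with two different configurations each sufficient for the outcome.

```python
import pandas as pd

# Invented crisp-set data: conditions A and B, outcome O, one row per case
cases = pd.DataFrame({
    "A": [1, 1, 0, 0, 1, 0],
    "B": [0, 0, 1, 1, 1, 0],
    "O": [1, 1, 1, 1, 1, 0],
})

# Truth table: for each configuration of conditions, the number of cases
# and the consistency with which the outcome occurs
truth_table = (cases.groupby(["A", "B"])["O"]
                    .agg(n="size", consistency="mean")
                    .reset_index())
print(truth_table)
# A without B, and B without A, are each sufficient for O here
# (consistency 1.0): equifinality -- more than one path to the outcome.
```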


IFAD Evaluation manual (2nd ed.)

Posted on 24 December, 2015 – 7:00 PM

“The [Dec 2015] Evaluation Manual contains the core methodology that the Independent Office of Evaluation of IFAD (IOE) uses to conduct its evaluations. It has been developed based on the principles set out in the IFAD Evaluation Policy, building on international good evaluation standards and practice.

This second edition incorporates new international evaluative trends and draws from IOE’s experience in implementing the first edition. The Manual also takes into account IFAD’s new strategic priorities and operating model – which have clear implications for evaluation methods and processes – and adopts more rigorous methodological approaches, for example by promoting better impact assessment techniques and by designing and using theories of change.

The Evaluation Manual’s primary function is to ensure consistency, rigour and transparency across independent evaluations, and to enhance IOE’s effectiveness and quality of work. It serves to guide staff and consultants engaged in evaluation work at IOE, and it is a reference document for other IFAD staff and development partners (such as project management staff and executing agencies of IFAD-supported operations), especially in recipient countries, on how evaluation of development programmes in the agriculture and rural development sector is conducted in IFAD.

The revision of this Manual was undertaken in recognition of the dynamic environment in which IFAD operates, and in response to the evolution in the approaches and methodologies of international development evaluation. It will help ensure that IFAD’s methodological practice remains at the cutting edge.

The Manual has been prepared through a process of engagement with multiple internal and external feedback opportunities from various stakeholders, including peer institutions (African Development Bank, Asian Development Bank, Food and Agriculture Organization of the United Nations, Institute of Development Studies [University of Sussex], Swiss Agency for Development and Cooperation and the World Bank). It was also reviewed by a high-level panel of experts.

Additionally, this second edition contains the core methodology for evaluations that were not contemplated in the first edition, such as corporate-level evaluations, impact evaluations and evaluation synthesis reports.

The manual is available in Arabic, English, French and Spanish to facilitate its use in all regions where IFAD has operations.”


A visual introduction to machine learning

Posted on 24 December, 2015 – 6:38 PM

A Visual Introduction to Machine Learning

This website explains very clearly, using good visualisations, how a Decision Tree algorithm can make useful predictions about how different attributes of a case, such as a project, relate to the presence or absence of an outcome of interest. Decision tree models are a good alternative to the use of QCA, in that the results are easily communicable and the learning curve is not so steep. For more information, see my blog “Rick on the Road”, where I have written a number of posts on the use of Decision Trees.
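As a small illustration of why the results are easily communicable (my sketch, not taken from the linked site), a fitted scikit-learn tree can be printed directly as if/then rules; the attribute names and data are invented.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented project attributes (local_partner, prior_funding) and outcome
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 1]]
y = [1, 0, 1, 0, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the fitted tree as nested if/then rules -- the
# easily communicable output referred to above
print(export_text(tree, feature_names=["local_partner", "prior_funding"]))
```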


Hivos ToC Guidelines: THEORY OF CHANGE THINKING IN PRACTICE – A stepwise approach

Posted on 4 December, 2015 – 10:35 AM


Marjan van Es (Hivos), Irene Guijt, Isabel Vogel, 2015. Available as pdf
PART A – CONCEPTS AND DEFINITION
1 Introduction
1.1 Hivos and Theory of Change
1.2 Origin of the guidelines
1.3 Use of the guidelines
2 Theory of Change
2.1 What are Theories of Change? What is a ToC approach?
2.2 Why a Theory of Change approach?
2.3 Core components of a ToC process and product
2.4 Theories of Change at different levels
2.5 Using ToC thinking for different purposes
3 Key features of a ToC process
3.1 From complexity to focus and back
3.2 Making assumptions explicit
3.3 The importance of visualisation
4 Quality of ToC practice
4.1 Principles of ToC practice
4.2 Power at play
4.3 Gender (in)equality
PART B – A STEPWISE APPROACH
5 Developing Theories of Change – eight steps
Introduction
• Step 1 – Clarify the purpose of the ToC process
• Step 2 – Describe the desired change
• Step 3 – Analyse the current situation
• Step 4 – Identify domains of change
• Step 5 – Identify strategic priorities
• Step 6 – Map pathways of change
• Step 7 – Define monitoring, evaluation and learning priorities and process
• Step 8 – Use and adaptation of a ToC
6 ToC as a product
7 Quality Audit of a ToC process and product
PART C – RESOURCES AND TOOLS
8 Key tools, resources and materials
8.1 Tools referred to in these guidelines
• Rich Picture
• Four Dimensions of Change
• Celebrating success
• Stakeholder and Actor Analysis
• Power Analysis
• Gender Analysis
• Framings
• Behaviour change
• Ritual dissent
• Three Spheres: Control, Influence, Interest
• Necessary & Sufficient
• Indicator selection
• Visualisations of a ToC process and product
8.2 Other resources
8.3 Facilitation

Rick Davies comment: I have not had a chance to read the whole document, but I would suggest changes to the section on page 109 titled “Necessary & Sufficient”.

A branch of a Theory of Change (in a tree shaped version) or a pathway (in a network version) can represent a sequence of events that is either:
  • Necessary and Sufficient to achieve the outcome. This is probably unlikely in most cases. If it was, there would be no need for any other branches/pathways
  • Necessary but Insufficient. In other words, events in the other branches were also necessary. In this case the ToC is quite demanding in its requirements before outcomes can be achieved. An evaluation would only have to find one of these branches not working to find the ToC not working
  • Sufficient but Unnecessary. In other words the outcome can be achieved via this branch or via the other branches. This is a less demanding ToC and more difficult to disprove. Each of the branches which was expected to be Sufficient would need to be tested

Because of these different interpretations and their consequences, we should expect a ToC to state clearly the status of each branch in terms of its Necessity and/or Sufficiency.
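To make these distinctions concrete, here is a hypothetical Python sketch of how each branch’s claimed status could be tested against case data: a branch is treated as Necessary if the outcome never occurs without it, and Sufficient if the outcome always occurs when it works. The branch names and records are invented.

```python
# Invented case records: whether each ToC branch "worked" and whether
# the outcome was achieved
cases = [
    {"branch_a": 1, "branch_b": 1, "outcome": 1},
    {"branch_a": 1, "branch_b": 0, "outcome": 1},
    {"branch_a": 0, "branch_b": 1, "outcome": 1},
    {"branch_a": 0, "branch_b": 0, "outcome": 0},
]

def status(branch):
    # Necessary: the outcome never occurs without the branch
    necessary = all(c[branch] for c in cases if c["outcome"])
    # Sufficient: the outcome always occurs when the branch works
    sufficient = all(c["outcome"] for c in cases if c[branch])
    return necessary, sufficient

for b in ["branch_a", "branch_b"]:
    print(b, status(b))  # both: (False, True) -- Sufficient but Unnecessary
```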
