Evaluating the impact of flexible development interventions

Posted on 31 March, 2016 – 4:48 AM

ODI Methods Lab report, March 2016. Rick Davies. Available as pdf

“Evaluating the impact of projects that aim to be flexible and responsive is a challenge. One of the criteria for good impact evaluation is rigour – which, broadly translated, means having a transparent, defensible and replicable process of data collection and analysis. And its debatable apotheosis is the use of randomised control trials (RCTs). Using RCTs requires careful management throughout the planning, implementation and evaluation cycle of a development intervention. However, these requirements for control are the antithesis of what is needed for responsive and adaptive programming. Less demanding and more common alternatives to RCTs are theory-led evaluations using mixed methods. But these can also be problematic because ideally a good theory contains testable hypotheses about what will happen, which are defined in advance.

Is there a middle way between relying on pre-defined testable theories of change and abandoning altogether any hope that they can cope with the open-ended nature of development?

Drawing on experiences of the Australia-Mekong NGO Engagement Platform and borrowing from the data-centred approaches of the commercial sector, this paper argues that there is a useful role for ‘loose’ theories of change and that they can be evaluable”

Key messages:

• For some interventions, tight and testable theories of change are not appropriate – for example, in fast moving humanitarian emergencies or participatory development programmes, a more flexible approach is needed.

• However, it is still possible to have a flexible project design and to draw conclusions about causal attribution. This middle path involves ‘loose’ theories of change, where activities and outcomes may be known, but the likely causal links between them are not yet clear.

• In this approach, data is collected ‘after the event’ and analysed across and within cases, developing testable models for ‘what works’. More data will likely be needed than for projects with a ‘tight’ theory of change, as there is a wider range of relationships between interventions and outcomes to analyse. The theory of change still plays an important role, in guiding the selection of data types.

• While loose theories of change are useful to identify long-term impacts, this approach can also support short-cycle learning about the effectiveness of specific activities being implemented within a project’s lifespan.


Learning about Analysing Networks to Support Development Work?

Posted on 1 March, 2016 – 11:27 PM

Simon Batchelor, IDS Practice Paper in Brief. July 2011. Available as pdf

“Introduction: Everyone seems to be talking about networks. Networks and the analysis of networks are now big business. However, in the development sector, analysis of networks remains weak.

This paper presents four cases where social network analysis (SNA) was used in a development programme. It focuses not so much on the organisational qualities of networks nor on the virtual networks facilitated by software, but on the analysis of connectivity in real world networks. Most of the cases are unintentional networks. What literature there is on network analysis within the development sector tends to focus on intentional networks and their quality. Our experience suggests there is considerable benefit to examining and understanding the linkages in unintentional networks, and this is a key part of this Practice Paper.

The four cases illustrate how social network analysis can

• Identify investments in training, and enable effective targeting of capacity building.

• Analyse a policy environment for linkages between people, and enable targeted interventions.

• Analyse an emerging policy environment, and stimulate linkages between different converging sectors.

• Look back on and understand the flow of ideas, thereby learning about enabling an environment for innovation.

These cases, while not directly from the intermediary sector, potentially inform our work with the intermediary sector.



Basic Field Guide to the Positive Deviance Approach

Posted on 9 February, 2016 – 3:53 AM

Tufts University, September 2010. 17 pages. Available as pdf

“This basic guide is to orient newcomers to the PD approach and provide the essential tools to get started. It includes a brief description of basic definitions, as well as the guiding principles, steps, and process characteristics. This guide also includes suggestions of when to use the PD approach, facilitation tips, and outlines possible challenges. These elements will help practitioners implement successful PD projects. Please use this guide as a resource to initiate the PD approach. Its brevity and simplicity are meant to invite curious and intrepid implementers who face complex problems requiring behavioral and social change. It is suitable for those who seek solutions that exist today in their community and enables the practitioner to leverage those solutions for the benefit of all members of the community. PD is best understood through action and is most effective through practice.”

Rick Davies comment: I would be interested to see if anyone has tried to combine MSC with Positive Deviance approaches. MSC can be seen as a scanning process whereas PD seems to involve more in-depth inquiry, and one can imagine that combining both could be especially fruitful.

PS1: Positive Deviants can be found within an existing data set by using predictive modeling to find attributes which are good predictors of the outcome(s) being absent, then examining the False Positives – which will be cases where the outcome occurred despite the contrary conditions.
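The PS1 workflow can be sketched in a few lines of Python. The cases, attribute names and the simple one-attribute "predictor" below are all invented for illustration; in practice the model would be something like a decision tree fitted to a real data set.

```python
# Illustrative sketch: learn a trivial one-attribute rule that best predicts
# the outcome being ABSENT, then list the "false positives" of that rule:
# cases it expects to fail but which in fact succeeded. These are candidate
# positive deviants worth in-depth follow-up. All data are invented.

cases = [
    {"id": 1, "remote": True,  "trained_staff": False, "outcome": False},
    {"id": 2, "remote": True,  "trained_staff": False, "outcome": False},
    {"id": 3, "remote": True,  "trained_staff": False, "outcome": True},
    {"id": 4, "remote": False, "trained_staff": True,  "outcome": True},
    {"id": 5, "remote": False, "trained_staff": True,  "outcome": True},
    {"id": 6, "remote": False, "trained_staff": False, "outcome": False},
]

def best_absence_predictor(cases, attributes):
    """Pick the attribute-value pair that most accurately predicts outcome == False."""
    best = None
    for attr in attributes:
        for value in (True, False):
            hits = sum(1 for c in cases if (c[attr] == value) == (not c["outcome"]))
            if best is None or hits > best[0]:
                best = (hits, attr, value)
    return best[1], best[2]

attr, value = best_absence_predictor(cases, ["remote", "trained_staff"])

# False positives: the rule predicts the outcome is absent, yet it occurred.
positive_deviants = [c["id"] for c in cases if c[attr] == value and c["outcome"]]
print(attr, value, positive_deviants)
```

Here case 3 is flagged: the rule learned from the data expects the outcome to be absent there, yet it occurred, making it a positive deviant.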

PS2: Whenever you have a great new idea it’s always worth checking to see who else has already been there and done that :-) So, lo and behold, I have just found that others have already been exploring the overlap between prediction modeling (aka predictive analytics) and Positive Deviance. See: Big Data with a Personal Touch: The Convergence of Predictive Analytics and Positive Deviance

More generally, for more information about Positive Deviance as a method of inquiry see:


Participatory Video and the Most Significant Change: a guide for facilitators

Posted on 9 February, 2016 – 3:34 AM

by Sara Asadullah & Soledad Muñiz, 2015. Available as pdf via this webpage

“The toolkit is designed to support you in planning and carrying out evaluation using PV with the MSC technique, or PVMSC for short. This is a participatory approach to monitoring, evaluation and learning that amplifies the voices of participants and helps organisations to better understand and improve their programmes”

Rick Davies comment: ‘The advice on handling what can be quite emotional moments when people tell stories that matter to them is well said, and is often not covered in text or training introductions to MSC. The advice on taking care with editing video records of MSC stories is also good, addressing an issue that has always niggled me.’



What is Participatory Monitoring and Evaluation? 14
What is Participatory Video? 14
Participatory Video for Monitoring & Evaluation 15
The Most Significant Change 15
Participatory Video and the Most Significant Change 16
PVMSC Process: step-by-step 19
Additional effects of PVMSC 25
What’s in a story? 26
What’s in a video? 26
What’s in a participatory process? 27
Case Study: Tell it Again: cycles of reflection 29
Q&A of operational considerations 30


Stage 1: Planning and Preparation 32
Stage 2: Collection, selection and videoing of stories 34
Case Study: Using Grounded Theory 34
Stage 3: Participatory editing 35
Stage 4: Screenings and selection of stories 36
Stage 5: Participatory analysis and video report 37
Stage 6: Dissemination 38
Case Study: From Messenger of War to Peace Messenger 38
Learning vs. communicating 41
Facilitation 42
Choosing an appropriate facilitator 43
A Local Evaluation Team 45
Case Study: Using a Local Evaluation Team 47


Facilitator Guidelines 48
Case Study: Peer-to-peer evaluation 49
Consider key things that can go WRONG: 52
Case Study: Telling sensitive stories 55


How to select? 65
When selection is difficult 65
Case Study: Stories of violence 67
How to film safely? 68

Case Study: The transformative effect 73
Dissemination 80
Case Study: For internal use only 80

How to divide your audience into groups? 83
Case Study: Targeted screening events 84


Case Study: Unexpected results 88
What is Beneficiary Feedback? 90
Making a video report 90


Games & Exercises 92
PV Games for PVMSC 92
Selected PVMSC exercises 93
Selected Participatory Editing Exercises 95
Selected Screening Exercises 97
Selected Participatory Analysis Exercises 98
Selected Video Report Exercises 98
Energisers 99
Equipment List 101


Key Reading 103
Key Watching 103
Resources for Facilitators 104
Theory and Other Examples of Participatory Practice 104


Qualitative Comparative Analysis: A Valuable Approach to Add to the Evaluator’s ‘Toolbox’? Lessons from Recent Applications

Posted on 8 February, 2016 – 12:09 PM
Schatz, F. and Welle, K. CDI Practice Paper 13, published by IDS.
Available as pdf.

[From IDS website] “A heightened focus on demonstrating development results has increased the stakes for evaluating impact (Stern 2015), while the more complex objectives and designs of international aid programmes make it ever more challenging to attribute effects to a particular intervention (Befani, Barnett and Stern 2014).

Qualitative Comparative Analysis (QCA) is part of a new generation of approaches that go beyond the standard counterfactual logic in assessing causality and impact. Based on the lessons from three diverse applications of QCA, this CDI Practice Paper by Florian Schatz and Katharina Welle reflects on the potential of this approach for the impact evaluation toolbox.”

Rick Davies comment: QCA is one part of a wider family of methods that can be labelled “configurational”. See my video on “Evaluating ‘loose’ Theories of Change” for an outline of the other methods of analysis that fall into the same category. I think they are an important set of alternative methods for three reasons:

(a) they can be applied “after the fact”, if the relevant data is available. They do not require the careful setting up and monitoring that is characteristic of methods such as randomised control trials;

(b) they can use categorical (i.e. nominal) data, not just variable data.

(c) configurational methods are especially suitable for dealing with “complexity” because of their underlying view of causality, one that has some correspondence with the complexity of the world we see around us. Configurational methods:

  • see causes as involving both single and multiple (i.e. conjunctural) causal conditions
  • see outcomes as potentially the result of more than one type of conjuncture (configuration) of conditions at work. This feature is also known as equifinality
  • see causes as being of different types: Sufficient, Necessary, both, and neither
  • see causes as being asymmetric: the causes of an outcome not occurring may be different from simply the absence of the causes of the outcome
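As a minimal illustration of these points, here is a crisp-set, QCA-style truth table built "after the fact" from categorical case data. The condition names and cases are invented; the sketch surfaces two distinct sufficient configurations (equifinality) and shows that neither single condition is necessary.

```python
# Illustrative sketch of a crisp-set truth table: group invented cases by
# configuration of conditions, flag configurations sufficient for the
# outcome, and test each single condition for necessity.

from collections import defaultdict

cases = [
    {"participatory": True,  "funded": False, "outcome": True},
    {"participatory": True,  "funded": False, "outcome": True},
    {"participatory": False, "funded": True,  "outcome": True},
    {"participatory": False, "funded": False, "outcome": False},
]

# Truth table: configuration of conditions -> list of observed outcomes
table = defaultdict(list)
for c in cases:
    table[(c["participatory"], c["funded"])].append(c["outcome"])

# A configuration is sufficient if every case showing it shows the outcome.
sufficient = [cfg for cfg, outs in table.items() if all(outs)]

# A condition is necessary if it is present in every case with the outcome.
def necessary(cond):
    return all(c[cond] for c in cases if c["outcome"])

print(sufficient)  # two distinct sufficient configurations: equifinality
print(necessary("participatory"), necessary("funded"))  # neither is necessary
```

Real QCA goes further (logical minimisation, consistency and coverage scores), but the underlying comparison of configurations across cases is as above.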





IFAD Evaluation manual (2nd ed.)

Posted on 24 December, 2015 – 7:00 PM

“The [Dec 2015]  Evaluation Manual contains the core methodology that the Independent Office of Evaluation of IFAD (IOE) uses to conduct its evaluations. It has been developed based on the principles set out in the IFAD Evaluation Policy, building on international good evaluation standards and practice.

This second edition incorporates new international evaluative trends and draws from IOE’s experience in implementing the first edition. The Manual also takes into account IFAD’s new strategic priorities and operating model – which have clear implications for evaluation methods and processes – and adopts more rigorous methodological approaches, for example by promoting better impact assessment techniques and by designing and using theories of change.

The Evaluation Manual’s primary function is to ensure consistency, rigour and transparency across independent evaluations, and to enhance IOE’s effectiveness and quality of work. It serves to guide staff and consultants engaged in evaluation work at IOE, and it is a reference document for other IFAD staff and development partners (such as project management staff and executing agencies of IFAD-supported operations), especially in recipient countries, on how evaluation of development programmes in the agriculture and rural development sector is conducted in IFAD.

The revision of this Manual was undertaken in recognition of the dynamic environment in which IFAD operates, and in response to the evolution in the approaches and methodologies of international development evaluation. It will help ensure that IFAD’s methodological practice remains at the cutting edge.

The Manual has been prepared through a process of engagement with multiple internal and external feedback opportunities from various stakeholders, including peer institutions (African Development Bank, Asian Development Bank, Food and Agriculture Organization of the United Nations, Institute of Development Studies [University of Sussex], Swiss Agency for Development and Cooperation and the World Bank). It was also reviewed by a high-level panel of experts.

Additionally, this second edition contains the core methodology for evaluations that were not contemplated in the first edition, such as corporate-level evaluations, impact evaluations and evaluation synthesis reports.

The manual is available in Arabic, English, French and Spanish to facilitate its use in all regions where IFAD has operations.”


A visual introduction to machine learning

Posted on 24 December, 2015 – 6:38 PM

A Visual Introduction to Machine Learning

This website explains very clearly, using good visualisations, how a Decision Tree algorithm can make useful predictions about how different attributes of a case, such as a project, relate to the presence or absence of an outcome of interest. Decision tree models are a good alternative to the use of QCA, in that the results are easily communicable and the learning curve is not so steep. For more information, see my blog “Rick on the Road”, where I have written a number of posts on the use of Decision Trees.
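For readers who want the mechanics rather than the visualisation, here is a minimal sketch of the kind of algorithm the site illustrates: a tiny greedy decision tree over boolean project attributes. The attributes and cases are invented, and real implementations (e.g. CART) split on entropy or Gini impurity rather than the simple misclassification count used here.

```python
# A toy decision tree: recursively split invented project cases on the
# boolean attribute whose split misclassifies fewest cases, until the
# remaining outcomes are uniform (or attributes run out).

def build_tree(cases, attrs):
    outcomes = [c["outcome"] for c in cases]
    if len(set(outcomes)) == 1 or not attrs:
        # Leaf: predict the majority outcome among the remaining cases
        return outcomes.count(True) >= outcomes.count(False)

    def errors(attr):
        # Misclassified cases if we split on attr and take majority per side
        e = 0
        for value in (True, False):
            side = [c["outcome"] for c in cases if c[attr] == value]
            if side:
                e += min(side.count(True), side.count(False))
        return e

    best = min(attrs, key=errors)
    rest = [a for a in attrs if a != best]
    return {
        "attr": best,
        True:  build_tree([c for c in cases if c[best]], rest),
        False: build_tree([c for c in cases if not c[best]], rest),
    }

def predict(tree, case):
    while isinstance(tree, dict):
        tree = tree[case[tree["attr"]]]
    return tree

cases = [
    {"local_partner": True,  "short_duration": False, "outcome": True},
    {"local_partner": True,  "short_duration": True,  "outcome": True},
    {"local_partner": False, "short_duration": True,  "outcome": False},
    {"local_partner": False, "short_duration": False, "outcome": True},
]

tree = build_tree(cases, ["local_partner", "short_duration"])
print(predict(tree, {"local_partner": False, "short_duration": True}))
```

The resulting tree is easy to read off as nested if/else rules about project attributes, which is exactly why the results are so communicable.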


Hivos ToC Guidelines: THEORY OF CHANGE THINKING IN PRACTICE – A stepwise approach

Posted on 4 December, 2015 – 10:35 AM

Marjan van Es (Hivos), Irene Guijt, Isabel Vogel, 2015.  Available as pdf
1 Introduction
1.1 Hivos and Theory of Change
1.2 Origin of the guidelines
1.3 Use of the guidelines
2 Theory of Change
2.1 What are Theories of Change? What is a ToC approach?
2.2 Why a Theory of Change approach?
2.3 Core components of a ToC process and product
2.4 Theories of Change at different levels
2.5 Using ToC thinking for different purposes
3 Key features of a ToC process
3.1 From complexity to focus and back
3.2 Making assumptions explicit
3.3 The importance of visualisation
4 Quality of ToC practice
4.1 Principles of ToC practice
4.2 Power at play
4.3 Gender (in)equality
5 Developing Theories of Change – eight steps
Introduction
• Step 1 – Clarify the purpose of the ToC process
• Step 2 – Describe the desired change
• Step 3 – Analyse the current situation
• Step 4 – Identify domains of change
• Step 5 – Identify strategic priorities
• Step 6 – Map pathways of change
• Step 7 – Define monitoring, evaluation and learning priorities and process
• Step 8 – Use and adaptation of a ToC
6 ToC as a product
7 Quality Audit of a ToC process and product
8 Key tools, resources and materials
8.1 Tools referred to in these guidelines
• Rich Picture
• Four Dimensions of Change
• Celebrating success
• Stakeholder and Actor Analysis
• Power Analysis
• Gender Analysis
• Framings
• Behaviour change
• Ritual dissent
• Three Spheres: Control, Influence, Interest
• Necessary & Sufficient
• Indicator selection
• Visualisations of a ToC process and product
8.2 Other resources
8.3 Facilitation

Rick Davies comment: I have not had a chance to read the whole document, but I would suggest changes to the section on page 109 titled ‘Sufficient and Necessary’.

A branch of a Theory of Change (in a tree shaped version) or a pathway (in a network version) can represent a sequence of events that is either:
  • Necessary and Sufficient to achieve the outcome. This is probably unlikely in most cases; if it were, there would be no need for any other branches/pathways.
  • Necessary but Insufficient. In other words, events in the other branches were also necessary. In this case the ToC is quite demanding in its requirements before outcomes can be achieved. An evaluation would only have to find one of these branches not working to find the ToC not working.
  • Sufficient but Unnecessary. In other words, the outcome can be achieved via this branch or via the other branches. This is a less demanding ToC and more difficult to disprove. Each branch expected to be Sufficient would need to be tested.

Because of these different interpretations and their consequences, we should expect a ToC to state clearly the status of each branch in terms of its Necessity and/or Sufficiency.
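One way to make these distinctions operational: given case data recording whether each branch "worked" and whether the outcome was achieved, a branch can be classified against the categories above. The branch names and cases below are invented; in this particular data set both branches turn out to be Necessary but Insufficient, i.e. the conjunction of the two is required.

```python
# Classify a ToC branch as Necessary and/or Sufficient from invented case
# data: each case records whether the branch worked and whether the
# outcome was achieved.

cases = [
    {"branch_a": True,  "branch_b": True,  "outcome": True},
    {"branch_a": True,  "branch_b": False, "outcome": False},
    {"branch_a": False, "branch_b": True,  "outcome": False},
]

def classify(branch):
    with_outcome = [c for c in cases if c["outcome"]]
    with_branch = [c for c in cases if c[branch]]
    # Necessary: the branch worked in every case where the outcome occurred
    necessary = all(c[branch] for c in with_outcome)
    # Sufficient: the outcome occurred in every case where the branch worked
    sufficient = all(c["outcome"] for c in with_branch)
    if necessary and sufficient:
        return "necessary and sufficient"
    if necessary:
        return "necessary but insufficient"
    if sufficient:
        return "sufficient but unnecessary"
    return "neither"

print(classify("branch_a"), classify("branch_b"))
```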


Evaluation in the Extreme: Research, Impact and Politics in Violently Divided Societies

Posted on 25 November, 2015 – 6:11 PM

Kenneth Bush – University of York Heslington, York, England
Colleen Duggan – International Development Research Centre, Ottawa
published October 2015, by Sage

“Over the past two decades, there has been an increase in the funding of research in and on violently divided societies. But how do we know whether research makes any difference to these societies—is the impact constructive or destructive? This book is the first to systematically explore this question through a series of case studies written by those on the front line of applied research. It offers clear and logical ways to understand the positive or negative role that research, or any other aid intervention, might have in developing societies affected by armed conflict, political unrest and/or social violence.”

Kenneth Bush is Altajir Lecturer and Executive Director of the Post-war Reconstruction and Development Unit, University of York (UK).  From 2016: School of Government & International Affairs, Durham University

Colleen Duggan is a Senior Programme Specialist in the Policy and Evaluation Division of the International Development Research Centre, Ottawa.


 Download EBook: http://www.idrc.ca/EN/Resources/Publications/openebooks/584-7/index.html



The sustainable development goals as a network of targets

Posted on 13 November, 2015 – 12:01 AM

DESA Working Paper No. 141, ST/ESA/2015/DWP/141, March 2015
Towards integration at last? The sustainable development goals as a network of targets
David Le Blanc, Department of Economic & Social Affairs

ABSTRACT “In 2014, UN Member States proposed a set of Sustainable Development Goals (SDGs), which will succeed the Millennium Development Goals (MDGs) as reference goals for the international development community for the period 2015-2030. The proposed goals and targets can be seen as a network, in which links among goals exist through targets that refer to multiple goals. Using network analysis techniques, we show that some thematic areas covered by the SDGs are well connected among one another. Other parts of the network have weaker connections with the rest of the system. The SDGs as a whole are a more integrated system than the MDGs were, which may facilitate policy integration across sectors. However, many of the links among goals that have been documented in biophysical, economic and social dimensions are not explicitly reflected in the SDGs. Beyond the added visibility that the SDGs provide to links among thematic areas, attempts at policy integration across various areas will have to be based on studies of the biophysical, social and economic systems.”

Rick Davies Comment: This is an example of something I would like to see many more of: what are, in effect, network Theories of Change, in place of overly simplified hierarchical models which typically have few if any feedback loops (aka cyclic graphs). Request: could the author make the underlying data set publicly available, so other people can do their own network analyses? I know the data set could be reconstructed from existing sources on the SDGs, but this could save a lot of unnecessary work. Also, the paper should provide a footnote explaining the layout algorithm used to generate the network diagrams.

Some simple improvements that could be made to the existing network diagrams:

  • Vary node size by centrality (the number of immediate connections a node has with other nodes)
  • Represent target nodes as squares and goal nodes as circles, rather than all as circles

What is now needed is a two-mode network diagram showing which agencies (perhaps UN agencies for a start) are prioritising which SDGs. This would help focus minds on where coordination needs are greatest, i.e. between which specific agencies regarding which specific goals. Here is an example of this kind of network diagram from Ghana, showing which government agencies prioritised which Governance objectives in the Ghana Poverty Reduction Strategy, more than a decade ago (blue nodes = government agencies, red nodes = GPRS governance objectives, thicker lines = higher priority). The existence of SDG targets as well as goals could make an updated version of this kind of exercise even more useful.
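A sketch of how such a two-mode data set could be analysed, using invented agency and goal names: degree centrality gives the node sizes suggested above, and the goal prioritised by the most agencies marks where coordination needs are greatest.

```python
# Two-mode (bipartite) network sketch: agencies linked to the SDGs they
# prioritise. Degree centrality = number of immediate connections.
# All agency names and priority lists are invented placeholders.

from collections import Counter

priorities = {
    "agency_health":   ["SDG3", "SDG6"],
    "agency_water":    ["SDG6", "SDG14"],
    "agency_planning": ["SDG3", "SDG6", "SDG17"],
}

# Degree centrality on each side of the two-mode network
goal_degree = Counter(g for goals in priorities.values() for g in goals)
agency_degree = {a: len(goals) for a, goals in priorities.items()}

# Coordination hotspot: the goal prioritised by the most agencies
hotspot, n_agencies = goal_degree.most_common(1)[0]
print(hotspot, n_agencies)
```

In a drawn diagram, `goal_degree` and `agency_degree` would set the node sizes, with agencies as one node shape and goals as the other.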


Postscript: 16 April 2016. See also this online network diagram of the SDGs and targets, unfortunately lacking any text commentary. Basically eye candy only.
