CDI conference proceedings: Improving the use of M&E processes and findings

Posted on 17 July, 2014 – 12:12 PM

“On the 20th and 21st of March 2014 CDI organized its annual ‘M&E on the cutting edge’ conference on the topic ‘Improving the Use of M&E Processes and Findings’.”

This conference is part of our series of yearly ‘M&E on the cutting edge’ events and was held in Wageningen, the Netherlands. It looked in particular at the conditions under which the use of M&E processes and findings can be improved. The conference report can now be accessed here in pdf format.

Conference participants had the opportunity to learn about:

  • frameworks for understanding the utilisation of monitoring and evaluation findings and processes;
  • different types of utilisation of monitoring and evaluation processes and findings, and when and for whom these are relevant;
  • conditions that improve the utilisation of monitoring and evaluation processes and findings.

Conference presentations can be found online here:


Gender, Monitoring, Evaluation and Learning – 9 new articles in pdfs

Posted on 10 July, 2014 – 10:27 AM
…in Gender & Development, Volume 22, Issue 2, July 2014: Gender, Monitoring, Evaluation and Learning
“In this issue of G&D, we examine the topic of Gender, Monitoring, Evaluation and Learning (MEL) from a gender equality and women’s rights perspective, and hope to prove that a good MEL system is an activist’s best friend! This unique collection of articles captures the knowledge of a range of development practitioners and women’s rights activists, who write about a variety of organisational approaches to MEL. Contributors come from both the global South and the global North and have tried to share their experience accessibly, making what is often very complex and technical material as clear as possible to non-MEL specialists.”


The links below will take you to the article abstract on the Oxfam Policy & Practice website, from where you can download the article for free.


Introduction to Gender, Monitoring, Evaluation and Learning
Kimberly Bowman and Caroline Sweetman


Women’s Empowerment Impact Measurement Initiative
Nidal Karim, Mary Picard, Sarah Gillingham and Leah Berkowitz

A review of approaches and methods to measure economic empowerment of women and girls
Paola Pereznieto and Georgia Taylor

Helen Lindley

Capturing changes in women’s lives: the experiences of Oxfam Canada in applying feminist evaluation principles to monitoring and evaluation practice
Carol Miller and Laura Haylock

A survivor behind every number: using programme data on violence against women and girls in the Democratic Republic of Congo to influence policy and practice
Marie-France Guimond and Katie Robinette

Learning about women’s empowerment in the context of development projects: do the figures tell us enough?
Jane Carter, Sarah Byrne, Kai Schrader, Humayun Kabir, Zenebe Bashaw Uraguchi, Bhanu Pandit, Badri Manandhar, Merita Barileva, Norbert Pijls and Pascal Fendrich


Resources List – Gender, Monitoring, Evaluation and Learning
Compiled by Liz Cooke



Review of evaluation approaches and methods for interventions related to violence against women and girls (VAWG)

Posted on 9 July, 2014 – 3:57 PM

[From the R4D website] Available as pdf

Raab, M.; Stuppert, W. Review of evaluation approaches and methods for interventions related to violence against women and girls (VAWG). (2014) 123 pp.


The purpose of this review is to generate a robust understanding of the strengths, weaknesses and appropriateness of evaluation approaches and methods in the field of development and humanitarian interventions on violence against women and girls (VAWG). It was commissioned by the Evaluation Department of the UK Department for International Development (DFID), with the goal of engaging policy makers, programme staff, evaluators, evaluation commissioners and other evaluation users in reflecting on ways to improve evaluations of VAWG programming. Better evaluations are expected to contribute to more successful programme design and implementation.

The review examines evaluations of interventions to prevent or reduce violence against women and girls within the contexts of development and humanitarian aid.

Rick Davies comment: This paper is of interest for two reasons: (a) The review process was the subject of a blog that documented its progress, from beginning to end. A limited number of comments were posted on the blog by interested observers (including myself) and these were responded to by the reviewers; (b) The review used Qualitative Comparative Analysis (QCA) as its means of understanding the relationship between attributes of evaluations in this area and their results. QCA is an interesting but demanding method even when applied on a modest scale.
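For readers unfamiliar with QCA, its core move is to code each case (here, each evaluation) on a set of binary conditions plus an outcome, group the cases into a truth table of condition configurations, and check how consistently each configuration is associated with the outcome. A minimal sketch of that step, using entirely hypothetical condition names and data (not the conditions or cases from the Raab and Stuppert review):

```python
from collections import defaultdict

# Hypothetical evaluations coded on three binary conditions and an outcome:
# P = participatory design, M = mixed methods, T = theory-based, U = findings used
cases = [
    {"P": 1, "M": 1, "T": 0, "U": 1},
    {"P": 1, "M": 1, "T": 0, "U": 1},
    {"P": 1, "M": 0, "T": 1, "U": 0},
    {"P": 0, "M": 1, "T": 1, "U": 1},
    {"P": 0, "M": 0, "T": 1, "U": 0},
]

# Build a truth table: each configuration of conditions -> [n cases, n with outcome]
table = defaultdict(lambda: [0, 0])
for c in cases:
    config = (c["P"], c["M"], c["T"])
    table[config][0] += 1
    table[config][1] += c["U"]

# Consistency: the share of cases in a configuration that show the outcome
for config, (n, n_out) in sorted(table.items()):
    print(config, "n =", n, "consistency =", n_out / n)
```

The demanding part of real QCA lies upstream of this arithmetic: defining and calibrating the conditions, and then logically minimising the consistent configurations, which is where the method's workload grows quickly with the number of conditions.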

I will post more comments here after taking the opportunity to read the review with some care.

The authors have also invited comments from anyone else who is interested, via the email address given in their report.

Postscript 2014 10 08: At the EES 2014 conference in Dublin I gave a presentation on the triangulation of QCA findings, which included some of their data and analysis. You can see the presentation here on YouTube (with audio). Michaela and Wolfgang have subsequently commented on that presentation, and in turn I have responded to their comments.



Incorporating people’s values in development: weighting alternatives

Posted on 7 July, 2014 – 10:44 PM

Laura Rodriguez Takeuchi, ODI Project Note 04, June 2014. Available as pdf

“Key messages:

  • In the measurement of multidimensional well-being, weights aim to capture the relative importance of each component to a person’s overall well-being. The choice of weights needs to be explicit and could be used to incorporate people’s perspectives into a final metric.
  • Stated preferences approaches aim to obtain weights from individuals’ responses to hypothetical scenarios. We outline six of these approaches. Understanding their design and limitations is vital to make sense of potentially dissimilar results.
  • It is important to select and test an appropriate method for specific contexts, considering the challenges of relying on people’s answers. Two methodologies, DCE and PTO, are put forward for testing in a pilot project.”
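The mechanics behind the first key message are simple: a composite well-being metric is typically a weighted average of dimension scores, so the choice of weights can change who ranks as better off. A minimal sketch with made-up dimensions and scores (not drawn from the paper):

```python
# Hypothetical scores (0-1) on three well-being dimensions for two people
dimensions = ["health", "income", "education"]
alice = [0.9, 0.4, 0.6]
bob = [0.5, 0.9, 0.5]

def composite(scores, weights):
    """Weighted average: each weight is the stated importance of a dimension."""
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

equal_weights = [1, 1, 1]    # every dimension counts the same
stated_weights = [3, 1, 2]   # e.g. respondents rank health highest

print(composite(alice, equal_weights), composite(bob, equal_weights))
print(composite(alice, stated_weights), composite(bob, stated_weights))
```

With equal weights the two people tie; with the stated-preference weights Alice comes out ahead, which is exactly why the paper insists that the choice of weights be made explicit.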

See also: Laura Rodriguez Takeuchi’s blog posting on Better Evaluation: Week 26: Weighing people’s values in evaluation

Rick Davies comment: Although this was a very interesting and useful paper overall, I was particularly fascinated by this part of Laura’s paper:

Reflecting on the psychology literature, Kahneman and Krueger (2006) argue that it is difficult to deduce preferences from people’s actual choices because of limited rationality:

“[People] make inconsistent choices, fail to learn from experience, exhibit reluctance to trade, base their own satisfaction on how their situation compares with the satisfaction of others and depart from the standard model of the rational economic agent in other ways.” (Kahneman and Krueger 2006: 3)

Rather than using these ‘real’ choices, stated preferences approaches rely on surveys to obtain weights from individuals’ responses to hypothetical scenarios.

This seems totally bizarre. What would happen if we insisted on all respondents’ survey responses being rational, and applied various other remedial measures to make them so? Would we end up with a perfectly rational set of responses that have no actual fit with how people behave in the world? How useful would that be? Perhaps this is what happens when you spend too much time in the company of economists? :-))

On another matter… Table 1 usefully lists eight different weighting methods, which are explained in the text. However, this list does not include one of the simplest methods that exists, which is touched upon tangentially in the reference to the South African study on social perceptions of material needs (Wright, 2008). This is the use of weighted checklists, where respondents choose both the items on a checklist and the weights to be given to each item, through a series of binary (yes/no) choices. This method was used in a series of household poverty surveys in Vietnam in 1997 and 2006 using an instrument called a Basic Necessities Survey. The wider potential uses of this participatory and democratic method are discussed in a related blog on weighted checklists.
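The arithmetic of a Basic Necessities Survey style weighted checklist is easy to sketch. Each item's weight is simply the proportion of respondents who call it a necessity, items a majority endorse are retained, and a household's score is the share of the total necessity weight it possesses. The item names and responses below are invented for illustration, not taken from the Vietnam surveys:

```python
# Hypothetical responses: for each item, 1 = "this is a basic necessity"
# (one row per respondent); households are scored on whether they have each item.
items = ["radio", "bed", "toilet"]
necessity_votes = [
    [1, 1, 1],
    [0, 1, 1],
    [1, 1, 0],
    [0, 1, 1],
]

# Weight of an item = proportion of respondents calling it a necessity
n = len(necessity_votes)
weights = [sum(v[i] for v in necessity_votes) / n for i in range(len(items))]

# Conventionally only items a majority rate as necessities are kept
kept = [i for i, w in enumerate(weights) if w > 0.5]

def poverty_score(has_item):
    """Share of the total necessity weight this household possesses (0-1)."""
    total = sum(weights[i] for i in kept)
    owned = sum(weights[i] for i in kept if has_item[i])
    return owned / total

print([round(w, 2) for w in weights])      # item weights
print(round(poverty_score([1, 1, 0]), 2))  # a household with a radio and a bed
```

The democratic appeal is visible in the code: respondents, not analysts, set both the item list that counts and the weight each item carries.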

Postscript: Julie Newton has pointed out this useful related website:

  • Measuring National Well-being. “ONS [UK Office for National Statistics] is developing new measures of national well-being. The aim is to provide a fuller picture of how society is doing by supplementing existing economic, social and environmental measures. Developing better measures of well-being is a long term programme. ONS are committed to sharing ideas and proposals widely to ensure that the measures are relevant and founded on what matters to people.” Their home page lists a number of new publications on this subject.

Composite measures of local disaster impact – Lessons from Typhoon Yolanda, Philippines

Posted on 2 June, 2014 – 3:03 PM

by Aldo Benini, Patrice Chataigner, 2014. Available as pdf

Purpose: ”When disaster strikes, determining affected areas and populations with the greatest unmet needs is a key objective of rapid assessments. This note is concerned with the logic and scope for improvement in a particular tool, the so-called “prioritization matrix”, that has increasingly been employed in such assessments. We compare, and expand on, some variants that sprang up in the same environment of a large natural disaster. The fixed context lets us attribute differences to the creativity of the users grappling with the intrinsic nature of the tool, rather than to fleeting local circumstances. Our recommendations may thus be translated more easily to future assessments elsewhere.

The typhoon that struck the central Philippines in November 2013 – known as “Typhoon Yolanda” and also as “Typhoon Haiyan” – triggered a significant national and international relief response. Its information managers imported the practice, tried and tested in other disasters, of ranking affected communities by the degree of impact and need. Several lists, known as prioritization matrices, of ranked municipalities were produced in the first weeks of the response. Four of them, by different individuals and organizations, were shared with us. The largest in coverage ranked 497 municipalities.

The matrices are based on indicators, which they aggregate into an index that determines the ranks. Thus they come under the rubric of composite measures. They are managed in spreadsheets. We review the four for their particular emphases, the mechanics of combining indicators, and the statistical distributions of the final impact scores. Two major questions concern the use of rankings (as opposed to other transformations) and the condensation of all indicators in one combined index. We propose alternative formulations, in part borrowing from recent advances in social indicator research. We make recommendations on how to improve the process in future rapid assessments.”
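The core mechanics the note reviews can be sketched in a few lines: rescale each indicator so different units become comparable, combine the rescaled values into one composite score, and rank on that score. The municipality names and indicator values below are invented for illustration; the note itself discusses richer alternatives (rankings versus other transformations, keeping indicators separate versus one index):

```python
# Hypothetical indicators per municipality: % houses damaged, % population
# displaced, access constraint score (higher = worse on all three)
data = {
    "Muni A": [80, 30, 2],
    "Muni B": [40, 60, 5],
    "Muni C": [10, 10, 1],
}

def minmax(values):
    """Rescale a column to 0-1 so indicators on different scales are comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

names = list(data)
columns = [minmax([data[m][i] for m in names]) for i in range(3)]

# Composite impact score = unweighted mean of the rescaled indicators
scores = {m: sum(col[j] for col in columns) / 3 for j, m in enumerate(names)}
ranking = sorted(names, key=lambda m: scores[m], reverse=True)
print(ranking)  # most impacted first
```

Even this toy example shows why the aggregation choices matter: a municipality that is worst on no single indicator (Muni B here) can still top the composite ranking.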

Rick Davies comment: Well worth reading!


How Shortcuts Cut Us Short: Cognitive Traps in Philanthropic Decision Making

Posted on 30 May, 2014 – 11:48 AM

Beer, Tanya, and Julia Coffman. 2014. “How Shortcuts Cut Us Short: Cognitive Traps in Philanthropic Decision Making”. Centre for Evaluation Innovation. Available as pdf

Found courtesy of the “people-centered development” blog (Michaela Raab).

Introduction: “Anyone who tracks the popular business literature has come across at least one article or book, if not a half dozen, that applies the insights of cognitive science and behavioral economics to individual and organizational decision making. These authors apply social science research to the question of why so many strategic decisions yield disappointing results, despite extensive research and planning and the availability of data about how strategies are (or are not) performing. The diagnosis is that many of our decisions rely on mental shortcuts or “cognitive traps,” which can lead us to make uninformed or even bad decisions. Shortcuts provide time-pressured staff with simple ways of making decisions and managing complex strategies that play out in an uncertain world. These shortcuts affect how we access information, what information we pay attention to, what we learn, and whether and how we apply what we learn. Like all organizations, foundations and the people who work in them are subject to these same traps.

Many foundations are attempting to make better decisions by investing in evaluation and other data collection efforts that support their strategic learning. The desire is to generate more timely and actionable data, and some foundations have even created staff positions dedicated entirely to supporting learning and the ongoing application of data for purposes of continuous improvement. While this is a useful and positive trend, decades of research have shown that despite the best of intentions, and even when actionable data is presented at the right time, people do not automatically make good and rational decisions. Instead, we are hard-wired to fall into cognitive traps that affect how we process (or ignore) information that could help us to make better judgments.”

Rick Davies comment: Recommended, along with the video song by Mr Wray on cognitive bias, also available via Michaela’s blog.


Making impact evaluation matter: Better evidence for effective policies and programmes

Posted on 27 May, 2014 – 9:16 PM

Asian Development Bank, Manila, 1-5 September 2014

The Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie) are hosting a major international impact evaluation conference Making Impact Evaluation Matter from 1-5 September 2014 in Manila. The call for proposals to present papers and conduct workshops at the conference is now open.

Making Impact Evaluation Matter will comprise pre-conference workshops for 2.5 days from 1-3 September 2014, and 2.5 days of the conference from 3-5 September. Major international figures in the field of impact evaluation are being invited to speak at the plenary sessions of the conference. There will be five to six streams of pre-conference workshops and up to eight streams of parallel sessions during the conference, allowing for over 150 presentations.

Proposals are now being invited for presentations on any aspect of impact evaluations and systematic reviews, including findings, methods and translation of evidence into policy. Researchers are welcome to submit proposals on the design (particularly innovative designs for difficult to evaluate interventions), implementation, findings and use of impact evaluations and systematic reviews. Policymakers and development programme managers are welcome to submit proposals on the use of impact evaluation and systematic review findings.

Parallel sessions at the conference will be organised around the following themes/sectors: (a) infrastructure (transport, energy, information and communication technology, urban development, and water), (b) climate change/environment/natural resources, (c) social development (health, education, gender equity, poverty and any other aspect of social development), (d) rural development (agriculture, food security and any other aspect of rural development), (e) financial inclusion, (f) institutionalisation of impact evaluation, and incorporating impact evaluation or systematic reviews into institutional appraisal and results frameworks, (g) impact evaluation of institutional and policy reform (including public management and governance), (h) impact evaluation methods, and (i) promotion of the use of evidence.

Workshop proposals are being invited on all aspects of designing, conducting and disseminating findings from impact evaluations and systematic reviews. The workshops can be at an introductory, intermediate or advanced level. The duration of a workshop can vary from half a day to two full days.

All proposals must be submitted via email to : with email subject line ‘Proposal: presentation’ or ‘Proposal: workshop’. The proposal submission deadline is 3 July 2014.

Bursaries are available for participants from low- and middle-income countries. Employees of international organisations (other than the Asian Development Bank) are, however, not eligible for bursaries. A bursary will cover a return economy airfare and hotel accommodation. All other expenses (ground transport, visas, meals outside the event) must be paid by the participant or their employer. Bursary applications must be made through the conference website: The deadline for bursary applications is 15 July 2014.

Non-sponsored participants are required to pay a fee of US$250 for participating in the conference or US$450 for participating in the pre-conference workshops as well as the conference. Those accepted to present a workshop will be exempted from the fee.

For more information on the submission of proposals for the conference, read the Call for Proposals.

For the latest updates on Making Impact Evaluation Matter, visit

Queries may be sent to
Copyright © 2014 International Initiative for Impact Evaluation (3ie), All rights reserved.


International Energy Policies & Programmes Evaluation Conference (IEPPEC) conference 9-11 September 2014

Posted on 27 May, 2014 – 9:09 PM

– the leading event for energy policy and programme evaluators

Sharing and Accelerating the Value and Use of Monitoring, Reporting and Verification Practices.

There are a wide range of regional, national and international policies and programmes designed to achieve improved energy efficiency, and therefore reductions in GHG emissions and reductions in living costs. These are top priorities for bodies such as the EU, IEA and UN in addressing the critical issues of climate change, resource conservation and living standards.

The increasing focus on this policy area has resulted in more challenging objectives and intended outcomes for interventions, along with growing investment. But are we investing correctly?

Pioneering approaches to evaluating investments and policy decisions related to energy efficiency will be at the forefront of presentations and debate at the IEPPEC, held in Berlin between the 9th and 11th of September 2014.

The conference presents an unparalleled opportunity to bring together policy and evaluation practitioners, academics and others from around the world involved in evaluation of energy and low carbon policies and programs. Attendees will be able to debate the most effective means of assuring that both commercial and community-based approaches to improving the sustainability of our energy use and making our economies more efficient are based on common metrics that can be compared across regions and regulatory jurisdictions. The focus over the three day conference is for policy makers, program managers and evaluators to share ideas for improving the assessment of potential and actual impacts of low carbon policies and programmes, and to facilitate a deeper understanding of evaluation methods that work in practice.

The conference features:

  • Presentation of over 85 full and peer-reviewed evaluation papers by their authors
  • Four panel discussions
  • Two keynote sessions
  • A two-day poster exhibit
  • Lots of opportunity to share learning and network with other attendees

The conference is filling up fast, so to avoid disappointment, please book your place now by visiting

Additional information:

  • For the draft conference agenda, please click here
  • Refreshments, breakfasts and lunches are provided.
  • For any further information, please visit


Running Randomized Evaluations: A Practical Guide

Posted on 22 May, 2014 – 10:01 AM
Glennerster, Rachel, and Kudzai Takavarasha. Running Randomized Evaluations: A Practical Guide. Princeton: Princeton University Press, 2013.


This book provides a comprehensive yet accessible guide to running randomized impact evaluations of social programs. Drawing on the experience of researchers at the Abdul Latif Jameel Poverty Action Lab, which has run hundreds of such evaluations in dozens of countries throughout the world, it offers practical insights on how to use this powerful technique, especially in resource-poor environments.

This step-by-step guide explains why and when randomized evaluations are useful, in what situations they should be used, and how to prioritize different evaluation opportunities. It shows how to design and analyze studies that answer important questions while respecting the constraints of those working on and benefiting from the program being evaluated. The book gives concrete tips on issues such as improving the quality of a study despite tight budget constraints, and demonstrates how the results of randomized impact evaluations can inform policy.

With its self-contained modules, this one-of-a-kind guide is easy to navigate. It also includes invaluable references and a checklist of the common pitfalls to avoid.

Provides the most up-to-date guide to running randomized evaluations of social programs, especially in developing countries

Offers practical tips on how to complete high-quality studies in even the most challenging environments

Self-contained modules allow for easy reference and flexible teaching and learning

Comprehensive yet nontechnical

Contents pages and more (via Amazon) & Brief chapter summaries

The first chapter: “This chapter provides an example of how a randomized evaluation can lead to large-scale change and provides a road map for an evaluation and for the rest of the book.”

Book review: The impact evaluation primer you have been waiting for? Markus Goldstein, Development Impact blog, 27/11/2013.

YouTube video: Book launch talk (1:21) “On 21 Nov, 2013, author of “Running Randomized Evaluations” and Executive Director of J-PAL, Rachel Glennerster, launched the new book at the World Bank. This was followed by a panel discussion with Alix Zwane, Executive Director of Evidence Action, Mary Ann Bates, Deputy Director of J-PAL North America and David Evans, Senior Economist, Office of the Chief Economist, Africa Region, World Bank, led by the Head of DIME, Arianna Legovini.”


Working with messy data sets? Two useful and free tools

Posted on 25 April, 2014 – 5:52 PM

I have just come across two useful apps (aka software packages, aka tools) for when you are working with someone else’s data sets and/or data sets from multiple sources and times. Or just your own data, which was in a less than perfect state when you last left it :-)

  • OpenRefine: Initially developed by Google and now open source with its own support and development community. You can explore the characteristics of a data set, clean it in quick and comprehensive moves, transform its layout and formats, and reconcile and match multiple data sets. There are documentation and videos to show you how to do all this. There is also a book, which you can purchase. The Wikipedia entry provides a good overview.
  • Tabula: This package allows you to extract tables of data from pdfs, a task which can otherwise be very tiresome, messy and error prone.
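To give a flavour of the kind of clean-up OpenRefine automates, here is a minimal hand-rolled sketch of two of its typical operations, trimming/normalising values and clustering near-duplicate spellings, on an invented list of place names (OpenRefine itself works on its own project format, not plain Python lists):

```python
# Messy values of the kind a merged data set accumulates
records = ["  Wageningen", "wageningen ", "WAGENINGEN", "Berlin", "berlin", "Manila"]

def normalise(value):
    """Trim whitespace and case, as OpenRefine's common transforms do."""
    return value.strip().lower()

# Cluster values that normalise to the same key (cf. OpenRefine's key-collision
# clustering), keeping the first-seen spelling as the canonical form.
canonical = {}
cleaned = []
for value in records:
    key = normalise(value)
    canonical.setdefault(key, value.strip())
    cleaned.append(canonical[key])

print(cleaned)
```

OpenRefine's value is that it applies operations like these interactively across thousands of rows, keeps an undoable history of every step, and offers fuzzier clustering methods than this exact-key example.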

And there are some other packages I have yet to explore.
