Sustainable development: A review of monitoring initiatives in agriculture

(from DFID website)

A new report has just been released on the Review of the Evidence on Indicators, Metrics and Monitoring Systems. Led by the World Agroforestry Centre (ICRAF) under the auspices of the CGIAR Research Program on Water, Land and Ecosystems (WLE), the review examined monitoring initiatives related to the sustainable intensification of agriculture. Designed to inform future DFID research investments, the review assessed both biophysical and socioeconomic monitoring efforts.

With the aim of generating insights to improve such systems, the report focuses upon key questions facing stakeholders today:

  1. How to evaluate alternative research and development strategies in terms of their potential impact on productivity, environmental services and welfare goals, including trade-offs among these goals?
  2. How to cost-effectively measure and monitor actual effectiveness of interventions and general progress towards achieving sustainable development objectives?

An overriding lesson of the report was the surprising lack of evidence for the impact of monitoring initiatives on decision-making and management. There are therefore important opportunities to increase the returns on these investments by better integrating monitoring systems with development decision processes, thereby increasing their impact on development outcomes. The report outlines a set of recommendations for good practice in monitoring initiatives…

DFID welcomes the publication of this review. The complexity of the challenges which face decision makers aiming to enhance global food security is such that evidence (i.e. metrics) of what is working and what is not is essential. This review highlights an apparent disconnection between what is measured and what is required by decision-makers. It also identifies opportunities for a way forward. Progress will require global co-operation to ensure that relevant data are collected and made easily accessible.

DFID is currently working with G8 colleagues on the planning for an international conference on Open Data, to be held in Washington DC from 28th to 30th April 2013. The topline goal of the initiative is to secure commitment and action from nations and relevant stakeholders to promote policies and invest in projects that open access to publicly funded, globally relevant agricultural data streams, making such data readily accessible to users in Africa and worldwide, and ultimately supporting a sustainable increase in food security in developed and developing countries. Examples of the innovative use of data that are already easily available will be presented, alongside more in-depth talks and discussion on data availability, demand for data from Africa, and technical issues. Data in this context range from the level of the genome, through yields on farm, to data on global food systems.


A Bibliography on Evaluability Assessment

PS: This posting and bibliography were first published in November 2012 but have been updated since then, most recently in March 2018. The bibliography now contains 150 items.

An online (Zotero) bibliography was generated in November 2012 by Rick Davies, as part of the process of developing a “Synthesis of literature on evaluability assessments” contracted by the DFID Evaluation Department.

[In 2012] There are currently 133 items in this bibliography, listed by year of publication, oldest first. They include books, journal articles, government and non-government agency documents and webpages, produced between 1979 and 2012. Of these, 59% described actual examples of Evaluability Assessments, 13% reviewed experiences of multiple kinds of Evaluability Assessments, 28% were expositions on Evaluability Assessments (with some references to examples), 10% were official guidance documents on how to do Evaluability Assessments, and 12% were Terms of Reference for Evaluability Assessments. Almost half (44%) of the documents were produced by international development agencies.

The list is a result of a search using Google Scholar and Google Search to find documents with “evaluability” in the title. The first 100 items in the search result listing were examined. Searches were also made via PubMed, JSTOR and Sciverse. A small number of documents were also identified as a result of a request posted on the MandE NEWS, Xceval and Theory Based Evaluation email lists.

This list is open to further editing and inclusions. Suggestions should be sent to


DRAFT DFID Evaluation Policy – Learning What Works to Improve Lives

RD Comment: The policy document is a draft for consultation at this stage. It will be revised to accommodate comments received, with the aim of having a finished product by the end of this calendar year. People interested in commenting should do so directly to Liz Ramage by 16th November.

DRAFT FOR DISCUSSION 24 AUGUST 2012 (Pdf available here)

“This Evaluation Policy sets out the UK Government’s approach to, and standards for, independent evaluation of its Official Development Assistance (ODA).


We are publishing this evaluation policy for Official Development Assistance (ODA) at a time when the UK Government’s (the Government) approach to evaluation of international development programmes is being completely transformed.

This policy covers evaluation of all UK ODA, around 87% of which is managed by the Department for International Development (DFID). Major elements of ODA are also delivered through other Government Departments, including the Foreign and Commonwealth Office and the Department of Energy and Climate Change.

The Government is rapidly scaling up its programmes to deliver on international commitments and the Millennium Development Goals. In doing so, the Government has made a pact with the taxpayer that this will be accompanied by greater transparency and commitment to results and measurable impact. Evaluation plays a central part in this undertaking.

In 2011, the Independent Commission for Aid Impact (ICAI) was established, a radical change in the UK’s aid architecture, adopting a model which sets new standards for independence with a focus on value for money and results. Reporting directly to Parliament, ICAI sets a new benchmark for independent scrutiny of development programmes which applies across all UK ODA.

In parallel with ICAI’s work, UK Government Departments are placing much greater emphasis on evidence and learning within programmes.

I am excited by the changes we are seeing within DFID on this initiative.  We are rapidly moving towards commissioning rigorous impact evaluations within the programmes, with much stronger links into decision making and to our major investments in policy-relevant research.

Not only has the number of specialist staff working on evaluation more than doubled, but these experts are now located within the operational teams where they can make a real improvement to programme design and delivery.

Finally, I want to note that DFID is working closely with Whitehall partners in building approaches to evaluation. This fits well with wider changes across government, including the excellent work by the Cross-Government Evaluation Group and the update of the Guidance for Evaluation (the Magenta Book)”

Mark Lowcock, Permanent Secretary, Department for International Development




1.1      Purpose of the Policy and its Audience

1.2      Why we need independent and high quality evaluation


2.1      The Government’s commitment to independent evaluation

2.2      The Independent Commission for Aid Impact

2.3      The international context for development evaluation


3.1      Definition of evaluation

3.2      Distinctions with other aspects of results management

3.3      Evaluation Types


4.1      Quality

4.2      Principles

4.3      Standards

4.4      Criteria

4.5      Methods

4.6      How to decide what to evaluate

4.7      Resources


5.1      Definitions and quality standards for impact evaluation


6.1      The importance of communicating and using evaluation findings

6.2      Timeliness

6.3      Learning and using evidence


7.1      A more inclusive approach to partnership working

7.2      A stronger role for developing countries

7.3      Partnerships with multilaterals, global and regional funds and civil society organisations


8.1      A transformed approach to evaluation

8.2      DFID’s co-ordinated approach to results: where evaluation fits in

8.3      Mandatory quality processes

8.4      Ensuring there are no evidence gaps in DFID’s portfolio

8.5      Building capacity internally: evaluation professional skills and accreditation programme

8.6      Roles and responsibilities for evaluation

PS: For comparison, the previous policy document: Building the evidence to reduce poverty: The UK’s policy on evaluation for international development, Department for International Development (DFID), June 2009, and the March 2009 draft version (for consultation).



Review of the use of ‘Theory of Change’ in International Development

By Isabel Vogel. Funded by DFID, 2012

Review of the use of ‘Theory of Change’ in international development (full report)
Review of the use of ‘Theory of Change’ in international development (summary)
Appendix 3: Examples of Theories of Change

1. Executive Summary
‘Theory of change’ is an outcomes-based approach which applies critical thinking to the design, implementation and evaluation of initiatives and programmes intended to support change in their contexts. It is being increasingly used in international development by a wide range of governmental, bilateral and multilateral development agencies, civil society organisations, international non-governmental organisations and research programmes intended to support development outcomes. The UK’s Department for International Development (DFID) commissioned this review of how theory of change is being used in order to learn from this growing area of practice. DFID has been working formally with theory of change in its programming since 2010. The purpose was to identify areas of consensus, debate and innovation in order to inform a more consistent approach within DFID.

UK centre of excellence for evaluation of international development

Prior Information Notice

DFID is planning to establish a Centre of Excellence to assist with our commitment to use high quality evaluation to maximise the impact of UK-funded international development. DFID would like to consult with a wide range of research networks and experts in the field, and invites ideas and suggestions to help develop our ideas further before formally issuing invitations to tender to the market for this opportunity. There are two main channels for interested parties to contribute to this process:

1. Comments and views on the draft scope can be fed in through the DFID supplier portal by registering for this opportunity at and accessing the documentation.

2. DFID will hold bilateral discussions and/or information sharing sessions with interested parties depending on demand.

Please ensure all comments are fed in through the DFID portal by 31st August 2012. Once the consultation process is complete and the scope of the Centre of Excellence fully defined, DFID plans to run a competitive tender for this work. The target date for establishment of the Centre is mid 2013.

RD Comment: Why is this consultation process not more open? Why do participants have to register as potential suppliers, when many who might want to read and comment on the proposal would not necessarily want to become suppliers?

DFID How To Note: Reviewing and Scoring Projects Introduction

November 2011. Available as pdf.

“Introduction: This guidance is to help DFID staff, project partners and other stakeholders use the scoring system and complete the latest templates when undertaking an Annual Review (AR) or Project Completion Review (PCR, formerly known as a Project Completion Report) for projects due for review from January 2012. This guidance applies to all funding types; however, separate templates are available for core contributions to multilateral organisations. The guidance does not attempt to cover in detail how to organise the review process, although some help is provided.

Principal changes from previous templates        2
Introduction        2
What is changing?        3
What does it involve?        4
Using the logframe as a monitoring tool        5
If you don’t have a logframe        6
Assessing the evidence base        6
The Scoring System        6
Updating ARIES        7
Transparency and Publishing ARs and PCRs       7
Projects below £1m approved prior to the new Business Case format   8
Multilateral Core Contributions        9
Filling in the templates/ Guidance on the template contents     9
Completing the AR/PCR and information onto ARIES     19
Annex A:  Sample Terms of Reference ”

RD Comment: To my surprise, although this How To Note gives advice on how to assign weights to each output, it does not explain how these weights interact with output scores to generate a weighted achievement score for each output. Doing so would help explain why the weightings are being requested; at present they are requested but their purpose is not explained.

The achievement scoring system is a definite improvement on the previous system. The focus is now on actual achievement to date rather than expected achievement by the end of the project, and the scale is evenly balanced, with the top and bottom of the scale representing over- and under-achievement respectively.
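One plausible way the requested weightings could interact with output scores is a simple weighted average. The How To Note itself does not specify any such calculation, so the Python sketch below is purely illustrative: the function name, the (score, weight) representation and the 1–5 scoring scale are all assumptions, not DFID’s method.

```python
def weighted_output_score(outputs):
    """Combine per-output achievement scores with their weightings.

    `outputs` is a list of (score, weight_percent) pairs.
    Hypothetical calculation: the How To Note requests weightings
    but does not document how they feed into an overall score.
    """
    total_weight = sum(w for _, w in outputs)
    if total_weight <= 0:
        raise ValueError("weights must sum to a positive total")
    return sum(s * w for s, w in outputs) / total_weight

# e.g. three outputs scored on an assumed 1-5 scale, weighted 50/30/20
print(weighted_output_score([(4, 50), (3, 30), (2, 20)]))  # 3.3
```

If something like this is what is intended, publishing the formula alongside the templates would make clear why reviewers are asked to supply weights at all.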

Models of Causality and Causal Inference

by Barbara Befani. An annex to BROADENING THE RANGE OF DESIGNS AND METHODS FOR IMPACT EVALUATIONS, report of a study commissioned by the Department for International Development, April 2012, by Elliot Stern (Team Leader), Nicoletta Stame, John Mayne, Kim Forss, Rick Davies and Barbara Befani


The notion of causality has given rise to disputes among philosophers which continue today. At the same time, attributing causation is an everyday activity of the utmost importance for humans and other species, one that most of us carry out successfully outside the corridors of academic departments. How do we do that? And what are the philosophers arguing about? This chapter attempts to provide some answers, by reviewing some of the notions of causality in the philosophy of science and “embedding” them in everyday activity. It also attempts to connect these with impact evaluation practices, without embracing any one causation approach in particular, instead stressing the strengths and weaknesses of each and outlining how they relate to one another. Everyday life, social science, and impact evaluation in particular all have something to learn from these approaches, each of which illuminates a single, specific aspect of the relationship between cause and effect. The paper is divided into three parts. The first addresses notions of causality that focus on the simultaneous presence of a single cause and the effect; alternative causes are rejected depending on whether they are observed together with the effect. The basic causal unit is the single cause, and alternatives are rejected in the form of single causes. This model includes multiple causality in the form of single independent contributions to the effect. The second part addresses notions of causality that focus on the simultaneous presence of multiple causes that are linked to the effect as a “block” or whole: the block can be either necessary or sufficient (or neither) for the effect, and single causes within the block can be necessary for a block to be sufficient (INUS causes).
The third group discusses models of causality where simultaneous presence is not enough: to count as causes, conditions need to be shown to actively manipulate or generate the effect, with a focus on how the effect is produced and how the change comes about. The basic unit here, rather than a single cause or a package, is the causal chain: fine-grained information is required on the process leading from an initial condition to the final effect.

The second type of causality sits in between the first and the third: it is used when there is no fine-grained knowledge of how the effect is manipulated by the cause, yet the presence or absence of a number of conditions can still be spotted along the causal process, which is thus more detailed than the bare beginning-to-end linear representation characteristic of the successionist model.


RD Comment: I strongly recommend this paper.

For more on necessary and/or sufficient conditions, see this blog posting, which shows how different combinations of causal conditions can be visually represented and recognised using Decision Trees.
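The necessary/sufficient distinction discussed above can be made concrete with a few lines of code. The following Python sketch is illustrative only (the cases, condition names and helper functions are invented, not drawn from Befani’s paper): it checks, across a small set of cases, whether a condition is necessary for an outcome (present in every case where the outcome occurred) and/or sufficient for it (the outcome occurs wherever the condition is present).

```python
# Hypothetical cases: each records which causal conditions were
# present and whether the outcome occurred.
cases = [
    {"conditions": {"A", "B"}, "outcome": True},
    {"conditions": {"A"},      "outcome": False},
    {"conditions": {"B", "C"}, "outcome": True},
    {"conditions": {"C"},      "outcome": False},
]

def necessary(cond, cases):
    # Necessary: the condition is present in every case with the outcome.
    return all(cond in c["conditions"] for c in cases if c["outcome"])

def sufficient(cond, cases):
    # Sufficient: the outcome occurs in every case with the condition.
    return all(c["outcome"] for c in cases if cond in c["conditions"])

print(necessary("B", cases))   # True: B appears in every outcome case
print(sufficient("B", cases))  # True: the outcome follows wherever B appears
print(sufficient("A", cases))  # False: A alone did not produce the outcome
```

The same logic, applied to conjunctions of conditions rather than single ones, is what underlies the “block” (INUS) notions of causality described in the second part of the paper.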


DFID’s Approach to Impact Evaluation – Part I

[From Development Impact: News, views, methods, and insights from the world of impact evaluation.]
As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head, Research and Evidence Division.
Development Impact (DI): There has been an increasing interest in impact evaluation (defined as experimental/quasi-experimental analysis of program effects) in DFID. Going forward, what do you see as impact evaluation’s role in how DFID evaluates what it does? How do you see the use of impact evaluation relative to other methods?  
Nick York (NY): The UK has been at the forefront among European countries in promoting the use of impact evaluation in international development, and it is now a very significant part of what we do, driven by the need to make sure our decisions and those of our partners are based on rigorous evidence. We are building prospective evaluation into many of our larger and more innovative operational programmes, and have quite a number of impact evaluations underway or planned, commissioned from our country and operational teams. We also support international initiatives including 3ie, where the UK was a founder member and a major funder; the Strategic Impact Evaluation Fund with the World Bank on human development interventions; and NONIE, the network which brings together developing country experts on evaluation to share experiences on impact evaluation with professionals in the UN, bilateral and multilateral donors.
DI: Given the cost of impact evaluation, how do you choose which projects are (impact) evaluated?
NY: We focus on those which are most innovative, where the evidence base is considered to be weak and needs to be improved, and those which are large or particularly risky. Personally, I think the costs of impact evaluation are relatively low compared to the benefits they can generate, or compared to the costs of running programmes using interventions which are untested or don’t work. I also believe that rigorous impact evaluations generate an output, high quality evidence, which is a public good; so although the costs to the commissioning organisation can be high, they represent excellent value for money for the international community. This is why 3ie, which shares those costs among several organisations, is a powerful concept.

MPs report on Department for International Development Financial Management

The Commons Public Accounts Committee publishes its 52nd report of Session 2010-12, on the basis of evidence from the Department for International Development (DfID).

“The Rt Hon Margaret Hodge MP, Chair of the Committee of Public Accounts, said:

“The amount DfID spends on aid will rise by 35% by 2013, but at the same time the Department has to cut its overall running costs by a third.
Achieving this level of savings at a time of rapid expansion in frontline services will involve a substantial challenge if taxpayers’ money is to be properly protected and value for money secured. [emphasis added]

The Department is going to be spending more in fragile and conflict-affected countries and the danger to the taxpayer is that there could be an increase in fraud and corruption. However, the Department could not even give us information as to the expected levels of fraud and corruption and the action they were taking to mitigate it.

Unfortunately, the Department has not always kept its eye on the financial ball, and in 2010 stopped monitoring its finance plan. That must not happen again and DFID should report publicly on its financial management.

The Department’s ability to make informed spending decisions is undermined by its poor understanding of levels of fraud and corruption. Its current approach is too reactive and it needs to develop a sound framework for making sure funds are spent properly on the ground. This will be even more important as the Department channels more of its funding into fragile and conflict-affected states.

The Department’s current plan is to spend more via multilateral organizations and less through bilateral programmes. This poses a risk to value for money because the Department will have less oversight than it does over country-to-country programmes. Indeed, we are concerned that the strategy has more to do with the fact that it is easier to spend through multilaterals than go through the process of assessing value for money of bilateral programmes. [emphasis added]

To maximise the amount of aid that gets through to the frontline, DfID should have clear plans for how it is going to reduce or control running costs – particularly when channelling funding through partner and multilateral organizations with a management overhead at every stage.”[emphasis added]

Margaret Hodge was speaking as the committee published its 52nd Report of this Session which, on the basis of evidence from the Department for International Development, examined its financial management capability, its increasing focus on value for money, and the challenges it faces in managing its increasing programme budget while reducing its overall running costs.”

RD Comment: See Rick on the Road blog posting Thursday, July 24, 2008: An aid bubble? – Interpreting aid trends which raises the same issues as highlighted in bold above.

See also HoC International Development Committee, Committee Room 15 Working effectively in fragile and conflict-affected states: DRC, Rwanda and Burundi

Monitoring Policy Dialogue: Lessons From A Pilot Study

By Sadie Watson And Juliet Pierce. September 2008. DEPARTMENT FOR INTERNATIONAL DEVELOPMENT. Evaluation Report WP27

Executive Summary

In 2007, a tool and process were developed for improving the recording and impact of policy dialogue initiatives across DFID, based on an adaptation of current project cycle management (PCM) requirements for programme spending. A pilot was devised to test the proposed tool and process in terms of:

• Assessing the value in recording and monitoring policy related activities in a similar way to that of spend activities;

• Finding the most effective and useful approach in terms of process;

• Identifying succinct ways to capture intentions and to measure performance;

• Clarifying the type and level of support and guidance required to roll the process out across DFID.

The ten participating pilot teams represented different aspects of DFID’s policy work, conducting different types of policy dialogue activities. The consultants were asked to monitor and evaluate the six month pilot. They were also asked to review approaches to managing and monitoring policy dialogue and influencing activities in other organisations. This report highlights some lessons and observations from the pilot. It outlines some emerging issues and provides some pointers for DFID to consider as it continues to develop into an organisation where policy dialogue and influencing are increasingly important aid tools.
