Open consultation: Triennial review of the Independent Commission for Aid Impact (ICAI)

(From the website that hosts the consultation text below)

This consultation closes on 26 April 2013

On 21 March the Government announced the triennial review of the Independent Commission for Aid Impact (ICAI) and is seeking the views of stakeholders who wish to contribute to the Review. Triennial Reviews of Non-Departmental Public Bodies (NDPBs) are part of the Government’s commitment to review all NDPBs, with the aim of increasing accountability for actions carried out on behalf of the State.

The ICAI’s strategic aim is to provide independent scrutiny of UK aid spending, to promote the delivery of value for money for British taxpayers and to maximise the impact of aid.

The Review will be conducted in line with Cabinet Office principles and guidance, in two stages.

The first stage will:

  • Identify and examine the key functions of the ICAI and assess how these functions contribute to the core business of DFID;
  • Assess the requirement for these functions to continue given other scrutiny processes;
  • If continuing, assess how the key functions might best be delivered; if one of these options is continuing delivery through the ICAI, then make an assessment against the Government’s “three tests”: technical function; political impartiality; and the need for independence from Ministers.

If the outcome of stage one is that delivery should continue through the ICAI, the second stage of the review will:

  • Review whether ICAI is operating in line with the recognised principles of good corporate governance, using the Cabinet Office “comply or explain” standard approach.

In support of these aims we would welcome input and evidence from stakeholders, focused on these main questions:

ICAI’s functions

For the purposes of this review, we have defined ICAI’s key functions as follows:

  • Produce a wide range of independent, high quality/professionally credible and accessible reports (including evaluations, VfM reviews, investigations) setting out evidence of the impact and value for money of UK development efforts;
  • Work with and for Parliament to help hold the UK Government to account for its development programme, and make information on this programme available to the public;
  • Produce appropriately targeted recommendations to be implemented/followed up by HMG.

Which of these functions do you think are still needed? What would be the impact if ICAI ceased to exist?

Would you define ICAI’s functions differently?

Do you think any of the following delivery mechanisms would be more appropriate or cost-effective at delivering these functions: local government, the voluntary sector, the private sector, another existing body, or DFID itself?

To date, do you think ICAI has focused on scrutinising UK aid spend or the wider HMG development effort? What do you think it should be doing?

Where do you think ICAI sits on the spectrum between audit and research? Is this where it should be?

How far can and should ICAI have a role in holding HMG to account?

Production of reports

What is the quality of ICAI reports? Is the expertise of those producing the reports appropriate? How does this compare to other scrutiny bodies that you know of?

How far does the methodology used by ICAI add value to other scrutiny of DFID programmes (eg IDC, NAO, DFID internal)?

How far does ICAI involve beneficiaries in its work?

What impact have ICAI reviews had on DFID staff and resources?

How independent do you believe ICAI is? How important do you think this independence is for ICAI’s ability to deliver its functions effectively?

How much of an impact do you think the Commissioners have on ICAI recommendations and reports? What added value do you think they bring? Do they have the right skillset?

Making information available to the IDC and the public

How important do you think ICAI’s role is in providing information about UK development to taxpayers?

What impact has ICAI had on public perceptions of UK development?

Production of targeted recommendations

What has been the added value of ICAI’s recommendations? How do these compare to other scrutiny bodies that you know of?

How far and why have recommendations been followed up?

What impact has ICAI had on DFID’s own approach to monitoring impact and value for money?

How far has ICAI promoted lesson learning in DFID?

General

Do you think ICAI could improve? If so, how?

Do you have any other comments?

Contact us by 26 April 2013

Write to us:

email
post
ICAI Review Team KB 2.2
22 Whitehall
London
SW1A 2EG

Scaling Up What Works: Experimental Evidence on External Validity in Kenyan Education

Centre for Global Development Working Paper 321, 27 March 2013. Tessa Bold, Mwangi Kimenyi, Germano Mwabu, Alice Ng’ang’a, and Justin Sandefur.
Available as pdf

Abstract

The recent wave of randomized trials in development economics has provoked criticisms regarding external validity. We investigate two concerns—heterogeneity across beneficiaries and implementers—in a randomized trial of contract teachers in Kenyan schools. The intervention, previously shown to raise test scores in NGO-led trials in Western Kenya and parts of India, was replicated across all Kenyan provinces by an NGO and the government. Strong effects of short-term contracts produced in controlled experimental settings are lost in weak public institutions: NGO implementation produces a positive effect on test scores across diverse contexts, while government implementation yields zero effect. The data suggest that the stark contrast in success between the government and NGO arms can be traced back to implementation constraints and political economy forces put in motion as the program went to scale.

Rick Davies comment: This study attends to two of the concerns I raised in a recent blog post (My two particular problems with RCTs): (a) the neglect of important internal variations in performance arising from a focus on average treatment effects, and (b) the neglect of the causal role of contextual factors (the institutional setting in this case), which happens when the context is in effect treated as an externality.

It reinforces my view of the importance of a configurational view of causation. This kind of analysis should be within the reach of experimental studies as well as of methods like QCA. For years agricultural scientists have devised and used factorial designs (albeit with fewer factors than the number of conditions found in most QCA studies).
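To make the factorial point concrete, here is a minimal sketch (in Python) of a 2×2 design crossing treatment with implementer, loosely modelled on the contract-teacher study above. All numbers are invented for illustration; the point is how a pooled average treatment effect can mask an interaction between treatment and context.

```python
# Minimal 2x2 factorial sketch: factor A = contract teacher (treated vs
# control), factor B = implementer (NGO vs government). Data are
# simulated; they are NOT the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 250  # pupils per cell (illustrative)

# Build in a treatment effect that exists only under NGO implementation,
# so the A x B interaction carries the whole story.
cells = {
    ("control", "gov"): rng.normal(0.00, 1.0, n),
    ("treated", "gov"): rng.normal(0.00, 1.0, n),  # effect lost at scale
    ("control", "ngo"): rng.normal(0.00, 1.0, n),
    ("treated", "ngo"): rng.normal(0.25, 1.0, n),  # positive NGO effect
}
means = {k: v.mean() for k, v in cells.items()}

# Pooled average treatment effect across implementers: masks the contrast.
ate = ((means[("treated", "gov")] + means[("treated", "ngo")]) / 2
       - (means[("control", "gov")] + means[("control", "ngo")]) / 2)

# Implementer-specific effects and their difference (the interaction).
effect_ngo = means[("treated", "ngo")] - means[("control", "ngo")]
effect_gov = means[("treated", "gov")] - means[("control", "gov")]

print(f"Pooled ATE:              {ate:+.3f}")
print(f"Effect under NGO:        {effect_ngo:+.3f}")
print(f"Effect under government: {effect_gov:+.3f}")
print(f"Interaction (NGO - gov): {effect_ngo - effect_gov:+.3f}")
```

Reported on its own, the pooled ATE averages the government arm’s null result with the NGO arm’s positive effect into a misleading middle figure; the factorial breakdown recovers both.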

On this subject I came across this relevant quote from R. A. Fisher:

“If the investigator confines his attention to any single factor we may infer either that he is the unfortunate victim of a doctrinaire theory as to how experimentation should proceed, or that the time, material or equipment at his disposal is too limited to allow him to give attention to more than one aspect of his problem…

… Indeed in a wide class of cases (by using factorial designs) an experimental investigation, at the same time as it is made more comprehensive, may also be made more efficient if by more efficient we mean that more knowledge and a higher degree of precision are obtainable by the same number of observations.”

And also, from Wikipedia, another Fisher quote:

“No aphorism is more frequently repeated in connection with field trials, than that we must ask Nature few questions, or, ideally, one question, at a time. The writer is convinced that this view is wholly mistaken.”


3ie Public Lecture: What evidence-based development has to learn from evidence-based medicine, and what we have learned from 3ie’s experience in evidence-based development

Speaker: Chris Whitty, LSHTM & DFID
Speaker: Howard White, 3ie
Date and time: 15 April 2013, 5.30 – 7.00 pm
Venue: John Snow Lecture Theatre A&B, London School of Hygiene & Tropical Medicine, Keppel Street, London, UK

Evidence-based medicine has resulted in better medical practices, saving hundreds of thousands of lives across the world. Can evidence-based development achieve the same? Critics argue that it cannot. Technical solutions cannot solve the political problems at the heart of development. Randomized controlled trials cannot unravel the complexity of development. And these technocratic approaches have resulted in a focus on what can be measured rather than what matters. From the vantage point of a medical practitioner with a key role in development research, Professor Chris Whitty will answer these critics, pointing out that many of the same objections were heard in the early days of evidence-based medicine. Health is also complex, a social issue as well as a technical one. So what are the lessons from evidence-based medicine for filling the evidence gap in development?

The last decade has seen a rapid growth in the production of impact evaluations. What do they tell us, and what do they not? Drawing on the experience of over 100 studies supported by 3ie, Professor Howard White presents some key findings about what works and what doesn’t, with examples of how evidence from impact evaluations is being used to improve lives. Better evaluations will lead to better evidence and so better policies. What are the strengths and weaknesses of impact evaluations as currently practiced, and how may they be improved?

Chris Whitty is a clinical epidemiologist and Chief Scientific Advisor and Director of the Research & Evidence Division at the UK Department for International Development (DFID). He is Professor of International Health at LSHTM; prior to DFID he was Director of the LSHTM Malaria Centre and served on the boards of various other organisations.

Howard White is the Executive Director of 3ie, co-chair of the Campbell International Development Coordinating Group, and Adjunct Professor at the Alfred Deakin Research Institute, Deakin University, Geelong. His previous experience includes leading the impact evaluation programme of the World Bank’s Independent Evaluation Group and, before that, several multi-country evaluations.

Phil Davies is Head of the London office of 3ie, with responsibility for 3ie’s Systematic Reviews programme. Prior to 3ie he was the Executive Director of Oxford Evidentia, and has also served as a senior civil servant in the UK Cabinet Office and HM Treasury, responsible for policy evaluation and analysis.

First come, first served. Doors open at 5.15 pm
More about 3ie: www.3ieimpact.org

The (endangered) art of monitoring in development programmes

by Murray Boardman, Overseas Programme Manager, Save the Children New Zealand
CID Talk, 20 June 2012. Available as pdf (and being published as a full paper in the near future)

A summary of the presentation contents:
“Within development, monitoring and evaluation are as ubiquitous as salt and pepper. Development often talks about monitoring and evaluation as a single term, rather than as separate and unique processes along a quality framework continuum. Due to various factors within development, there are concerns that the evaluation frame is dominating, if not consuming, monitoring.
Given that monitoring is a fundamental component of development programming, any failure to adequately monitor projects will, inevitably, lead to increased costs and also reduce the effectiveness and quality of project outcomes. Evidence of such occurrences is not isolated.
The attached presentation was given to a seminar for NGOs in New Zealand in June 2012. It is largely based on a similar presentation given as a guest lecture at Massey University in October 2011. It presents various observations – some of which are challenging – on the current dynamics between monitoring and evaluation and how evaluations are dominating the quality area of development. The objective of this presentation is not to demote or vilify evaluations, but rather to promote and enhance monitoring as an essential skill set, in order to ensure programme quality is continuously improved.”

Rick Davies’ comment: A recommended read and a breath of fresh air. Are there power differentials at work here, behind the problems that Murray identifies? Who has more status and influence: those responsible for project monitoring, or those responsible for evaluations?

See also: Daniel Ticehurst’s paper on monitoring: Who is listening to whom, and how well and with what effect? (16 October 2012, 34 pages)

Sustainable development: A review of monitoring initiatives in agriculture

(from DFID website)

A new report, Review of the Evidence on Indicators, Metrics and Monitoring Systems, has just been released. Led by the World Agroforestry Centre (ICRAF) under the auspices of the CGIAR Research Program on Water, Land and Ecosystems (WLE), the review examined monitoring initiatives related to the sustainable intensification of agriculture. Designed to inform future DFID research investments, the review assessed both biophysical and socioeconomic monitoring efforts.

With the aim of generating insights to improve such systems, the report focuses upon key questions facing stakeholders today:

  1. How to evaluate alternative research and development strategies in terms of their potential impact on productivity, environmental services and welfare goals, including trade-offs among these goals?
  2. How to cost-effectively measure and monitor actual effectiveness of interventions and general progress towards achieving sustainable development objectives?

An overriding lesson, outlined in the report, was the surprising lack of evidence for the impact of monitoring initiatives on decision-making and management. There are therefore important opportunities for increasing the returns on these investments by better integrating monitoring systems with development decision processes, thereby increasing impacts on development outcomes. The report outlines a set of recommendations for good practice in monitoring initiatives…

DFID welcomes the publication of this review. The complexity of the challenges which face decision makers aiming to enhance global food security is such that evidence (i.e. metrics) of what is working and what is not is essential. This review highlights an apparent disconnection between what is measured and what is required by decision-makers. It also identifies opportunities for a way forward. Progress will require global co-operation to ensure that relevant data are collected and made easily accessible.

DFID is currently working with G8 colleagues on the planning for an international conference on Open Data, to be held in Washington DC from 28 to 30 April 2013. The topline goal of the initiative is to obtain commitment and action from nations and relevant stakeholders to promote policies, and invest in projects, that open access to publicly funded, globally relevant agricultural data streams, making such data readily accessible to users in Africa and worldwide, and ultimately supporting a sustainable increase in food security in developed and developing countries. Examples of the innovative use of data that is already easily available will be presented, as well as more in-depth talks and discussion on data availability, demand for data from Africa, and technical issues. Data in this context ranges from the level of the genome, through yields on farms, to data on global food systems.

Improving the Evaluability of INGO Empowerment and Accountability Programmes

Shutt, C. and McGee, R., CDI Practice Paper 1, March 2013. Published by IDS. Available as pdf (109kb)

Abstract
This CDI Practice Paper is based on an analysis of international NGO (INGO) evaluation practice in empowerment and accountability (E&A) programmes commissioned by CARE UK, Christian Aid, Plan UK and World Vision UK. It reviews evaluation debates and their implications for INGOs. The authors argue that if INGOs are to successfully ‘measure’ or assess outcomes and impacts of E&A programmes, they need to shift attention from methods to developing more holistic and complexity-informed evaluation strategies during programme design. Final evaluations or impact assessments are no longer discrete activities, but part of longer-term learning processes. Given the weak evaluation capacity within the international development sector, this CDI Practice Paper concludes that institutional donors must have realistic expectations and support INGOs to develop their evaluation capacity in keeping with cost–benefit considerations. Donors might also need to reconsider the merits of trying to evaluate the ‘impact’ of ‘demand-side’ NGO governance programmes independently of potentially complementary ‘supply-side’ governance initiatives.

See also: Tools and Guidelines for Improving the Evaluability of INGO Empowerment and Accountability Programmes, Centre for Development Impact Practice Paper No. 1 Annex, March 2013

Livestreaming of the Impact, Innovation & Learning conference, 26-27 March 2013

(via Xceval)

Dear Friends
You may be interested in following next week’s Impact, Innovation and Learning conference, whose principal panel sessions are being live-streamed. Keynote speakers and panellists include:
  • Bob Picciotto (King’s College, UKES, EES), Elliot Stern (Editor of ‘Evaluation’), Bruno Marchal (Institute of Tropical Medicine, Antwerp), John Grove (Gates Foundation), Ben Ramalingam (ODI), Aaron Zazueta (GEF), Peter Loewe (UNIDO), Martin Reynolds (Open University), Bob Williams, Richard Hummelbrunner (OAR), Patricia Rogers (Royal Melbourne Institute of Technology), Barbara Befani (IDS, EES), Laura Camfield and Richard Palmer-Jones (University of East Anglia), Chris Barnett (ITAD/IDS), Giel Ton (University of Wageningen), John Mayne, Jos Vaessen (UNESCO), Oscar Garcia (UNDP), Lina Payne (DFID), Marie Gaarder (World Bank), Colin Kirk (UNICEF), Ole Winckler Andersen (DANIDA)

Impact, Innovation and Learning – live-streamed event, 26-27 March 2013

Current approaches to the evaluation of development impact represent only a fraction of the research methods used in political science, sociology, psychology and other social sciences. For example, systems thinking and complexity science, causal inference models not limited to counterfactual analysis, and mixed approaches with blurred ‘quali-quanti’ boundaries, have all shown potential for application in development settings. Alongside this, evaluation research could be more explicit about its values and its learning potential for a wider range of stakeholders. Consequently, a key challenge in evaluating development impact is mastering a broad range of approaches, models and methods that produce evidence of performance in a variety of interventions in a range of different settings.
The aim of this event, which will see the launch of the new Centre for Development Impact (www.ids.ac.uk/cdi), is to shape a future agenda for research and practice in the evaluation of development impact. While this is an invitation-only event, we will be live-streaming the main presentations from the plenary sessions and panel discussions. If you would like to register to watch any of these sessions online, please contact Tamlyn Munslow in the first instance at t.munslow@ids.ac.uk.
More information at:
http://www.ids.ac.uk/events/impact-innovation-and-learning-towards-a-research-and-practice-agenda-for-the-future
If you are unable to watch the live-streamed events, there will be a Watch Again option after the conference.
With best wishes,
Emilie Wilson
Communications Officer
Institute of Development Studies

Rick Davies comment 28 March 2013: Videos of 9 presentations and panels are now available online at http://www.ustream.tv/recorded/30426381

Evaluability – is it relevant to EBRD? (and others)

EBRD Evaluation Brief, June 2012, by Keith Leonard, Senior Adviser (EvD), and Amelie Eulenberg, Senior Economist (EvD). Available as pdf.

RD comment: A straightforward and frank analysis.

CONTENTS
Conclusions and recommendations
1. Purpose and structure of the paper
2. Evaluability and why it matters
2.1 What is evaluability?
2.1.1 Expression of expected results
2.1.2 Indicators
2.1.3 Baseline
2.1.4 Risks
2.1.5 Monitoring
2.2 How and by whom is evaluability assessed?
2.3 Why evaluability matters
2.3.1 Relationship between evaluability and project success
2.3.2 More reliable and credible evaluations
2.3.3 Telling the story of results
2.4 What is quality-at-entry and how does it differ from evaluability?
3. How other IFIs use evaluability
3.1 Asian Development Bank
3.2 Inter-American Development Bank
3.3 International Finance Corporation, World Bank Group
4. Current practice in the EBRD
4.1 Structure of Final Review Memorandum
4.2 EvD evaluation of the Early Transition Country Initiative
4.3 EvD synthesis of findings on a decade of evaluations of technical cooperation
4.4 Grant Co-financing Strategic Review
4.5 The findings of the Besley Report

EvalPartners International Forum on Civil Society’s Evaluation Capacities (report on)

3-6 December 2012, Chiang Mai, Thailand. Available as pdf

Exec Summary excerpt: The EvalPartners International Forum on Civil Society’s Evaluation Capacities, co-sponsored by the International Organisation for Cooperation in Evaluation (IOCE) and the United Nations Children’s Fund (UNICEF), was held December 3-6, 2012 in Chiang Mai, Thailand, with the intention of enhancing the role of civil society in supporting equity-focused and gender-responsive country-led evaluation systems. The forum, attended by 80 high-level evaluation professionals representing 37 countries, included regional and national presidents and chairs of voluntary organizations for professional evaluation (VOPEs), and directors of evaluation from various bilaterals, multilaterals, and government ministries. The associated discussions represented the first assembly of all regional and national VOPE presidents, all of whom expressed formal commitment to the goal of establishing an international partnership and movement to strengthen civil society and the capacities of VOPEs.

Contents
1. Opening Remarks
2. EvalPartners and National Evaluation Capacity Development
3. The Role of VOPEs in Influencing an Enabling Environment for Evaluation
4. Working Group Summaries
5. Institutional Capacities in Voluntary Organizations for Professional Evaluation
6. Institutionalizing Sustainable Learning Strategies
7. Equity-Focused and Gender-Responsive Evaluation
8. Panel Discussions

Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis

Carsten Q. Schneider and Claudius Wagemann. Cambridge University Press, 31 August 2012. 392 pages. Available on Amazon and Google Books.

Publisher’s blurb: “Qualitative Comparative Analysis (QCA) and other set-theoretic methods distinguish themselves from other approaches to the study of social phenomena by using sets and the search for set relations. In virtually all social science fields, statements about social phenomena can be framed in terms of set relations, and using set-theoretic methods to investigate these statements is therefore highly valuable. This book guides readers through the basic principles of set theory and then on to the applied practices of QCA. It provides a thorough understanding of basic and advanced issues in set-theoretic methods together with tricks of the trade, software handling and exercises. Most arguments are introduced using examples from existing research. The use of QCA is increasing rapidly and the application of set theory is both fruitful and still widely misunderstood in current empirical comparative social research. This book provides an invaluable guide to these methods for researchers across the social sciences.”
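To give a flavour of what “set relations” means in practice, below is a minimal Python sketch of the standard fuzzy-set measures used in QCA to assess a sufficiency claim (condition X is sufficient for outcome Y). The membership scores are invented for illustration; the formulas are the standard ones from the QCA literature: consistency = Σ min(x, y) / Σ x and coverage = Σ min(x, y) / Σ y.

```python
# Fuzzy-set sufficiency measures as defined in the QCA literature.
# Membership scores below are invented, purely for illustration.

def sufficiency_consistency(x, y):
    """Degree to which membership in X is a subset of membership in Y."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def sufficiency_coverage(x, y):
    """Share of the outcome Y accounted for by the condition X."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Six hypothetical cases: membership in condition X and outcome Y.
x = [0.9, 0.8, 0.7, 0.2, 0.1, 0.6]
y = [1.0, 0.9, 0.8, 0.6, 0.3, 0.7]

print(f"Consistency: {sufficiency_consistency(x, y):.2f}")  # 1.00 here
print(f"Coverage:    {sufficiency_coverage(x, y):.2f}")     # ~0.77
```

In this invented example every case’s membership in X is no greater than its membership in Y, so consistency is perfect (1.00), while coverage shows that X accounts for only part of the outcome.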
Book reviews: