Commons Select Committee to Scrutinise DFID’s Annual Report & Resource Accounts

13 September 2011

“The International Development Committee is to conduct an inquiry into the Department for International Development’s Annual Report and Accounts 2010-11 and the Department’s Business Plan 2011-15.

Invitation to submit Written Evidence

The Committee will be considering:

  • Changes since the election to DFID’s role, policies, priorities and procedures;
  • The implications of changes for management styles, structures, staffing competences and capacity to deliver; and
  • The overall impact on the efficiency, effectiveness and cost-effectiveness of DFID’s activities.

The Committee invites short written submissions from interested organisations and individuals, especially on the following areas: the implementation of the structural reform plan; the bilateral, multilateral and humanitarian reviews; DFID administration costs; expenditure on, and dissemination of, research; and the use of technical assistance and consultants.

The deadline for submitting written evidence is Monday 10 October 2011. A guide for written submissions to Select Committees may be found on the parliamentary website.

Committee Membership is as follows: Malcolm Bruce MP, Chair (Lib Dem, Gordon), Hugh Bayley MP (Lab, City of York), Richard Burden MP (Lab, Birmingham, Northfield), Sam Gyimah MP (Con, East Surrey), Richard Harrington MP (Con, Watford), Pauline Latham MP (Con, Mid Derbyshire), Jeremy Lefroy MP (Con, Stafford), Michael McCann MP (Lab, East Kilbride, Strathaven and Lesmahagow), Alison McGovern MP (Lab, Wirral South), Anas Sarwar MP (Lab, Glasgow Central), Chris White MP (Con, Warwick and Leamington).
Specific Committee Information: 020 7219 1223 / 020 7219 1221
Media Information: 020 7219 3297
Committee Website:

DFID & UKES Workshop on Development and Evaluation: Practical Ways Forward


Venue: BIS Conference Centre, Victoria, London


  • To examine the key contributions of evaluation to international development
  • To provide an update on the accountability framework for evaluation in the UK
  • To explore the role of professional development in building evaluation capacity

THIS ONE DAY EVENT will raise important issues in the world of development and evaluation. The workshop will offer the chance to hear from senior practitioners and will cover both the theory and the reality as experienced in many contexts. It will also provide an update on the accountability framework, with particular reference to HM Treasury’s Guidance for Evaluation (the Magenta Book).

A major challenge for organisations is to develop their own staff as evaluation professionals. UKES will offer international insights as well as an update on its own guidance. DFID will report on how it is going about building its own community of evaluators. These perspectives will be presented alongside those from the NGO and voluntary sector. The day is relevant to all individuals and organisations with an interest in, and experience of, development and evaluation, including donors, consultants, public and private sector representatives, academics and a wide range of other professionals.

The workshop will commence at 09.00 and close at 17.30.
Highlights will include:

  • Updates on the Independent Commission for Aid Impact (ICAI), HM Treasury’s Magenta Book and the Cross Government Evaluation Group (CGEG)
  • How to evaluate in fragile states, conflict environments and other challenging situations
  • Case studies of evaluation at different levels: national, local and sector-specific
  • How to build professional capacity: use of accreditation and adapting it to a range of organisations at government and civil society level

The workshop will be held at the BIS Conference Centre, 1 Victoria Street, London SW1H 0ET.
The registration fees are as follows:
UKES members  £75.00 + VAT
Non-members  £100.00 + VAT
Registration and the full programme for the workshop are available from the website.
For any further information, contact the workshop administrators:
Professional Briefings
37 Star Street
Hertfordshire SG12 7AA
01920 487672

RCTs for empowerment and accountability programmes

A GSDRC Helpdesk Research Report, Date: 01.04.2011, 14 pages, available as pdf.

Query: To what extent have randomised controlled trials been used to successfully measure the results of empowerment and accountability processes or programmes?
Enquirer: DFID
Helpdesk response
Key findings: This report examines the extent to which RCTs have been used successfully to measure empowerment and accountability processes and programmes. Field experiments present immense opportunities, but the report cautions that they are more suited to measuring short-term results with short causal chains and less suitable for complex interventions. The studies have also demonstrated divergent results, possibly due to different programme designs. The literature highlights that issues of scale, context, complexity, timeframe, coordination and bias in the selection of programmes also determine the degree of success reported. It argues that researchers using RCTs should make more effort to understand contextual issues, consider how experiments can be scaled up to measure higher-order processes, and focus more on learning. The report suggests strategies such as using qualitative methods, replicating studies in different contexts and using randomised methods with field activities to overcome the limitations in the literature.
1. Overview
2. General Literature (annotated bibliography)
3. Accountability Studies (annotated bibliography)
4. Empowerment Studies (annotated bibliography)


DPC Policy Discussion Paper: Evaluating Influencing Strategies and Interventions

A paper to the DFID Development Policy Committee. Available as pdf, June 2011.

“1. The Strategy Unit brief of April 2008 envisaged that DFID should become more systematic in planning and implementing influencing efforts. Since then, procedures and guidance have been developed and there is an increasingly explicit use of influencing objectives in project log frames and more projectisation of influencing efforts. Evaluation studies and reports have illustrated the wide variety of DFID influencing efforts and the range of ambition and resources involved in trying to generate positive changes in the aid system or in partner countries. These suggest that being clear and realistic about DFID’s influencing objectives, the stakeholders involved and the specific changes being sought is the fundamental requirement for an effective intervention. It is also the basis for sound monitoring and evaluation.
2. To support this initiative, the Evaluation Department organised a series of workshops in 2009 and 2010 to further develop the measurement and evaluation of influencing interventions, producing a draft How to Note with reference to multilateral organisations in September 2010. However, with the changes to DFID’s corporate landscape in 2010 and early 2011, this work was put on hold pending the conclusion of some key corporate pieces of work.
3. An increase in demand for guidance is also noted, given the changing external environment. DFID is now positioning itself to address the demands of the changing global aid landscape with new initiatives, such as the Global Development Partnerships programme. This has a relatively small spend; however, its success will be measured largely by the depth and reach of its influence.
4. The Evaluation Department is now seeking guidance on how important the Development Policy Committee considers the evaluation of influencing interventions, and the direction in which it would like this developed.
5. This Paper sets out why evaluation of influencing interventions is important, why now, key theories of change and an influencing typology, value for money of an influencing intervention and metrics, and, finally, the challenges of measuring influence.”

See also the associated “Proposed Influencing Typology”

The paper also refers to “Appraising, Measuring and Monitoring Influencing: How Can DFID Improve?” by the DFID Strategy Unit April 2008, which does not seem to be available on the web.

RD Comment: I understand that this is considered a draft document and that comments on it would be welcomed. Please feel free to make your comments below.

Good Enough Guide to Impact Measurement – Rapid Onset Natural Disasters

[from the Emergency Capacity Building Project website]

Published on 6 April 2011

The Department for International Development (DFID / UKAID) awarded a grant to the ECB Project to develop a new Good Enough Guide to Impact Measurement. Led by Dr Vivien Walden of Oxfam, a team of ECB specialists from CRS, Save the Children and World Vision will work together with the University of East Anglia (UEA) in the UK.

This guide, and supporting capacity-building materials, will include the development of an impact measurement methodology for rapid onset natural disasters. The methodologies will be field tested by the editorial team in Pakistan and one other country location from September 2011 onwards.

The team welcomes suggestions and input on developing methodologies for impact measurement. Contact us with your ideas at

Synthesis Study of DFID’s Strategic Evaluations 2005 – 2010


A report produced for the Independent Commission for Aid Impact
by Roger Drew, January 2011. Available as pdf.


S1. This report examined central evaluations of DFID’s work published from 2006 to 2010. This included:
– 41 reports of the International Development Committee (IDC)
– Two Development Assistance Committee (DAC) peer reviews
– 10 National Audit Office (NAO) reports
– 63 reports of evaluations from DFID’s Evaluation Department (EVD)

S2. These evaluations consisted of various types:
– Studies of DFID’s work overall (16%)
– Studies with a geographic focus (46%)
– Studies of themes or sectors (19%)
– Studies of how aid is delivered (19%) (see Figure 1)

S3. During this period, DFID’s business model involved allocating funds through divisional programmes. Analysis of these evaluation studies according to this business model shows that:
– Across regional divisions, the amount of money covered per study varied from £63 million in Europe and Central Asia to £427 million in East and Central Africa.
– Across non-regional divisions, the amount of money covered per study varied from £84 million in Policy Division to £5,305 million in Europe and Donor Relations (see Figure 2).

S4. Part of the explanation of these differences is that the evaluations studied form only part of the overall scrutiny of DFID’s work. In particular, its policy on evaluation commits DFID to rely on the evaluation systems of partner multilateral organisations for assessment of the effectiveness and efficiency of multilateral aid. No central reviews of data generated through those systems were included in the documents reviewed for this study. The impact of DFID’s Bilateral and Multilateral Aid Reviews was not considered, as the Reviews had not been completed by the time this study was undertaken.

S5. The evaluations reviewed had a strong focus on DFID’s bilateral aid programmes at country level. There was a good match overall between the frequency of studying countries and the amount of DFID bilateral aid received (see Table 4). Despite the growing focus on fragile states, such countries were still less likely to be studied than non-fragile countries. Countries that received large amounts of DFID bilateral aid not evaluated in the last five years included Tanzania, Iraq and Somalia (see Table 5). Regional programmes in Africa also received large amounts of DFID bilateral aid but were not centrally evaluated. Country programme evaluations did not consider DFID’s multilateral aid specifically. None of the evaluations reviewed considered why the distribution of DFID’s multilateral aid by country differs so significantly from its bilateral aid. For example, Turkey is the single largest recipient of DFID multilateral aid but receives almost nothing bilaterally (see Table 7).

S6. The evaluations reviewed covered a wide range of thematic, sectoral and policy issues (see Figure 3). These evaluations were, however, largely standalone exercises rather than drawing either retrospectively on data gathered in other evaluations or prospectively including questions into proposed evaluations. More use could have been made of syntheses of country programme evaluations for this purpose.

S7. The evaluations explored in detail the delivery of DFID’s bilateral aid and issues of how aid could be delivered more effectively. The evaluations covered the provision of multilateral aid in much less detail (see paragraph S4). One area not covered in the evaluations is the increasing use of multilateral organisations to deliver bilateral aid programmes. This more than trebled from £389 million in 2005/6 to £1.3 billion in 2009/10 and, by 2009/10, was more than double the amount being provided as financial aid through both general and sectoral budget support combined.

[RD comment:  I had the impression that DFID, like many bilateral donors, does very few ex-post evaluations, so I wanted to find out how correct this view was. I searched for “ex-post” and found nothing. The question then is whether the new Independent Commission for Aid Impact (ICAI) will address this gap – see more on this here]

UK Independent Commission for Aid Impact – Work Plan

Independent Commission for Aid Impact – Work Plan, and the associated Press Release (12 May 2011)

1. This document introduces the Independent Commission for Aid Impact’s first work plan, setting out the reports we envisage initiating over the next three years, from May 2011 to May 2014.

2. Our mandate permits us to examine all UK Government programmes funded by Official Development Assistance expenditure. In 2009, this represented £7.4bn, which was spent through bilateral, joint and multilateral processes by the Department for International Development (DFID) and at least eight other branches of Government. Under the Government’s current plans and guided by its recent reviews of bilateral, multilateral and humanitarian work, this expenditure is due to rise significantly and will change in focus. This range of projects and programmes gives us significant discretion in choosing where to focus the attention of our reports.” ..continues..

See also: FRAMEWORK AGREEMENT BETWEEN THE DEPARTMENT FOR INTERNATIONAL DEVELOPMENT (DFID) AND THE INDEPENDENT COMMISSION FOR AID IMPACT (ICAI) This document sets out the broad framework within which the ICAI will operate as a permanent body (12 May 2011 – 11 May 2015). The Agreement is signed by the Chief Commissioner of the ICAI and DFID. This document, and any future revisions, will be made public on the ICAI website.

[RD Comments: The workplan has three strands of work:

  • Evaluations: are likely to focus on the sustainable development impact achieved by programmes against initial or updated objectives
  • Value for money reviews: will consider whether objectives have been achieved with the optimal use of resources
  • Investigations: could range from general fact-finding in response to external requests, to assessments of compliance with legal and policy responsibilities and examinations of alleged corruption cases.

Regarding the first strand, the OECD DAC definition of impact is “Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.” One practical way of defining long term would be as any change observed after the completion of a project (typically 3 years). This would seem to be an appropriate focus for the ICAI because in the past DFID has undertaken very few ex-post evaluations. There is a gap here that needs to be addressed, as there is with quite a few other bilateral agencies.

A further justification lies in the useful connection with value for money reviews. Some organisations, like the Global Environment Facility, define impact as “A fundamental and durable change in the condition of people and their environment brought about by the project” – in other words, a sustained change. The longer a change is sustained (all other things being equal), the more value for money would seem to have been realised. Assessing impact in the short term (i.e. during the project implementation period) risks understating impact and the associated value for money.

The question that then arises in my mind is to what extent the ICAI programme of evaluations will focus on projects that have been completed, versus those which are still being implemented. I will be asking the ICAI if they could provide an answer, for example in the form of the percentage of completed versus uncompleted projects to be examined in each of the 8 evaluations to be undertaken in year 1]

PS 24 May 2011: See also Howard White’s related question about the ICAI’s use of ex-post and ex-ante evaluations. There he seems to be arguing against ex-post evaluations: “There is a question as to whether the commission restricts itself to ex-post evaluations, done once the intervention is being implemented or completed. Or can it engage in ex-ante designs before the intervention has started? Designing the evaluation prior to the launch of a programme, and collecting baseline data, generally delivers more robust findings.”

This seems like the method tail wagging the programme development dog. Or, looking for a lost wallet under a lamppost. The potential for rigour should not determine what gets evaluated. What gets evaluated should be decided by more strategic considerations, like the fact that we know very little about the long-term effects of most development projects (where long term = after the project intervention ceases).

NAO report: DFID Financial Management Report

NAO Press Release 6 April 2011…

“Sound financial management will be essential at the Department for International Development as its spending increases by a third over the next four years, according to the National Audit Office.

The Department has improved its core financial management and has an ambitious programme underway to improve its focus on value for money. It has put important building blocks in place; however, its financial management is not yet mature. The Department cannot yet assess important aspects of the value for money of the aid it has delivered, at an aggregated level.

The Department’s programme budget will grow by £3.3 billion from 2010-11 to 2014-15 (34 per cent in real terms). At the same time, its administration budget is going to reduce by a third. The Department will face significant financial and operational challenges, making sound financial management essential.

The Department has increased the number of finance professionals it employs, but this expertise needs to be used more effectively across the business. In addition, new financial information systems do not yet provide the data needed to support well-founded decisions and forecasts are still an area of weakness.

Having conducted a thorough review, the Department now has a high level plan allocating its resources on the basis of the results it aims to achieve.  Along with actions to strengthen measurement of aid projects, this has the potential to help strengthen the focus on aid results and value for money. But key risks need to be managed and the Department should now develop a coherent, single strategy for doing so.

With greater spending in higher risk locations and more fragile states, the Department must do more to assure itself that it minimises fraud and corruption risks. Although the level of reported fraud is low, it is likely to be under-reported. The NAO has found that the investigation of fraud is reactive and the Department does not attempt to quantify its estimated likely fraud losses.

Amyas Morse, head of the National Audit Office, said today:

“The Department knows its increase in funding, and new approach to aiding developing countries, brings challenges. This report shows considerable progress is being made, but a better information environment is needed to deal with the heightened levels of assurance required in targeting future aid at higher risk locations”

[RD comment] The Executive Summary ends with a section titled: Conclusion on value for money, which says:

  • We recognise that the Department has been improving its core financial management and has also been strengthening its focus on value for money at all levels of the organisation, including through a step change in its approach to the strategic allocation of resources based on expected results. Important building blocks have been put in place, but key gaps in financial management maturity remain. The changes the Department has introduced to date are positive, and provide a platform to address the challenges that will come with its increased spending.
  • At present, however, the Department’s financial management is not mature. The Department’s forecasting remains inaccurate and its risk management is not yet fully embedded. Weaknesses in the measurement of value for money at project level, variability in the quality and coverage of data, and lack of integration in core systems, mean that the Department cannot assess important aspects of value for money of the aid it has delivered, at an aggregated level. The Department now needs to develop a coherent single strategy to address the weaknesses identified and the key risks to meeting its objectives.

Value for money: A list

Hopefully, the start of a short but useful bibliography, listed in chronological order.

Please suggest additional documents by using the Comment facility below. If you have ideas on how Value for Money can be clearly defined and usefully measured, please also use the Comment facility below.

For the Editor’s own suggestion, go to the bottom of this page.

  • ICAI’s Approach to Effectiveness and Value for Money, November 2011. See also Rick Davies comments on same
  • Value for Money and international development: Deconstructing some myths to promote more constructive discussion. OECD Consultation Draft. October 2011
  • What does ‘value for money’ really mean? CAFOD, October 2011
  • Value for Money: Guideline, NZAID, updated July 2011
  • DFID’s Approach to Value for Money (VfM), July 2011
  • DFID Briefing Note: Indicators and VFM in Governance Programming July 2011.  INTRODUCTION: This note provides advice to DFID staff on: i. governance indicator best practice, and ii. measuring the Value for Money of governance programmes. This note is for use primarily by DFID governance advisers, as well as other DFID staff designing programmes with governance elements. The note provides a framework for consideration in Business Case design that relates to governance activity.  On Value for Money (VFM) in particular, this guidance is only intended as ‘interim’ whilst further research is undertaken. During 2011-2012, DFID will work to determine best practice and establish agreed approaches and mechanisms. This guidance will therefore be updated accordingly subject to research findings as they are made available.  This note was drawn up by DFID staff. It builds on 2 research reports by ITAD, submitted in December 2010 and January 2011 respectively, as well as DFID’s internal Business Case guidance. There are 2 main sections: Section 1: Governance Indicators and Section 2: Value for Money in Governance Programming. The note ends with 10 Top Tips on Business Case preparation.
  • DFID is developing “Guidance for DFID country offices on maximising VfM in cash transfer programmes”, July 2011. “Objective: To provide guidance to DFID country offices on measuring value for money in cash transfer programmes through the rigorous analysis of costs and benefits, as far as possible, at the design stage and through programme implementation and completion. This project is driven by DFID’s expansion of support to cash transfer programmes, its strong emphasis on ensuring programmes are delivering value for money, and strong country office demand for specific advice and guidance” (ToRs)
  • Value for Money: Current Approaches and Evolving Debates. Emmi Antinoja, Ozlem Eskiocak, Maja Kjennerud, Ilan Rozenkopf and Florian Schatz, LSE, London, May 2011. 43 pages. “NGOs have increasingly been asked by donors to demonstrate their Value for Money (VfM). This report analyses this demand across a number of dimensions and intends to lay out the interpretation of different stakeholders. After contextualising the debate internationally and nationally, a conceptual discussion of possible ways of defining and measuring VfM is conducted, followed by a technical analysis of different approaches and measurement techniques adopted by stakeholders. Finally, opportunities and caveats of measuring VfM are discussed. The report draws heavily on information gained through a total of seventeen interviews with representatives of NGOs, consultancies, think tanks and academic institutions.”
  • Independent Commission for Aid Impact – Work Plan, May 2011: “We have not yet agreed our own definition of terms such as “value for money” and “aid effectiveness”. These are complex issues which are currently under much debate. In the case of value for money we believe that this should include long-term impact and effectiveness. We intend to commission our contractor to help us in our consideration of these matters.”
  • The Guardian, Madeleine Bunting, 11th April 2011: “Value for money is not compatible with increasing aid to ‘fragile states’. The two big ideas from the UK’s Department for International Development are destined for collision”
  • NAO report on DFID Financial Management, April 2011. See the concluding section of the Executive Summary, titled Conclusion on value for money:
    • “We recognise that the Department has been improving its core financial management and has also been strengthening its focus on value for money at all levels of the organisation, including through a step change in its approach to the strategic allocation of resources based on expected results. Important building blocks have been put in place, but key gaps in financial management maturity remain. The changes the Department has introduced to date are positive, and provide a platform to address the challenges that will come with its increased spending.”
    • At present, however, the Department’s financial management is not mature. The Department’s forecasting remains inaccurate and its risk management is not yet fully embedded. Weaknesses in the measurement of value for money at project level, variability in the quality and coverage of data, and lack of integration in core systems, mean that the Department cannot assess important aspects of value for money of the aid it has delivered, at an aggregated level. The Department now needs to develop a coherent single strategy to address the weaknesses identified and the key risks to meeting its objectives.”
  • DFID’s March 2011 Multilateral Aid Review “was commissioned to assess the value for money for UK aid of funding through multilateral organisations”. “All were assessed against the same set of criteria, interpreted flexibly to fit with their different circumstances, but always grounded in the best available evidence. Together the criteria capture the value for money for UK aid of the whole of each organisation. The methodology was independently validated and quality assured by two of the UK’s leading development experts. The assessment framework included criteria which relate directly to the focus and impact of an organisation on the UK’s development and humanitarian objectives – such as whether or not they are playing a critical role in line with their mandate, what this means in terms of results achieved on the ground, their focus on girls and women, their ability to work in fragile states, their attention to climate change and environmental sustainability, and their focus on poor countries. These criteria were grouped together into an index called “Contribution to UK development objectives”. The framework also included criteria which relate to the organisations’ behaviours and values that will drive the very best performance – such as transparency, whether or not cost and value consciousness and ambition for results are driving forces in the organisation, whether there are sound management and accountability systems, whether the organisations work well in partnership with others and whether or not financial resource management systems and instruments help to maximise impact. These were grouped together into an index called “Organisational strengths”. Value for money for UK aid was assessed on the basis of performance against both indices.
So, for example, organisations with a strong overall performance against both indices were judged to offer very good value for money for UK aid, while those with a weak or unsatisfactory performance against both indices were deemed to offer poor value for money.”
    • [RD comment] In the methodology chapter the authors explain/claim that this approach is based on a 3E view that seeks to give attention to the whole “value for money chain” (née causal chain), from inputs to impacts (which is discussed below). Reading the rest of that chapter, I am not convinced; I think the connection is tenuous, and what exists here is a new interpretation of Value for Money that will not be widely used. That said, I don’t envy the task the authors of this report were faced with.
    • [RD comment] The Bilateral Aid Review makes copious references to Value for Money, but there is no substantive discussion of what it means anywhere in the review. Annex D includes a proposal format with a section for providing Value for Money information in 200 words. This includes the following fields, which are presumably explained elsewhere: qualitative judgement of VfM, VfM metrics (including cost-benefit measures), unit costs, scalability, comparators, and an overall VfM RAG rating: red/amber/green.
  • Aid effectiveness and value for money aid: complementary or divergent agendas as we head towards HLF-4 (March 2011). This ODI, ActionAid and UK Aid Network public event was called “to reflect on approaches dominating the debate in advance of the OECD’s 4th High Level Forum on Aid Effectiveness (HLF-4); explore the degree to which they represent complimentary or divergent agendas; and discuss how they might combine to help ensure that HLF-4 is a turning point in the future impact of aid.” The presentations of three of the four speakers are available on this site. Unfortunately DFID’s presentation, by Liz Ditchburn (Director, Value for Money, DFID), is not available.
  • BOND Value for Money event (3 February 2011). “Bond hosted a half day workshop to explore this issue in more depth. This was an opportunity to take stock of the debates on Value for Money in the sector, to hear from organisations that have trialled approaches to Value for Money and to learn more about DFID’s interpretation of Value for Money from both technical and policy perspectives.” Presentations were made by (and are available from): Oxfam, VSO, WaterAid, HIV/AIDS Alliance, and DFID (Jo Abbot, Deputy Head, Civil Society Department). There was also a prior BOND event in January 2011 on Value for Money, and its presentations are also available, including an undated National Audit Office Analytical framework for assessing Value for Money.
    • [RD Comment] The DFID presentation on “Value for Money and Civil Society” is notable in the way it seeks to discourage NGOs from over-investing in efforts to measure Value for Money, and in its emphasis on the continuity of DFID’s approach to assessing CSO proposals. The explanation of Value for Money is brief, captured in two statements: “optimal use of resources to get desired outcomes” and “maximum benefit for the resources requested”. To me this reads as efficiency and cost-effectiveness.
  • The Independent Commission for Aid Impact (ICAI)’s January 2011 online consultation contrasts Value for Money reviews with Evaluations, Reviews and Investigations, as follows.
    • Value for money reviews: judgements on whether value for money has been secured in the area under examination. Value for money reviews will focus on the use of resources for development interventions.
    • Evaluations: the systematic and objective assessment of an on-going or complete development intervention, its design, implementation and results. Evaluations will focus on the outcome of development interventions.
    • Reviews: assessments of the performance of an intervention, periodically or on an ad hoc basis. Reviews tend to look at operational aspects and focus on the effectiveness of the processes used for development interventions.
    • Investigations: a formal inquiry focusing on issues around fraud and corruption.
      • [RD comment] The ICAI seems to take a narrower view than the National Audit Office, focusing on economy and efficiency and leaving out effectiveness – which within its perspective would be covered by evaluations.

  • Measuring the Impact and Value for Money of Governance & Conflict Programmes Final Report December 2010 by Chris Barnett, Julian Barr, Angela Christie, Belinda Duff, and Shaun Hext. “The specific objective stated for our work on value for money (VFM) in the Terms of Reference was: “To set out how value for money can best be measured in governance and conflict programming, and whether the suggested indicators have a role in this or not”. This objective was taken to involve three core tasks: first, developing a value for money approach that applies to both the full spectrum of governance programmes, and those programmes undertaken in conflict-affected and failed or failing states; second, that the role of a set of suggested indicators should be explored and examined for their utility in this approach, and, further, that existing value for money frameworks (such as the National Audit Office’s use of the 3Es of ‘economy, efficiency and effectiveness’) should be incorporated, as outlined in the Terms of Reference.”
  • Value for Money: How are other donors approaching ‘value for money’ in their aid programming? Question and answer on the Governance and Social Development Resource Centre Help Desk, 17 September 2010.
  • Value for Money (VfM) in International Development NEF Consulting Discussion Paper, September 2010. Some selective quotes: “While the HM Treasury Guidance provides principles for VfM assessments, there is currently limited guidance on how to operationalise these in the international development sector or public sector more generally. This has led to confusion about how VfM assessments should be carried out and seen the proliferation of a number of different approaches.” … “The HM Treasury guidance should inform the VfM framework of any publicly-funded NGO in the development sector. The dark blue arrow in Figure 1 shows the key relationship that needs to be assessed to determine VfM. In short, this defines VfM as: VfM = (value of positive + negative outcomes) / investment (or cost)”
  • [RD Comment:] Well now, having that formula makes it so much easier (not): all we have to do is find the top values, add them up, then divide by the bottom value :-(
  • What is Value for Money? (July 2010) by the Improvement Network (Audit Commission, Chartered Institute of Public Finance and Accountancy (CIPFA), Improvement and Development Agency (IDeA), Leadership Centre for Local Government, NHS Institute for Innovation and Improvement). “VfM is about achieving the right local balance between economy, efficiency and effectiveness, the 3Es – spending less, spending well and spending wisely.” These three attributes are each related to different stages of aid delivery, from inputs to outcomes, via this diagram.
  • [RD comment]: Reading this useful page raises two interesting questions. Firstly, how does this framework relate to the OECD/DAC evaluation criteria? Is it displacing them, as far as DFID is concerned? It appears so, given its appearance in the Terms of Reference for the contractors who will do the evaluation work for the new Independent Commission for Aid Impact. Ironically, the Improvement Network makes the following comment about the third E (effectiveness), which suggests that the DAC criteria may be re-emerging within this new framework: “Outcomes should be equitable across communities, so effectiveness measures should include aspects of equity, as well as quality. Sustainability is also an increasingly important aspect of effectiveness.” The second interesting question is how Value for Money is measured in aggregate, taking into account all three Es. Part of the challenge is with effectiveness, where it is noted that “effectiveness is a measure of the impact that has been achieved, which can be either quantitative or qualitative.” Then there is the notion that Value for Money is about a “balance” of the three Es: “VfM is high when there is an optimum balance between all three elements – when costs are relatively low, productivity is high and successful outcomes have been achieved.” On the route to that heaven there are multiple possible combinations of states of economy (+,-), efficiency (+,-) and effectiveness (+,-), and no one desired route or ranking. Because of these difficulties Sod’s Law will probably apply, and attention will focus on what is easiest to measure, i.e. economy or, at most, efficiency. This approach seems to be evident in earlier government statements about DFID: “International Development Minister Gareth Thomas yesterday called for a push on value for money in the UN system with a target of 25% efficiency savings.” … “The UK is holding to its aid commitments of 0.7% of GNI. But for the past five years we have been expected to cut 5% from our administration or staffing costs across Government. 5% – year on year.”

The Editor’s suggestion

1. Don’t seek to create an absolute measure of the Value for Money of a single activity/project/program/intervention.

2. Instead, create a relative measure of the VfM found within a portfolio of activities, by using a rank correlation. [This measure can then be used to compare VfM across different types of portfolios.]

  • 1. Rank the entities (activities/projects…) by the cost of their inputs, and
    • Be transparent about which costs were included/excluded (e.g. partners’ own costs, other donor contributions, etc.)
  • 2. Rank the same set of entities by their perceived effectiveness or impact (depending on the time span of interest)
    • Ideally this ranking would be done through a participatory ranking process (see Refs below), and information would be available on the stakeholders who were involved
    • Where multiple stakeholder groups were consulted, any aggregation of their rankings would be done using transparent weighting values and information would also be available on the Standard Deviation of the rankings given to the different entities. There is likely to be more agreement across stakeholders on some rankings than others.
    • Supplementary information would be available detailing how stakeholders explained their ranking. This is best elicited through pair comparisons of adjacent sets of ranked entities.
      • That explanation is likely to include a mix of:
        • some kinds of impacts being more valued by the stakeholders than others, and
        • for a given type of impact there being evidence of more rather than less of that kind of impact, and
        • where a given impact is on the same scale, there being better evidence of that impact
  • 3. Calculate the rank correlation between the two sets of rankings. The results will range between these two extremes:
    • A high positive correlation (e.g. +0.90): here the highest impact is associated with the highest cost ranking, and the lowest impact with the lowest cost ranking. Results are proportionate to investments. This is the preferred finding, compared to
    • A high negative correlation (e.g. -0.90): here the highest impact is associated with the lowest cost ranking, and the lowest impact with the highest cost ranking. Here the more you increase your investment, the less you gain. This is the worst possible outcome.
    • In between will be correlations closer to zero, where there is no evident relationship between cost and impact ranking.
  • 4. Opportunities for improvement would be found by doing case studies of “outliers”, found when the two rankings are plotted against each other in a graph. Specifically:
    • Positive cases, whose rank position on cost is conspicuously lower than their rank position on impact.
    • Negative cases, whose rank position on impact is conspicuously lower than their rank position on cost.
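The steps above can be sketched in a few lines of code. Everything in this sketch is hypothetical: the project labels, cost rankings, impact rankings and the outlier threshold are all invented for illustration, and Spearman’s rho is computed from the standard formula, which assumes no tied ranks.

```python
# Illustrative sketch of the suggested portfolio VfM measure.
# All project names and rankings below are hypothetical.

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two rankings with no tied ranks:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))"""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Step 1: rank seven projects by the cost of their inputs (1 = cheapest).
cost_ranks = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7}
# Step 2: rank the same projects by stakeholder-perceived impact (1 = lowest).
impact_ranks = {"A": 4, "B": 1, "C": 3, "D": 2, "E": 5, "F": 7, "G": 6}

# Step 3: the rank correlation between the two sets of rankings.
projects = sorted(cost_ranks)
rho = spearman_rho([cost_ranks[p] for p in projects],
                   [impact_ranks[p] for p in projects])
print(f"rho = {rho:+.2f}")  # → rho = +0.71, a fairly high positive correlation

# Step 4: flag outliers by the gap between a project's two rank positions.
for p in projects:
    gap = impact_ranks[p] - cost_ranks[p]
    if gap >= 2:
        print(p, "positive outlier: impact rank well above cost rank")
    elif gap <= -2:
        print(p, "negative outlier: impact rank well below cost rank")
# → A is flagged positive (cheap but relatively high impact),
#   D is flagged negative (relatively costly but low impact).
```

In practice the outlier threshold (here an arbitrary rank gap of 2) would need to be agreed in advance; the flagged cases are the candidates for the case studies suggested in step 4.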

PS: It would be important to disclose the number of entities that have been ranked. The more entities being ranked, the more precise the rank correlation will be. However, the more entities there are to rank, the harder the task will be for participants, and the more likely they are to use tied ranks. A minimum of seven rankable entities would seem desirable.
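The point about precision can be illustrated with a small simulation (the portfolio sizes, trial count and seed are all arbitrary choices): when the impact ranking is shuffled at random, so there is genuinely no cost-impact relationship, sizeable correlations still appear by chance, and the size of that chance band shrinks as the number of ranked entities grows.

```python
import random

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two rankings with no tied ranks."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def chance_spread(n, trials=2000, seed=1):
    """Standard deviation of rho when the impact ranking is pure noise:
    how large a correlation can arise by chance with n ranked entities."""
    rng = random.Random(seed)
    cost = list(range(1, n + 1))
    rhos = []
    for _ in range(trials):
        impact = cost[:]
        rng.shuffle(impact)  # impact ranking unrelated to cost ranking
        rhos.append(spearman_rho(cost, impact))
    mean = sum(rhos) / trials
    return (sum((r - mean) ** 2 for r in rhos) / trials) ** 0.5

# The chance spread shrinks roughly as 1/sqrt(n - 1):
for n in (7, 15, 30):
    print(n, round(chance_spread(n), 2))
```

With only seven entities, chance alone produces correlations of roughly ±0.4, so a portfolio’s rho needs to sit well clear of that band before it says much; this is the sense in which ranking more entities makes the correlation more precise.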

For more on participatory ranking methods see:

PS: There is a UNISTAT plugin for Excel that will produce rank correlations, plus much more.

The future of UK aid: Changing lives, delivering results: our plans to help the world’s poorest people

The results of two DFID reviews were made public on 1st March 2011 and are available on the DFID website.

See also:
