MPs report on Department for International Development Financial Management

The Commons Public Accounts Committee publishes its 52nd report of Session 2010-12, on the basis of evidence from the Department for International Development (DFID).

The Rt Hon Margaret Hodge MP, Chair of the Committee of Public Accounts, said:

“The amount DFID spends on aid will rise by 35% by 2013, but at the same time the Department has to cut its overall running costs by a third.
Achieving this level of savings at a time of rapid expansion in frontline services involves a substantial challenge if taxpayers’ money is to be properly protected and value for money secured. [emphasis added]

The Department is going to be spending more in fragile and conflict-affected countries and the danger to the taxpayer is that there could be an increase in fraud and corruption. However, the Department could not even give us information on the expected levels of fraud and corruption and the action it was taking to mitigate them.

Unfortunately, the Department has not always kept its eye on the financial ball, and in 2010 stopped monitoring its finance plan. That must not happen again and DFID should report publicly on its financial management.

The Department’s ability to make informed spending decisions is undermined by its poor understanding of levels of fraud and corruption. Its current approach is too reactive and it needs to develop a sound framework for making sure funds are spent properly on the ground. This will be even more important as the Department channels more of its funding into fragile and conflict-affected states.

The Department’s current plan is to spend more via multilateral organisations and less through bilateral programmes. This poses a risk to value for money because the Department will have less oversight than it does over country-to-country programmes. Indeed, we are concerned that the strategy has more to do with the fact that it is easier to spend through multilaterals than go through the process of assessing the value for money of bilateral programmes. [emphasis added]

To maximise the amount of aid that gets through to the frontline, DFID should have clear plans for how it is going to reduce or control running costs – particularly when channelling funding through partner and multilateral organisations with a management overhead at every stage.” [emphasis added]

Margaret Hodge was speaking as the committee published its 52nd Report of this Session which, on the basis of evidence from the Department for International Development, examined its financial management capability, its increasing focus on value for money, and the challenges it faces in managing its increasing programme budget while reducing its overall running costs.

RD Comment: See the Rick on the Road blog posting of Thursday, July 24, 2008, An aid bubble? – Interpreting aid trends, which raises the same issues as those highlighted in bold above.

See also the HoC International Development Committee session (Committee Room 15), Working effectively in fragile and conflict-affected states: DRC, Rwanda and Burundi

Monitoring Policy Dialogue: Lessons From A Pilot Study

By Sadie Watson and Juliet Pierce, September 2008. Department for International Development, Evaluation Report WP27.

Executive Summary

In 2007, a tool and process was developed for improving the recording and impact of policy dialogue initiatives across DFID. It was based on an adaptation of current project cycle management (PCM) requirements for programme spending. A pilot was devised to test the proposed tool and process in terms of:

• Assessing the value in recording and monitoring policy related activities in a similar way to that of spend activities;

• Finding the most effective and useful approach in terms of process;

• Identifying succinct ways to capture intentions and to measure performance;

• Clarifying the type and level of support and guidance required to roll the process out across DFID.

The ten participating pilot teams represented different aspects of DFID’s policy work, conducting different types of policy dialogue activities. The consultants were asked to monitor and evaluate the six-month pilot. They were also asked to review approaches to managing and monitoring policy dialogue and influencing activities in other organisations. This report highlights some lessons and observations from the pilot. It outlines some emerging issues and provides some pointers for DFID to consider as it continues to develop into an organisation where policy dialogue and influencing are increasingly important aid tools.

Measuring National Well-being: Measuring What Matters

[UK] National Statistician’s Reflections on the National Debate on Measuring National Well-being July 2011

Foreword
Introduction
Chapter 1: What is national well-being?
Chapter 2: Why measure national well-being and who will use the measures?
Chapter 3: Measuring national well-being
Chapter 4: Partnerships and next steps
References
Notes

Foreword
“On 25 November 2010, I accepted an invitation from the Prime Minister, David Cameron, to develop measures of national well-being and progress. I am convinced that this is something that can only be done with an understanding of what matters most to people in this country.

In response to this invitation, the Office for National Statistics (ONS) undertook a national debate on ‘what matters to you?’ between 26 November 2010 and 15 April 2011. I was impressed by the number of people who were willing to take part in discussions and also by the depth of responses. In total, ONS held 175 events, involving around 7,250 people, and the debate generated 34,000 responses, some of which were from organisations and groups representing thousands more. The quotes on each page of this report were taken from online contributions, where permission was given to reproduce the participant’s words anonymously. I am grateful to everyone who took the time to take part in the debate, and to those who organised and hosted events.

The debate has helped us identify the key areas that matter most and will help to ensure that the measures we use will be relevant not only to government but also to the wider public. This is crucial to allow for effective development and appraisal of policy, for individuals to use information to identify ways of improving well-being, and to allow for assessment of how society is doing overall.

The term ‘well-being’ is often taken to mean ‘happiness’. Happiness is one aspect of the well-being of individuals and can be measured by asking them about their feelings – subjective well-being. As we define it, well-being includes both subjective and objective measures. It includes feelings of happiness and other aspects of subjective well-being, such as feeling that one’s activities are worthwhile, or being satisfied with family relationships. It also includes aspects of well-being which can be measured by more objective approaches, such as life expectancy and educational achievements. These issues can also be looked at for population groups – within a local area, or region, or the UK as a whole.

Developing better measures of well-being and progress is a common international goal and the UK is working with international partners to develop measures that will paint a fuller picture of our societies. This is a long-term programme and I am committed to sharing our ideas and proposals widely. This will help to ensure that UK well-being measures are relevant and founded on what matters to people, both as individuals and for the UK as a whole, as well as being reliable and impartial and serving to improve our understanding of UK society.

This report summarises the contributions made to the debate and explains how ONS is using the findings to develop measures of national well-being. I look forward to your further comments and advice in response to this report. These should be sent to nationalwell-being@ons.gov.uk.”
Jil Matheson
National Statistician

See more on the ONS website

Released: Australian Government’s response to the Independent Review of Aid Effectiveness

The ‘Independent Review of Aid Effectiveness’ and the Government’s response were released on 6 July 2011 by Foreign Minister Kevin Rudd, in an official launch at Parliament House, followed by a Ministerial Statement to Parliament. For an overview, see this page on the AusAID website.

Independent Review of Aid Effectiveness:

Commissioned in November 2010, this was the first independent review of the aid program in 15 years. It made 39 recommendations to improve the program.

Australian Government response:

The Government has agreed (or agreed in principle) to 38 of the recommendations, including that the agency develop a three-tiered results framework for reporting on agency-wide performance.


RD Comment: The following section on Independent Evaluation is of particular interest [underlining added]:

ii) Independent Evaluations

“AusAID’s Independent Completion Reports and Independent Progress Reports are another key part of its Performance Management and Evaluation Policy.

Under current guidelines, a report must be completed for an activity every four years, either during its implementation (a progress report) or at completion (a completion report). Reports are required for projects above $3 million and are meant to be made public. They are independent in that they are done by individuals not involved in the project. Typically, but not always, they are written by non-AusAID staff.

By international standards, this policy is thorough. For example, at the World Bank, independent completion reports are done only for a sample of projects.

But a study of AusAID evaluation reports commissioned by the Review Panel found that implementation of AusAID’s evaluation policy is patchy:
• Of 547 projects that should have had a completion or progress report in 2006–10, only 170 were recorded as having been done.
• Of those 170, only 118 could be found.
• About 26 per cent of the completion and progress reports were assessed to be of too low quality to publish.
• Only about 20 have been published on the AusAID website.

Clearly, the policy is not being fully followed. Other problems were also evident. None of the 118 completion or progress reports reviewed provided an unsatisfactory rating. This raises questions of credibility. In comparison, 20 per cent of World Bank projects are rated unsatisfactory by its independent evaluation group.

There is also a structural issue with the policy: AusAID program managers must approve the publication of an independent report. This risks conflicts of interest and long delays in publication. The low rate of publication suggests these problems may be occurring.

Independent completion reports, when done and published, can be very useful. For example, the completion report on the first phase of the Indonesia Basic Education Project is in the public domain and helped to inform recent public debate about the second phase of the project (AusAID 2010b). In contrast, several useful completion reports have recently been done for the PNG program, but only one has been released.

Given the problems described above, it is not surprising that the Review Panel has seen little evidence that these reports inform and improve aid delivery.”
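The attrition implied by these figures is easier to see when worked through explicitly. The short Python sketch below recomputes the funnel from the counts quoted above; the only assumption added here is that the “26 per cent too low quality” finding applies to the 118 reports that could actually be found.

```python
# Back-of-the-envelope funnel from the Review's figures (quoted above).
# Assumption: the 26% "too low quality" share applies to the 118 found reports.
required = 547        # projects due a completion or progress report, 2006-10
recorded = 170        # reports recorded as having been done
found = 118           # reports that could actually be located
too_low_quality = 0.26
published = 20        # approximate number on the AusAID website

publishable = round(found * (1 - too_low_quality))

print(f"recorded as done: {recorded / required:.0%} of required")
print(f"located:          {found / required:.0%} of required")
print(f"publishable:      ~{publishable} reports")
print(f"published:        ~{published} ({published / required:.0%} of required)")
```

On these figures only about 31% of required reports were even recorded as done, about 22% could be located, roughly 87 were of publishable quality, and around 4% of the required total was actually published.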

Cost-Benefit Analysis in World Bank Projects

by Andrew Warner, Independent Evaluation Group, June 2010. Available as pdf

Cost-benefit analysis used to be one of the World Bank’s signature issues. It helped establish its reputation as the knowledge Bank and served to demonstrate its commitment to measuring results and ensuring accountability to taxpayers. It was the Bank’s answer to the results agenda long before that term became popular. This report takes stock of what has happened to cost-benefit analysis at the Bank, based on analysis of four decades of project data, project appraisal and completion reports from recent fiscal years, and interviews with current Bank staff. The percentage of projects that are justified by cost-benefit analysis has been declining for several decades, due to both a decline in standards and difficulty in applying cost-benefit analysis. Where cost-benefit analysis is applied to justify projects, there are examples of excellent analysis but also examples of a lack of attention to fundamental analytical issues such as the public sector rationale and comparison of the chosen project against alternatives. Cost-benefit analysis of completed projects is hampered by the failure to collect relevant data, particularly for low-performing projects. The Bank’s use of cost-benefit analysis for decisions is limited because the analysis is usually prepared after making the decision to proceed with the project.

This study draws two broad conclusions. First, the Bank needs to revisit the policy for cost-benefit analysis in a way that recognizes legitimate difficulties in quantifying benefits while preserving a high degree of rigor in justifying projects. Second, it needs to ensure that when cost-benefit analysis is done it is done with quality, rigor, and objectivity, as poor data and analysis misinform, and do not improve results. Reforms to project appraisal procedures are required to ensure objectivity, improve both the analysis and the use of evidence at appraisal, and ensure effective use of cost-benefit analysis in decision-making.
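For readers who want a concrete sense of what a project-level cost-benefit analysis computes, here is a minimal sketch: discount the streams of costs and benefits, then compare them as a net present value (NPV) and a benefit-cost ratio. The cash flows and the 10% discount rate are illustrative assumptions, not figures from the IEG report.

```python
# Minimal cost-benefit sketch: NPV and benefit-cost ratio of one project.
# All cash flows and the 10% discount rate are illustrative assumptions.

def present_value(flows, rate):
    """Discount a list of annual cash flows (year 0 first) to today."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

costs = [100, 20, 20, 20, 20]      # year-0 investment plus running costs
benefits = [0, 60, 60, 60, 60]     # benefits start once the project is built
rate = 0.10

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)

npv = pv_benefits - pv_costs
bcr = pv_benefits / pv_costs
print(f"NPV = {npv:.1f}, benefit-cost ratio = {bcr:.2f}")
# A project is conventionally justified when NPV > 0 (equivalently BCR > 1);
# the IEG report's point is that this test is increasingly done late or not at all.
```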

Tools and Methods for Evaluating the Efficiency of Development Interventions

Palenberg, M. (2011), Evaluation Working Papers. Bonn: Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung (BMZ). Available as pdf.

Foreword:

Previous BMZ Evaluation Working Papers have focused on measuring impact. The present paper explores approaches for assessing efficiency. Efficiency is a powerful concept for decision making and ex-post assessments of development interventions but is nevertheless often treated rather superficially in project appraisal, project completion and evaluation reports. Assessing efficiency is not an easy task, but there is potential for improvement, as the report shows. Starting with definitions and the theoretical foundations, the author proposes a three-level classification related to the analytical power of efficiency analysis methods. Based on an extensive literature review and a broad range of interviews, the report identifies and describes 15 distinct methods and explains how they can be used to assess efficiency. It concludes with an overall assessment of the methods described and with recommendations for their application and further development.
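To give a flavour of the simplest end of that spectrum, the sketch below computes a basic cost-effectiveness ratio (cost per unit of outcome) for two interventions. The interventions and figures are invented for illustration and are not taken from Palenberg’s paper or its classification.

```python
# Simplest form of efficiency comparison: cost per unit of outcome
# (a basic cost-effectiveness ratio). All figures are invented.
interventions = {
    "school feeding":   {"cost": 500_000, "children_reached": 25_000},
    "teacher training": {"cost": 300_000, "children_reached": 10_000},
}

for name, data in interventions.items():
    unit_cost = data["cost"] / data["children_reached"]
    print(f"{name}: {unit_cost:.2f} per child reached")
# Comparing unit costs only ranks alternatives that share an outcome measure;
# more powerful methods (e.g. cost-benefit analysis) are needed to judge
# whether either intervention is worthwhile in absolute terms.
```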

Synthesis Study of DFID’s Strategic Evaluations 2005 – 2010


A report produced for the Independent Commission for Aid Impact
by Roger Drew, January 2011. Available as pdf.

Summary

S1. This report examined central evaluations of DFID’s work published from 2006 to 2010. This included:
– 41 reports of the International Development Committee (IDC)
– 2 Development Assistance Committee (DAC) peer reviews
– 10 National Audit Office (NAO) reports
– 63 reports of evaluations from DFID’s Evaluation Department (EVD)

S2. These evaluations consisted of various types:
– Studies of DFID’s work overall (16%)
– Studies with a geographic focus (46%)
– Studies of themes or sectors (19%)
– Studies of how aid is delivered (19%) (see Figure 1)

S3. During this period, DFID’s business model involved allocating funds through divisional programmes. Analysis of these evaluation studies according to this business model shows that:
– Across regional divisions, the amount of money covered per study varied from £63 million in Europe and Central Asia to £427 million in East and Central Africa.
– Across non-regional divisions, the amount of money covered per study varied from £84 million in Policy Division to £5,305 million in Europe and Donor Relations (see Figure 2).

S4. Part of the explanation of these differences is that the evaluations studied form only part of the overall scrutiny of DFID’s work. In particular, its policy on evaluation commits DFID to rely on the evaluation systems of partner multilateral organisations for assessment of the effectiveness and efficiency of multilateral aid. No central reviews of data generated through those systems were included in the documents reviewed for this study. The impact of DFID’s Bilateral and Multilateral Aid Reviews was not considered, as the Reviews had not been completed by the time this study was undertaken.

S5. The evaluations reviewed had a strong focus on DFID’s bilateral aid programmes at country level. There was a good match overall between the frequency of studying countries and the amount of DFID bilateral aid received (see Table 4). Despite the growing focus on fragile states, such countries were still less likely to be studied than non-fragile countries. Countries that received large amounts of DFID bilateral aid not evaluated in the last five years included Tanzania, Iraq and Somalia (see Table 5). Regional programmes in Africa also received large amounts of DFID bilateral aid but were not centrally evaluated. Country programme evaluations did not consider DFID’s multilateral aid specifically. None of the evaluations reviewed considered why the distribution of DFID’s multilateral aid by country differs so significantly from its bilateral aid. For example, Turkey is the single largest recipient of DFID multilateral aid but receives almost nothing bilaterally (see Table 7).

S6. The evaluations reviewed covered a wide range of thematic, sectoral and policy issues (see Figure 3). These evaluations were, however, largely standalone exercises rather than drawing either retrospectively on data gathered in other evaluations or prospectively including questions into proposed evaluations. More use could have been made of syntheses of country programme evaluations for this purpose.

S7. The evaluations explored in detail the delivery of DFID’s bilateral aid and issues of how aid could be delivered more effectively. The evaluations covered the provision of multilateral aid in much less detail (see paragraph S4). One area not covered in the evaluations is the increasing use of multilateral organisations to deliver bilateral aid programmes. This more than trebled from £389 million in 2005/6 to £1.3 billion in 2009/10 and, by 2009/10, was more than double the amount being provided as financial aid through both general and sectoral budget support combined.

[RD comment:  I had the impression that DFID, like many bilateral donors, does very few ex-post evaluations, so I wanted to find out how correct this view was. I searched for “ex-post” and found nothing. The question then is whether the new Independent Commission for Aid Impact (ICAI) will address this gap – see more on this here]
