NAO Review – DFID: Progress in improving performance management

Publication date: 12 May 2009. Full report (PDF – 366KB)

Executive Summary

1.  This brief review of the Department for International Development’s (DFID) performance management arrangements during 2008 is a follow-up to our 2002 value-for-money (VFM) report on the same topic. It responds to a request from DFID’s Accounting Officer to revisit the topic periodically, which the C&AG agreed would be valuable. It is based on a desk review of key documents, interviews with DFID staff, and a survey of staff and stakeholder views about evaluation (Appendix 3). We did not undertake a full audit of DFID systems, some of which are in the process of being revised, and we concentrated on those areas of DFID activity most directly related to its performance targets. We drew on recent DFID reviews of monitoring and evaluation, and our findings are consistent with the results of those reviews.

2.  DFID has responded well to our 2002 recommendations, improving the coverage and specification of its Public Service Agreement (PSA) indicators, streamlining planning and performance monitoring arrangements, instituting better Board review procedures, enhancing the performance review of multilateral funding, and adding to the scale and independence of evaluation work. Perhaps most strikingly, all the DFID staff we spoke to were fully aware of the Millennium Development Goals (MDGs) and the associated PSA targets, and teams used the associated indicators to structure their debates about priorities and performance – in marked contrast to the findings of our earlier study.

3.  Challenges to effective performance management, however, remain. The prime challenge is securing sufficient reliable, timely data on poverty reduction outcomes and service delivery outputs to underpin aid targeting and performance analysis. Although DFID is the prime bilateral supporter of developing country statistical capacity, progress in securing better, more frequent poverty-related data has been slow. DFID has been unable, for example, to set a credible baseline for income poverty in 5 of its 22 PSA countries, and has inadequate trend information in a further 4. An increasing DFID focus on fragile states exacerbates this problem. There are similar gaps in output data – data which are crucial to support performance management and value for money judgements in an area where outcomes can take many years to emerge, and may not be measured even then. Current DFID work to develop standard output indicators is welcome, if limited.

4.  A related issue concerns the challenge of associating inputs with outputs and outcomes. When much bilateral aid was in the form of discrete projects, DFID could associate its own inputs with project outputs, and probably with intermediate outcomes as well. With the increasing use of programmatic aid, those links are problematic in principle and, in practice, are undermined by serious weaknesses in developing country statistical systems. DFID has developed a set of standard indicators and is looking to strengthen economic appraisals to help inform project approval decisions and develop cost-effectiveness benchmarks. It has created a central unit charged with promoting and assessing value for money. And it has revised and strengthened its guidance on the creation of monitoring frameworks, requiring explicit baselines and a “pro-rating” of DFID’s share of benefits according to its share of inputs. The challenge will be for DFID to maintain the momentum of these initiatives with decreasing staff resources but increasing programme spend.
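To make the pro-rating rule concrete, consider a hypothetical illustration of our own, not an example drawn from the report: if DFID contributed £20 million of a £100 million education programme that enrolled an additional 500,000 children in school, pro-rating would attribute 20 per cent of the benefit – 100,000 enrolments – to DFID’s funding.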

5.  There are two further areas where, given the significance DFID attaches to them, measurement practice is not sufficiently advanced. First, measurement of the organisational capacity of development partners lacks structure and scale, given that so much DFID aid incorporates explicit capacity building elements. While a number of ratings and assessments are used in specific sectors, along with some overall “government effectiveness” indicators, DFID has no corporate approach to measuring the capacity built or sustained. Second, DFID continues to struggle to measure the impact of its “influencing” work. Disentangling the critical influences on changes in policies or practices presents technical challenges. Initial DFID measurement trials took place six years ago, in the context of its work with multilateral organisations, and DFID has recently completed further pilots seeking to apply approaches originally designed to measure projects. An evaluation of these pilots confirmed the value of DFID’s ‘logical framework’ approach, while noting some limitations. Performance measurement in this area is hindered by the lack of good measurement of inputs: DFID has no staff time booking system that allows it to monitor the scale, cost or nature of resources committed to “influencing”.

6.  DFID has well-established systems for specifying aid objectives, and associated indicators, in a logical hierarchy, and for using those indicators in approving and monitoring its programmes. But its management of these systems has not been sufficiently firm. A number of our recent audit studies have commented on problems with PRISM, the former IT system used to store and collate project and programme data, as well as weaknesses in specific project or programme objectives. In this review, the projects we examined had many different indicators of progress, often leading to subjective judgements of project progress. System incentives encourage more, rather than less, favourable judgements. Recently, a DFID-commissioned monitoring and evaluation (M&E) audit found widespread problems with defective logical frameworks and weaknesses in management review. DFID has fixed some of the structural issues and is implementing a new integrated financial and project management system (ARIES). But the lack of staff skills in M&E and weak in-the-line review remain real issues – staff told us, for example, of a lack of training in M&E. They also noted that potential out-of-the-line review bodies, such as the new Investment Committee, would need to address portfolio review and systemic issues by reference to a sample of actual programme proposals if they were to help secure better compliance with DFID policies on programme appraisal and monitoring.

7.  DFID’s Evaluation Department (EvD) has increased in scale and stature since 2002, and the recent creation of an Independent Advisory Committee on Development Impact (IACDI) provides a further buttress to evaluation independence and quality. EvD remains less structurally independent than many other development agency evaluation units, however, and its recent programme has been slanted towards practice and strategy reviews rather than focusing on DFID impact or securing better information on aid cost-effectiveness. Work is in hand to refresh evaluation policy and strategy. Staff and stakeholder perceptions of DFID evaluation are favourable, albeit with clear scope for improvement. Evaluation is not, however, well integrated into general DFID performance management – it is not, for example, tasked with providing a better interpretation of DFID’s performance against PSAs or MDGs, and it is rarely set to work on cost-effectiveness topics. External stakeholders ranked evaluation’s potential contribution to accountability more highly than did DFID staff.

8.  In an area where outcomes often take many years to emerge, can be difficult to measure, and are subject to many influences other than DFID, programme implementation (and M&E in particular) has attracted less kudos than policy analysis, programme planning and fire-fighting. DFID’s 2008 Results Action Plan is designed to address this situation, and is a high corporate priority which will be closely monitored by top management. Stronger management arrangements for performance review have now been put in place to drive the department-wide focus on results, with more formal divisional performance review and challenge arrangements, rather than structuring scrutiny (and therefore perceptions of credit or recognition) around policy or planning papers.

See also a related report:
Assessing the quality of DFID’s Project Reviews, March 2007

The assignment was commissioned to answer:
a. whether the quality of DFID project documentation (particularly reviews) is changing over time, and
b. whether the scoring of reviews is consistent across the portfolio.
It is primarily a descriptive report of findings. In some cases comments are made, but its principal function is to identify what is taking place.
