BEYOND SUCCESS STORIES: MONITORING & EVALUATION FOR FOREIGN ASSISTANCE RESULTS

EVALUATOR VIEWS OF CURRENT PRACTICE AND RECOMMENDATIONS FOR CHANGE

This paper was produced independently by Richard Blue, Cynthia Clapp-Wincek, and Holly Benner. May 2009

See the Full-Report or Policy_Brief (draft for comment)

Findings derive from a literature review, interviews with senior USG officials, and, primarily, interviews with and survey responses from ‘external evaluators’: individuals who conduct evaluations of U.S. foreign assistance programs, whether as part of consulting firms or non-governmental organizations (NGOs) or as individual consultants. External evaluators were chosen because: 1) the authors are external evaluators themselves with prior USAID and State experience; 2) in recent years, the majority of evaluations of USG foreign assistance programs have been contracted out to external evaluation experts; and 3) evaluators are hired to investigate whether foreign assistance efforts worked or did not, and to ask why results were, or were not, achieved. This gives them a unique perspective.

Key Findings – Monitoring

The role of monitoring is to determine the extent to which the expected outputs or outcomes of a program or activity are being achieved. When done well, monitoring can be invaluable in helping project implementers and managers make mid-course corrections to maximize project impact. While monitoring requirements and practice vary across U.S. agencies and departments, the following broad themes emerged from our research:

•  The role of monitoring in the USG foreign assistance community has changed dramatically in the last 15 years. USG staff have shifted primarily to monitoring contractors and grantees rather than implementing programs themselves. Because this distances staff from implementation, it has resulted in the loss of dialogue, debate, and learning within agencies.

•  The myriad foreign assistance objectives require a multiplicity of indicators, which has led to onerous reporting requirements that try to cover all bases.

•  There is an over-reliance on quantitative indicators and outputs of deliverables over which project implementers have control (such as the number of people trained), rather than on qualitative indicators and outcomes, such as expected changes in attitudes, knowledge, and behaviors.

•  There is no standard guidance for monitoring foreign assistance programs: the requirements at MCC are very different from those at DOS and USAID, and some implementing agencies and offices have no guidance or standard procedures at all.

Key Findings – Evaluation

There is also great diversity in evaluation policies and practices across USG agencies administering foreign assistance. MCC has designed a very robust impact evaluation system for its country compacts, but these evaluations have yet to be completed. The Education and Cultural Affairs Bureau at the State Department has well-respected evaluation efforts, but there is limited evaluation work in other bureaus and offices in the Department. USAID has a long and rich evaluation history, but neglect and lack of investment, as well as recent foreign assistance reform efforts, have stymied those functions. The following themes emerged in our study:

The decision to evaluate: when, why, and with what funding:

•  Requirements governing the decision to evaluate vary across U.S. agencies; there is no policy or systematic guidance on what should be evaluated and why. More than three-quarters of survey respondents emphasized the need to make evaluation a required and routine part of the foreign assistance programming cycle.

•  Evaluators rarely have the benefit of good baseline data for U.S. foreign assistance projects, which makes it difficult to conduct rigorous outcome and impact evaluations that can attribute changes to the project’s investments.

•  While agencies require monitoring and evaluation plans as part of grantee contracts, insufficient funds are set aside for M&E because partners are pressured not to spend limited money on “non-programmatic” costs.

Executing an evaluation:

•  Scopes of work for evaluations often reflect a mismatch between the evaluation questions that must be answered and the methodology, budget, and timeframe given for the evaluation.

•  The majority of respondents felt that, because of limited budgets and time, evaluations were not sufficiently rigorous to provide credible evidence of impact or sustainability.

Impact and utilization of evaluation:

•  Training on M&E is limited across USG agencies.  Program planning, monitoring and evaluation are not included in standard training for State Department Foreign Service Officers or senior managers, a particular challenge when FSOs and Ambassadors become the in-country decision makers on foreign assistance programs.

•  Evaluations do not contribute to agency-wide or interagency knowledge. If “learning” takes place, it is largely confined to the immediate operational unit that commissioned the evaluation rather than contributing to a larger body of knowledge on effective policies and programs.

•  Two-thirds of external evaluators polled agreed or strongly agreed that USAID cares more about success stories than careful evaluation.

•  Bureaucratic incentives do not support rigorous evaluation or the use of findings, with the possible exception of MCC, which supports evaluation but does not yet have a track record on the use of findings.

•  Evaluation reports are often too long or too technical to be accessible to policymakers and agency leaders with limited time.

Create a Center for Monitoring and Evaluation

A more robust M&E and learning culture for foreign assistance results will not emerge without the commitment of USG interagency leadership and authoritative guidance. Whether or not calls to consolidate the agencies and offices disbursing foreign assistance are heeded, the most efficient and effective way to accomplish this learning transformation would be to establish an independent Center for Monitoring and Evaluation (CME), reporting to the Office of the Secretary of State or the Deputy Secretary of State for Management and Resources. The Center would be placed within the Secretary’s or Deputy Secretary’s Office to ensure that M&E efforts become a central feature of foreign assistance decision-making…

See the remaining text in the Policy_Brief


