Evaluating the Evaluators: Some Lessons from a Recent World Bank Self-Evaluation

February 21, 2012 blog posting by Johannes Linn, at Brookings
Found via @WorldBank_IEG tweet

“The World Bank’s Independent Evaluation Group (IEG) recently published a self-evaluation of its activities. Besides representing current thinking among evaluation experts at the World Bank, it also more broadly reflects some of the strengths and gaps in the approaches that evaluators use to assess and learn from the performance of the international institutions with which they work…. Johannes Linn served as an external peer reviewer of the self-evaluation and provides a bird’s-eye view on the lessons learned.”

Key lessons as seen by Linn

  • An evaluation of evaluations should focus not only on process, but also on the substantive issues that the institution is grappling with.
  • An evaluation of the effectiveness of evaluations should include a professional assessment of the quality of evaluation products.
  • An evaluation of evaluations should assess:
    o How effectively impact evaluations are used;
    o How scaling up of successful interventions is treated;
    o How the experience of other comparable institutions is utilized;
    o Whether and how the internal policies, management practices and incentives of the institution are effectively assessed;
    o Whether and how the governance of the institution is evaluated; and
    o Whether and how internal coordination, cooperation and synergy among units within the organization are assessed.

Read the complete posting, with arguments behind each of the above points, here.

“Unleashing the potential of AusAID’s performance data”

A posting on the Development Policy Blog by Stephen Howes, 15 February 2012.

This blog post examines the latest annual report from AusAID’s Office of Development Effectiveness (ODE), released just before Christmas 2010 and published in two parts: one providing an international comparative perspective (and summarized in this blog), the other drawing on and assessing internal performance reporting. In this post the author continues his analysis of the “internal assessment” report.

He points out that the report’s data show poor performance to be a much more significant problem than outright fraud. He also examines the results of ODE’s spot checks on the quality of the self-assessment ratings. There is much else of interest in the blog post.

Of special interest are the concluding paragraphs: “This systematic collation of project self-ratings and the regular use of spot checks is best practice for any aid agency, and something AusAID should take pride in. The problem is that, as illustrated above, the reporting and analysis of these two rich sources of data is at the current time hardly even scratching the surface of their potential.

One way forward would be for ODE or some other part of AusAID to undertake and publish a more comprehensive report and analysis of this data. That would be a good idea, both to improve aid effectiveness and to enhance accountability.

But I have another suggestion. If the data is made public, we can all do our own analysis. This would tremendously enhance the debate in Australia on aid effectiveness, and take the attention away from red-herrings such as fraud towards real challenges such as  value-for-money.

AusAID’s newly-released Transparency Charter [pdf] commits the organization to publishing “detailed information on AusAID’s work” including “the results of Australian aid activities and our evaluations and research.” The annual release of both the self-ratings and the spot-checks would be a simple step, but one which would go a long way to fulfilling the Charter’s commitments.”

PS: Readers may be interested in similar data made available by DFID in recent years. See the “Do we need a minimum level of failure” blog posting.


Smart Tools: For evaluating information projects, products and services

Produced by CTA, KIT and IICD. 2nd edition (2009)

PDF version available online

“About the Toolkit

The Smart Toolkit focuses on the evaluation of information projects, products and services from a learning perspective. It looks at evaluation within the context of the overall project cycle, from project planning and implementation to monitoring, evaluation and impact assessment, and then at the evaluation process itself, the tools involved and examples of their application. The theme running throughout the toolkit is:

Participatory evaluation for learning and impact.”
