Meta-evaluation of USAID’s Evaluations: 2009-2012

Author(s): Molly Hageboeck, Micah Frumkin, and Stephanie Monschein
Date Published: November 25, 2013

Report available as a PDF (a large file). See also the video and PowerPoint presentations (worth reading!)

Context and Purpose

This evaluation of evaluations, or meta-evaluation, was undertaken to assess the quality of USAID’s evaluation reports. The study builds on USAID’s practice of periodically examining evaluation quality to identify opportunities for improvement. It covers USAID evaluations completed between January 2009 and December 2012. During this four-year period, USAID launched an ambitious effort called USAID Forward, which aims to integrate all aspects of the Agency’s programming approach, including program and project evaluations, into a modern, evidence-based system for realizing development results. A key element of this initiative is USAID’s Evaluation Policy, released in January 2011.

Meta-Evaluation Questions

The meta-evaluation on which this volume reports systematically examined 340 randomly selected evaluations and gathered qualitative data from USAID staff and evaluators to address three questions:

1. To what degree have quality aspects of USAID’s evaluation reports, and underlying practices, changed over time?

2. At this point in time, on which evaluation quality aspects or factors do USAID’s evaluation reports excel and where are they falling short?

3. What can be determined about the overall quality of USAID evaluation reports and where do the greatest opportunities for improvement lie?

Meta-Evaluation Methodology and Study Limitations

The framework for this study recognizes that undertaking an evaluation involves a partnership between the client for an evaluation (USAID) and the evaluation team. Each party plays an important role in ensuring overall quality. Information on basic characteristics and quality aspects of 340 randomly selected USAID evaluation reports was a primary source for this study. Quality aspects of these evaluations were assessed using a 37-element checklist. Conclusions reached by the meta-evaluation also drew on the results of four small-group interviews with staff from USAID’s technical and regional bureaus in Washington, interviews with 15 organizations that carry out evaluations for USAID, and a survey of 25 team leaders of recent USAID evaluations. MSI used chi-square and t-tests to analyze rating data; qualitative data were analyzed using content analysis. No specific study limitation unduly hampered MSI’s ability to obtain or analyze the data needed to address the three meta-evaluation questions. Nonetheless, the study would have benefited from reliable data on the cost and duration of evaluations, survey or conference-call interviews with USAID Mission staff, and the consistent inclusion of evaluation team leaders’ names in evaluation reports.
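To make the analytical approach concrete, here is a minimal sketch of how chi-square and t-tests might be applied to checklist rating data of this kind, comparing reports completed before and after the 2011 Evaluation Policy. The group sizes, pass rates, and scores below are invented for illustration and are not MSI’s actual data.

```python
# Illustrative sketch only: the data are simulated, not drawn from the meta-evaluation dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical ratings for one binary checklist element (1 = element present),
# split by whether the report was completed before or after the January 2011 Evaluation Policy.
pre_policy = rng.binomial(1, 0.55, size=160)   # e.g., reports from 2009-2010
post_policy = rng.binomial(1, 0.70, size=180)  # e.g., reports from 2011-2012

# Chi-square test of independence: did the pass rate on this element change over time?
table = np.array([
    [pre_policy.sum(), len(pre_policy) - pre_policy.sum()],
    [post_policy.sum(), len(post_policy) - post_policy.sum()],
])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# t-test on hypothetical overall quality scores (e.g., 0-10 points from ten checklist items).
pre_scores = rng.normal(5.6, 1.8, size=160)
post_scores = rng.normal(6.4, 1.8, size=180)
t, p_t = stats.ttest_ind(pre_scores, post_scores, equal_var=False)
print(f"t = {t:.2f}, p = {p_t:.3f}")
```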

Rick Davies comment: Where is the dataset? 340 evaluations were scored on a 37-point checklist, and ten of the 37 checklist items were used to create an overall “score”. This data could be analysed in N different ways by many more people if it were made readily available, as sketched below. Responses please, from anyone.
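For example, if the dataset were released as a simple table (one row per evaluation, one column per checklist item), re-analysis would be straightforward. The file name, column names, and choice of ten score-bearing items below are hypothetical placeholders, since the underlying data have not been published.

```python
# Hypothetical sketch: file name, column names, and the list of ten scored items are placeholders.
import pandas as pd

df = pd.read_csv("usaid_meta_evaluation_checklist.csv")  # one row per evaluation report

# Suppose the ten score-bearing checklist items are identified like this (illustrative names).
score_items = [f"item_{i:02d}" for i in (3, 5, 8, 11, 14, 19, 22, 27, 31, 36)]

# Overall score = number of the ten items each report satisfies (0-10).
df["overall_score"] = df[score_items].sum(axis=1)

# One of many possible re-analyses: mean score by year of completion.
print(df.groupby("completion_year")["overall_score"].agg(["count", "mean"]))
```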


