New journal on Systematic Reviews

From the BioMed Central Blog, thanks to a tweet by @bengoldacre

“Systematic Reviews, a new journal in the BioMed Central portfolio, launches today. The journal, headed by Editors-in-Chief David Moher, Lesley Stewart and Paul Shekelle, aims to encompass all aspects of the design, conduct and reporting of systematic reviews.

As the first open access journal to focus on systematic reviews and associated literature, Systematic Reviews aims to publish high quality systematic review products including systematic review protocols, systematic reviews related to a very broad definition of health, rapid reviews, updates of already completed systematic reviews, and methods research related to the science of systematic reviews, such as decision modeling. The journal also aims to ensure that the results of all well-conducted systematic reviews are published, regardless of their outcome.

The journal supports innovation and transparency in the reporting of systematic reviews. In a thematic series published upon launch, six articles explore the importance of registering systematic reviews and review protocols, including a commentary from the Chief Medical Officer for the UK, Prof Dame Sally Davies, who writes on the value of registering reviews from a funder’s perspective.

With the launch of Systematic Reviews, the Editors-in-Chief note that ‘The explosion in the number of systematic reviews being published across a range of disciplines demonstrates widespread interest in a broad range of systematic review activities and products. Beyond the Cochrane Library there is no journal singularly devoted to all things systematic review. We hope Systematic Reviews will become that journal and that its open access status will attract authors and readers globally.’

The journal will provide an important addition to medical research, in promoting systematic reviews as an important means of analysing and assessing trial outcomes, and developing responses to failing approaches in healthcare treatments and research. The journal has already garnered support from the medical community, with Dr Ben Goldacre, author, journalist and research fellow at the London School of Hygiene and Tropical Medicine stating: ‘Medicine cannot depend on meandering essays, presenting an incomplete or inconsistent view of the scientific literature: to understand whether treatments work or not, we need complete summaries – collating all the evidence – using clearly explained methods to track it down. Systematic reviews are the key, and yet this tool is surprisingly new in medical science. At a time of rising concern about biased under-reporting of negative results, it’s good to see a new open access journal devoted to improving the science of systematic reviews.’

As the Editors-in-Chief note in their launch editorial, ‘individual studies are seldom sufficient to drive change. They are often too small to reach reliable conclusions, and for fair evaluation, it is important to look at the totality (or at least an unbiased sample of the totality) of evidence in favour of, against, or neutral to the healthcare intervention under consideration.’ Systematic Reviews aims to provide the platform for such evaluation, and in doing so, contribute to the wider development and improvement of healthcare.”

RD Comment: These developments are relevant to aid agencies that are commissioning synthesis-type studies of large fields of work, such as governance and accountability or livelihoods (both done by DFID recently), and to the evaluators considering this work. And… it’s great to see that this is an Open Access journal. Well done.

The initial issue is worth scanning, especially the Editorial on why prospective registration of systematic reviews makes sense. See also: Evidence summaries: the evolution of a rapid review approach.

There is more material on the use of systematic reviews for development aid interventions on the 3ie website.

On evaluation quality standards: A List


The beginnings of a list. Please suggest others by using the Comment facility below.

Normative statements:

Standards for specific methods (and fields):

Meta-evaluations:

  • “Are Sida Evaluations Good Enough? An Assessment of 34 Evaluation Reports” by Kim Forss, Evert Vedung, Stein Erik Kruse, Agnes Mwaiselage and Anna Nilsdotter, Sida Studies in Evaluation 2008:1. See especially Section 6: Conclusion, 6.1 Revisiting the Quality Questions, 6.2 Why are there Quality Problems with Evaluations?, 6.3 How can the Quality of Evaluations be Improved?, 6.4 Direction of Future Studies. RD Comment: This study has annexes with empirical data on the quality attributes of 34 evaluation reports published in the Sida Evaluations series between 2003 and 2005. It BEGS a follow-up study to see if/how these various quality ratings correlate in any way with the subsequent use of the evaluation reports. Could Sida be persuaded to do something like this?

Ethics focused:

  • Australasian Evaluation Society

Journal articles:

Checklists:

  • Evaluation checklists prepared by Western Michigan University, covering Evaluation Management, Evaluation Models, Evaluation Values and Criteria, Metaevaluation, Evaluation Capacity Building / Institutionalization, and Checklist Creation

Other lists:

Asian Development Bank: 2011 Annual Evaluation Review

Available at the ADB website

Background

This report summarizes the key findings and lessons of evaluation studies carried out in 2010, and provides trends in the success rates of ADB operations. It also reviews the recommendations from evaluation reports and the status of actions taken by ADB Management in response to these recommendations. The report also reviews the work program accomplishments of the Independent Evaluation Department (IED) in 2010.

Key Findings and Issues

Declining performance in terms of success rates. According to past data, success rates have not reached 80%, ADB’s corporate target for 2012. Performance began to decline in approval year 2000 after peaking at over 70%. Although the declining trend is based on a limited sample size, the project and program performance report (PPR) system indicates that it will continue unless significant corrective measures are taken. Based on the new PPR system, about 25% of ongoing projects are facing implementation challenges and are at risk of not meeting their objectives (which confirms IED’s previous findings that portfolio performance ratings were overrated in PPRs), an important issue that needs to be addressed.

NZAID 2008 Evaluations and Reviews: Annual Report on Quality, 2009

Prepared by Miranda Cahn, Evaluation Advisor, Strategy, Advisory and Evaluation Group, NZAID, Wellington, August 2009. Available online

Executive Summary

Introduction

The New Zealand Agency for International Development (NZAID) is committed to improving evaluative activity, including evaluations and reviews. Since 2005 NZAID has undertaken annual desk studies of the evaluations and reviews completed by NZAID during the previous calendar year. This 2009 study assesses the quality of 29 NZAID commissioned evaluations and reviews that were submitted to the NZAID Evaluation and Review Committee (ERC) during 2008, and their associated Terms of Reference (TOR). The study identifies areas where quality is of a high standard, and areas where improvement is needed. Recommendations are made on how improvements to NZAID commissioned evaluations and reviews could be facilitated.

The objectives of the study are to:

• assess the quality of the TOR with reference to the NZAID Guidelines on Developing TOR for Reviews and Evaluations

• assess the quality of the NZAID 2008 evaluations and reviews with reference to the NZAID Evaluation Policy, relevant NZAID Guidelines, and the OECD Development Assistance Committee (DAC) Evaluation Quality Standards

• identify, describe and discuss key quality aspects of the TOR and evaluation and review reports that were of a high standard and those that should be improved in future.

Quality Review consultation ends: 23rd October 2009

The Quality of DFID’s Evaluation Reports and Assurance Systems

Request for comments on reports (by Roger Riddell, Burt Perrin and Richard Manning) commissioned by IACDI.

As part of its role in monitoring evaluation quality in DFID, IACDI commissioned a review to assess the quality of DFID’s evaluation reports and its assurance systems. The review is now complete and available here on the website in three parts. It was undertaken by experts Burt Perrin and Richard Manning and managed by Roger Riddell, a member of IACDI, who has also produced an overview report drawing on and summarising the other two.

The review highlights 11 key recommendations for DFID to improve the quality of evaluation work and to strengthen DFID’s approach to using evaluation work for lesson learning (see pages iii–vii of Roger Riddell’s report). As part of the review, DFID’s evaluation systems were compared with those of other donor agencies and found to be broadly on a par with those of comparator bilateral agencies.

IACDI now invites and welcomes comments on the reports – particularly on the overview report by Roger Riddell. IACDI will be discussing the reports at its next meeting on 4th November, and would particularly welcome comments from external stakeholders before then, so that they can be taken into account at its meeting.

Please send comments to mail@iacdi.independent.gov.uk by 23 October 2009.

Postscript (17th November 2009)

On 4th November there was a meeting in London where DFID invited people to comment on the recommendations made by Riddell, Perrin, and Manning. A summary of the issues raised in that meeting is available here. My own feedback, also provided in written form after the meeting, is available here.

BEYOND SUCCESS STORIES: MONITORING & EVALUATION FOR FOREIGN ASSISTANCE RESULTS

EVALUATOR VIEWS OF CURRENT PRACTICE AND RECOMMENDATIONS FOR CHANGE.

This paper was produced independently by Richard Blue, Cynthia Clapp-Wincek, and Holly Benner, May 2009.

See Full-Report or Policy_Brief (draft for comment)

“Findings derive from a literature review, interviews with senior USG officials and, primarily, interviews with and survey responses from ‘external evaluators’: individuals who conduct evaluations of U.S. foreign assistance programs as part of consulting firms or non-governmental organizations (NGOs), or as individual consultants. External evaluators were chosen because: 1) the authors are external evaluators themselves with prior USAID and State experience; 2) in recent years, the majority of evaluations completed of USG foreign assistance programs have been contracted out to external evaluation experts; and 3) evaluators are hired to investigate whether foreign assistance efforts worked, or didn’t work, and to ask why results were, or were not, achieved. This gives them a unique perspective.”

Key Findings – Monitoring

The role of monitoring is to determine the extent to which the expected outputs or outcomes of a program or activity are being achieved. When done well, monitoring can be invaluable to project implementers and managers in making mid-course corrections to maximize project impact. While monitoring requirements and practice vary across U.S. agencies and departments, the following broad themes emerged from our research:

•  The role of monitoring in the USG foreign assistance community has changed dramatically in the last 15 years.  The role of USG staff has shifted to primarily monitoring contractors and grantees.  Because this distances USG staff from implementation of programs, it has resulted in the loss of dialogue, debate and learning within agencies.

•  The myriad of foreign assistance objectives requires a multiplicity of indicators. This has led to onerous reporting requirements that try to cover all bases.

•  There is an over-reliance on quantitative indicators and outputs of deliverables over which the project implementers have control (such as the number of people trained), rather than on qualitative indicators and outcomes: expected changes in attitudes, knowledge, and behaviors.

•  There is no standard guidance for monitoring foreign assistance programs—the requirements at MCC are very different from those at DOS and USAID. Some implementing agencies and offices have no guidance or standard procedures.

Key Findings – Evaluation

There is also great diversity in the evaluation policies and practices across USG agencies administering foreign assistance. MCC has designed a very robust impact evaluation system for its country compacts, but these evaluations have yet to be completed. The Education and Cultural Affairs Bureau at the State Department has well-respected evaluation efforts, but there is limited evaluation work in other bureaus and offices in the Department. USAID has a long and rich evaluation history, but neglect and lack of investment, as well as recent foreign assistance reform efforts, have stymied those functions. The following themes emerged in our study:

The decision to evaluate: when, why and funding:

•  The requirements on the decision to evaluate vary across U.S. agencies. There is no policy or systematic guidance for what should be evaluated and why. More than three quarters of survey respondents emphasized the need to make evaluation a requirement and a routine part of the foreign assistance programming cycle.

•  Evaluators rarely have the benefit of good baseline data for U.S. foreign assistance projects, which makes it difficult to conduct rigorous outcome and impact evaluations that can attribute changes to the project’s investments.

•  While agencies require monitoring and evaluation plans as part of grantee contracts, insufficient funds are set aside for M&E, as partners are under pressure not to spend limited money on “non-programmatic” costs.

Executing an evaluation:

•  Scopes of work for evaluations often reflect a mismatch between the evaluation questions that must be answered and the methodology, budget and timeframe given for an evaluation.

•  Because of limited budget and time, the majority of respondents felt that evaluations were not sufficiently rigorous to provide credible evidence for impact or sustainability.

Impact and utilization of evaluation:

•  Training on M&E is limited across USG agencies.  Program planning, monitoring and evaluation are not included in standard training for State Department Foreign Service Officers or senior managers, a particular challenge when FSOs and Ambassadors become the in-country decision makers on foreign assistance programs.

•  Evaluations do not contribute to agency-wide or interagency knowledge. If “learning” takes place, it is largely confined to the immediate operational unit that commissioned the evaluation rather than contributing to a larger body of knowledge on effective policies and programs.

•  Two thirds of external evaluators polled agreed or strongly agreed that USAID cares more about success stories than careful evaluation.

•  Bureaucratic incentives do not support rigorous evaluation or use of findings, with the possible exception of MCC, which supports evaluation but does not yet have a track record on use of findings.

•  Evaluation reports are often too long or technical to be accessible to policymakers and agency leaders with limited time.

Create a Center for Monitoring and Evaluation

A more robust M&E and learning culture for foreign assistance results will not occur without the commitment of USG interagency leadership and authoritative guidance.  Whether or not calls to consolidate agencies and offices disbursing foreign assistance are heeded, the most efficient and effective way to accomplish this learning transformation would be to establish an independent Center for Monitoring and Evaluation (CME), reporting to the Office of the Secretary of State or the Deputy Secretary of State for Management and Resources.  The Center would be placed within the Secretary or Deputy Secretary’s Office to ensure M&E efforts become a central feature of foreign assistance decision-making…”

See the remaining text in the Policy_Brief



Metaevaluation revisited, by Michael Scriven

An Editorial in Journal of MultiDisciplinary Evaluation, Volume 5, Number 11, January 2009

In this short and readable paper Michael Scriven addresses “three categories of issues that arise about meta-evaluation: (i) exactly what is it; (ii) how is it justified; (iii) when and how should it be used? In the following, I say something about all three—definition, justification, and application.” He then makes seven main points, each of which he elaborates on in some detail:

  1. Meta-evaluation is the consultant’s version of peer review.
  2. Meta-evaluation is the proof that evaluators believe what they say.
  3. In meta-evaluation, as in all evaluation, check the pulse before trimming the nails.
  4. A partial meta-evaluation is better than none.
  5. Make the most of meta-evaluation.
  6. Any systematic approach to evaluation—in other words, almost any kind of professional evaluation—automatically provides a systematic basis for meta-evaluation.
  7. Fundamentally, meta-evaluation, like evaluation, is simply an extension of common sense—and that’s the first defense to use against the suggestion that it’s some kind of fancy academic embellishment.

MAPPING OF MONITORING AND EVALUATION PRACTICES AMONG DANISH NGOS

Final Report May 2008, Hanne Lund Madsen, HLM Consult

As a first step in the follow-up to the Assessment of the Administration of the Danish NGO Support, the Evaluation Department of the Ministry, in cooperation with the Quality Assurance Department and the NGO Department, wished to map the existing evaluation and monitoring practices among the Danish NGOs, with a view to establishing the basis for a later assessment of how Danida can systematize the use of results, measurements and evaluations within the NGO sector.

The mapping has entailed the consideration of M&E documentation from 35 NGOs, bilateral consultation with 17 NGOs, interviews with other stakeholders within the Ministry, the Danish Resource base, Projektrådgivningen and a mini-seminar with Thematic Forum.
