New journal on Systematic Reviews

From the BioMed Central Blog, thanks to a tweet by @bengoldacre

“Systematic Reviews, a new journal in the BioMed Central portfolio, launches today. The journal, headed by Editors-in-Chief David Moher, Lesley Stewart and Paul Shekelle, aims to encompass all aspects of the design, conduct and reporting of systematic reviews.

As the first open access journal to focus on systematic reviews and associated literature, Systematic Reviews aims to publish high quality systematic review products including systematic review protocols, systematic reviews related to a very broad definition of health, rapid reviews, updates of already completed systematic reviews, and methods research related to the science of systematic reviews, such as decision modeling. The journal also aims to ensure that the results of all well-conducted systematic reviews are published, regardless of their outcome.

The journal supports innovation and transparency in the reporting of systematic reviews. In a thematic series published upon launch, six articles explore the importance of registering systematic reviews and review protocols, including a commentary from the Chief Medical Officer for the UK, Prof Dame Sally Davies, who writes on the value of registering reviews from a funder’s perspective.

With the launch of Systematic Reviews, the Editors-in-Chief note that ‘The explosion in the number of systematic reviews being published across a range of disciplines demonstrates widespread interest in a broad range of systematic review activities and products. Beyond the Cochrane Library there is no journal singularly devoted to all things systematic review. We hope Systematic Reviews will become that journal and that its open access status will attract authors and readers globally.’

The journal will provide an important addition to medical research, in promoting systematic reviews as an important means of analysing and assessing trial outcomes, and developing responses to failing approaches in healthcare treatments and research. The journal has already garnered support from the medical community, with Dr Ben Goldacre, author, journalist and research fellow at the London School of Hygiene and Tropical Medicine stating: ‘Medicine cannot depend on meandering essays, presenting an incomplete or inconsistent view of the scientific literature: to understand whether treatments work or not, we need complete summaries – collating all the evidence – using clearly explained methods to track it down. Systematic reviews are the key, and yet this tool is surprisingly new in medical science. At a time of rising concern about biased under-reporting of negative results, it’s good to see a new open access journal devoted to improving the science of systematic reviews.’

As the Editors-in-Chief note in their launch editorial, ‘individual studies are seldom sufficient to drive change. They are often too small to reach reliable conclusions, and for fair evaluation, it is important to look at the totality (or at least an unbiased sample of the totality) of evidence in favour of, against, or neutral to the healthcare intervention under consideration.’ Systematic Reviews aims to provide the platform for such evaluation, and in doing so, contribute to the wider development and improvement of healthcare.”

RD Comment: These developments are relevant to aid agencies commissioning synthesis-type studies of large fields of work, such as governance and accountability or livelihoods (both done by DFID recently), and to the evaluators considering this work. And… it’s great to see that this is an Open Access journal. Well done.

The initial issue is worth scanning, especially the editorial, Why prospective registration of systematic reviews makes sense. See also: Evidence summaries: the evolution of a rapid review approach.

There is more material on the use of systematic reviews for development aid interventions on the 3ie website.

Randomised controlled trial testing the effects of transparency on health care in Uganda

(from the great AidInfo website)

“At aidinfo we conduct research and liaise with aid donors and recipients to build up a case for aid transparency. We want to show that improving and increasing the amount that donors report on their aid contributions can help communities to track aid spending. In turn, donors and governments will be more accountable for their aid spending. It is expected that in this way aid will reach more people on the ground, helping to contribute more in the fight against poverty.

This is all well and good, but it is difficult to prove. Svensson’s work, then, is of great importance to us here.

This study by Reinikka and Svensson (2005) found that in 1995 only 20 percent of a primary education grant program to rural Uganda actually reached its intended target. This figure rose by a striking 60 percentage points by 2001, when information was published detailing where this money was going; a full 80 percent of funds reached their intended destination, greatly improving education services in the area.

Björkman and Svensson (2009) followed up on this study with a compelling randomised controlled trial testing the effects of transparency on health care in Uganda. The experiment randomly assigned community health clinics to receive published ‘report cards’ and NGO-organised public meetings on the quality of the clinics’ health care.

The results of this transparency ‘treatment’ rivalled the effects of the best health interventions involving expensive new medicines, equipment, and procedures. Waiting time for care decreased, absenteeism among doctors and nurses plummeted, clinics got cleaner, fewer drugs were stolen, 40-50 percent more children received dietary supplements and vaccines, health services got used more, and, powerfully, 33 percent fewer children died under the age of five. This amounted to 550 saved lives in a small area of Uganda encompassing merely 55,000 households.

This is strong evidence that access to information about services empowers citizens to get better services and saves lives.”
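A back-of-the-envelope check of the figures quoted above (a hypothetical sketch; the variable names are illustrative, not from either study):

```python
# Reinikka and Svensson (2005): share of the education grant reaching schools
share_1995 = 20  # percent reaching schools before disbursement data was published
share_2001 = 80  # percent reaching schools afterwards
# An increase of 60 percentage points (a 60-point rise, not a 60 percent rise)
print(share_2001 - share_1995)  # 60

# Björkman and Svensson (2009): scale of the mortality effect
lives_saved = 550
households = 55_000
# Roughly one child's life saved per hundred households in the study area
print(lives_saved * 100 / households)  # 1.0
```

Trivial arithmetic, but it makes clear why reporting the change in percentage points, rather than as a "60 percent" rise, matters when describing the education grant result.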

3ie proposes a Commitment to Evaluation Indicator (c2e)

International Initiative for Impact Evaluation (3ie) – Terms of Reference for a Research Consultancy – White paper for the Commitment to Evaluation Indicator

“Background: Experience to date shows that the use of evidence by donors and governments when designing and adopting development programmes remains sporadic. There are many examples where a programme was shown to have no impact but was expanded, as well as examples of programmes with positive impact being terminated. To promote better use of evaluation evidence in policy making and programme design, 3ie is launching a Commitment to Evaluation (c2e) indicator. The indicator will provide a measurement of government and donor agency use of evaluation evidence allowing for recognition and reward for progress and good practice. The indicator will be developed and piloted in 2012 for donor agencies with the intent to recognize donors that make systematic use of evidence and thus motivate others to do the same.

3ie’s initiative follows the example of other successful efforts to use awards or indexes to focus the attention of policymakers. Indexes such as the UN Development Programme’s Human Development index, Transparency International’s Corruption Perception index, and the Centre for Global Development’s Quality of ODA (QuODA) index have raised awareness on key issues and influenced practice of governments and development agencies. The Mexican National Council for the Evaluation of Social Development Policy (CONEVAL) annual award for good practices in social evaluation has strengthened political buy-in and commitment to evaluation in Mexico. In developing this c2e indicator, 3ie will draw from the lessons learned by similar initiatives on how best to motivate and award evaluation practices and build and run an effective cross-agency and cross-country indicator. More detailed background information on the rationale and theory of change behind the project is available in the discussion note in the annex.” See ToRs for rest of the text including annex.

3ie and the Funding of Impact Evaluations

A DISCUSSION PAPER FOR 3IE’S MEMBERS, by Rick Davies, July 2011. Commissioned by the Office of Development Effectiveness, AusAID. Available as pdf.

The purpose of this discussion paper is to inform AusAID’s and other 3ie members’ engagement with 3ie (the International Initiative for Impact Evaluation). It precedes the forthcoming evaluation of 3ie, and is more limited in scope. It is expected to be complementary and useful to the larger Department for International Development (DFID) study now underway, Developing a broader range of rigorous designs and methods for impact evaluations, as the author of this report is also a member of that study team.

AusAID is a member of 3ie and provides core funding to 3ie to contribute to the global public good of policy-relevant evidence on what works in development. Direct benefit to AusAID is not the purpose of the membership. However, it is important to AusAID that 3ie’s work is relevant to AusAID’s partners, particularly partners with low income and/or in fragile countries. AusAID’s Office of Development Effectiveness (ODE) manages AusAID’s membership of 3ie and has commissioned this discussion paper.

The focus of this discussion paper is on 3ie’s methodological approach, used in both the funded impact evaluations and the systematic reviews, and how this has changed over time. Continue reading “3ie and the Funding of Impact Evaluations”

RealWorld Evaluation Working Under Budget, Time, Data, and Political Constraints

Second Edition, by Michael Bamberger, Jim Rugh and Linda Mabry. Sage Publications, Nov 2011.

This book addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing. The book is organized around a seven-step model developed by the authors, which has been tested and refined in workshops and in practice. Vignettes and case studies—representing evaluations from a variety of geographic regions and sectors—demonstrate adaptive possibilities for small projects with budgets of a few thousand dollars to large-scale, long-term evaluations of complex programs. The text incorporates quantitative, qualitative, and mixed-method designs, and this Second Edition reflects important developments in the field since the publication of the First Edition.

See also the associated website: http://www.realworldevaluation.org/ Bamberger and Rugh have presented many workshops on RealWorld Evaluation in many countries. Various versions and translations of the PowerPoint presentations and other materials are accessible on that website. Continue reading “RealWorld Evaluation Working Under Budget, Time, Data, and Political Constraints”

First reports published by UK Independent Commission for Aid Impact

…on 22nd November, 2011.

Two cover general areas of the programme:

  • ICAI’s Approach to Effectiveness and Value for Money; and
  • The Department for International Development’s (DFID) Approach to Anti-Corruption.

Two cover specific programmes in DFID’s country offices:

  • DFID’s Climate Change Programme in Bangladesh; and
  • DFID’s Support to the Health Sector in Zimbabwe.

See the ICAI website for further details

RD Comment: re the “ICAI’s Approach to Effectiveness and Value for Money” paper, see my Comments here. In summary:

  • This paper is confusingly titled. It is really about ICAI’s overall approach to evaluation, and covers more than “value for money and effectiveness”.
  • The 4Es analysis of the concepts of “value for money and effectiveness” has potential, but seems to be taken no further thereafter.
  • The proposed workings of the traffic light system are opaque. It is not clear how these judgements will be built up from subsidiary judgements, nor what they mean in the simplest terms of success and failure. Nor is there a “None of the above: there is not sufficient information to make a judgement” option.

The Impact of Economics Blogs

David McKenzie (World Bank, BREAD, CEPR and IZA) and Berk Özler (World Bank). Policy Research Working Paper 5783. August 2011. Available as pdf. See also the authors’ blog about this paper.

Introduction: Practically nonexistent a decade ago, blogs by economic scholars have become commonplace. Economics blogs, such as Freakonomics, Marginal Revolution, Paul Krugman and Greg Mankiw, have built large followings – whether measured by subscriptions in Google Reader or by average daily page views (1). Cowen (2008) argues that blogs are the main way that the general public consumes economics in a given day and guesstimates that “…about 400,000 people are reading economics blogs and digesting them” on a daily basis.

These blogs not only give their creators an outlet to disseminate their ideas and work immediately in a format that is more accessible, but also enable instant feedback, are easy to share on the open web, and allow the bloggers a personal style rather than the inaccessible format of academic journals (Glenn, 2003; Dunleavy and Gilson 2011).

Our motivation in examining the impact of economics blogs stems from two observations about blogs and questions that arise from these. First, it seems fair to state that “…informing is the core business of blogging.” (McKenna and Pole 2008, p. 102) This leads to the question of whether blogs improve the dissemination of research findings and whether their readers are indeed more informed (2). On the one hand, coupling the large readership of blogs with the argument of Cowen (2008) that the best ones are written at a level far higher than that of any major newspapers offers the promise that economics blogs may have sizeable effects on the dissemination of economic research and on the knowledge and attitudes of their readers.
Continue reading “The Impact of Economics Blogs”

THE EVALUATION OF ORGANIZATION PERFORMANCE: NORMATIVE PRESCRIPTIONS VS. EMPIRICAL RESULTS

Vic Murray, University of Victoria, 2004. Available as pdf

Abstract: This paper reviews the underlying theoretical bases for the evaluation of organizational performance. It then examines representative samples of empirical research into actual evaluation practices in a variety of nonprofits in Canada, the U.S. and Britain. Some of the most popular tools and systems for evaluation currently recommended by consultants and others are then reviewed. Looking at this prescriptive literature, it is shown that, by and large, it takes little account of the findings of empirical research and, as a result, its approaches may often prove ineffective. An alternative that attempts to integrate the research findings with practical tools of value to practitioners is then suggested.

Introduction

It is a perplexing, but not uncommon, phenomenon in the world of nonprofit organization studies how little connection there is between the work of those who offer advice on how organizations in this sector might become more effective and that of those who carry out formally designed empirical research into how these organizations actually behave. Nowhere is this gap between “how to” and “what is” more apparent than in the field of performance assessment and evaluation.

Commons Select Committee to Scrutinise DFID’s Annual Report & Resource Accounts

13 September 2011

“The International Development Committee is to conduct an inquiry into the Department for International Development’s Annual Report and Accounts 2010-11 and the Department’s Business Plan 2011-15.

Invitation to submit Written Evidence

The Committee will be considering

  • Changes since the election to DFID’s role, policies, priorities and procedures;
  • The implications of changes for management styles, structures, staffing competences and capacity to deliver; and
  • The overall impact on the efficiency, effectiveness and cost-effectiveness of DFID’s activities.

The Committee invites short written submissions from interested organisations and individuals, especially on the following areas: the implementation of the structural reform plan; the bilateral, multilateral and humanitarian reviews; DFID administration costs; expenditure on, and dissemination of, research; and the use of technical assistance and consultants.

The deadline for submitting written evidence is Monday 10 October 2011. A guide for written submissions to Select Committees may be found on the parliamentary website at: http://www.parliament.uk/commons/selcom/witguide.htm

FURTHER INFORMATION:
Committee Membership is as follows: Malcolm Bruce MP, Chair (Lib Dem, Gordon), Hugh Bayley MP (Lab, City of York), Richard Burden MP (Lab, Birmingham, Northfield), Sam Gyimah MP (Con, East Surrey), Richard Harrington MP (Con, Watford), Pauline Latham MP (Con, Mid Derbyshire), Jeremy Lefroy MP (Con, Stafford), Michael McCann MP (Lab, East Kilbride, Strathaven and Lesmahagow), Alison McGovern MP (Lab, Wirral South), Anas Sarwar MP (Lab, Glasgow Central), Chris White MP (Con, Warwick and Leamington).
Specific Committee Information: indcom@parliament.uk / 020 7219 1223/ 020 7219 1221
Media Information: daviesnick@parliament.uk / 020 7219 3297 Committee Website: www.parliament.uk/indcom

Measuring Impact on the Immeasurable? Methodological Challenges in Evaluating Democracy and Governance Aid

by Jennifer Gauck, University of Kent, Canterbury – Department of Politics, 2011. APSA 2011 Annual Meeting Paper. Available as pdf

Abstract:

“Recent debates over the quality, quantity and purpose of development aid have led to a renewed emphasis on whether, and in what circumstances, aid is effective in achieving development outcomes. A central component of determining aid effectiveness is the conduct of impact evaluations, which assess the changes that can be attributed to a particular project or program. While many impact evaluations use a mixed-methods design, there is a perception that randomized control trials (RCTs) are promoted as the “gold standard” in impact evaluation. This is because the randomization process minimizes selection bias, allowing for the key causal variables leading to the outcome to be more clearly identified. However, many development interventions cannot be evaluated via RCTs because the nature of the intervention does not allow for randomization with a control group or groups.”

“This paper will analyze the methodological challenges posed by aid projects whose impacts cannot be evaluated using randomized control trials, such as certain democracy and governance (D&G) interventions. It will begin with a discussion of the merits and drawbacks of cross-sectoral methods and techniques commonly used to assess impact across a variety of aid interventions, including RCTs, and how these methods typically combine in an evaluation to tell a persuasive causal story. This paper will then survey the methods different aid donors are using to evaluate the impact of projects that cannot be randomized, such as governance-strengthening programs aimed at a centralized public-sector institution. Case studies will be drawn from examples in Peru and Indonesia, among others. This paper will conclude by analyzing how current methodological emphases in political science can be applied to impact evaluation processes generally, and to D&G evaluations specifically.”

RD Comment: See also the 3ie webpage on Useful resources for impact evaluations in governance, which includes a list of relevant books, reports, papers, impact evaluations, systematic reviews, survey modules/tools and websites.
