World Bank – Raising the Bar on Transparency, Accountability and Openness

Blog posting by Hannah George on Thu, 02/16/2012 – 18:01. Found via @TimShorten.

“The World Bank has taken landmark steps to make information accessible to the public and globally promote transparency and accountability, according to the first annual report on the World Bank’s Access to Information (AI) Policy.” [20/02/2012 – link is not working – here is a link to a related doc, World Bank Policy on Access to Information Progress Report: January through March 2011]

“The World Bank’s Access to Information Policy continues to set the standard for other institutions to strive for,” said Chad Dobson, executive director of the Bank Information Center. Publish What You Fund recently rated the Bank “best performer” in terms of aid transparency out of 58 donors for the second year in a row. Furthermore, the Center for Global Development and Brookings ranked the International Development Association (the World Bank’s Fund for the Poorest) as a top donor in transparency and learning in its 2011 Quality of Official Development Assistance Assessment (QuODA).

“Unleashing the potential of AusAID’s performance data”

A posting on the Development Policy Blog by Stephen Howes, 15 February 2012.

This blog examines the latest annual report of AusAID’s Office of Development Effectiveness (ODE), released just before Christmas 2010. The report was published in two parts: one providing an international comparative perspective (and summarized in this blog), the other drawing on and assessing internal performance reporting. In this blog the author continues his analysis of the “internal assessment” report.

He points out how the report data shows that poor performance is a much more significant problem than outright fraud. He also examines the results of ODE’s spot checks on the quality of the self-assessment ratings. There is much else of interest in the blog.

Of special interest are the concluding paragraphs: “This systematic collation of project self-ratings and the regular use of spot checks is best practice for any aid agency, and something AusAID should take pride in. The problem is that, as illustrated above, the reporting and analysis of these two rich sources of data is at the current time hardly even scratching the surface of their potential.

One way forward would be for ODE or some other part of AusAID to undertake and publish a more comprehensive report and analysis of this data. That would be a good idea, both to improve aid effectiveness and to enhance accountability.

But I have another suggestion. If the data is made public, we can all do our own analysis. This would tremendously enhance the debate in Australia on aid effectiveness, and take the attention away from red herrings such as fraud towards real challenges such as value-for-money.

AusAID’s newly-released Transparency Charter [pdf] commits the organization to publishing “detailed information on AusAID’s work”, including “the results of Australian aid activities and our evaluations and research.” The annual release of both the self-ratings and the spot-checks would be a simple step, but one which would go a long way to fulfilling the Charter’s commitments.”
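As an illustration of the kind of analysis that public release would enable, here is a minimal sketch comparing project self-ratings against independent spot-check ratings. The file name, the column names and the rating scale are all hypothetical, since no such dataset has been released in this form.

```python
# Illustrative sketch only: compares project self-ratings with spot-check
# ratings, assuming a hypothetical CSV with one row per project and 1-6
# rating scales. None of these names come from an actual AusAID release.
import csv
from collections import Counter

with open("ausaid_project_ratings.csv", newline="") as f:  # hypothetical file
    rows = list(csv.DictReader(f))

def satisfactory_share(rows, column):
    """Share of projects rated satisfactory (4 or above) on the given column."""
    ratings = [int(r[column]) for r in rows if r[column]]
    return sum(1 for x in ratings if x >= 4) / len(ratings)

print(f"Self-rated satisfactory:  {satisfactory_share(rows, 'self_rating'):.0%}")
print(f"Spot-check satisfactory:  {satisfactory_share(rows, 'spot_check_rating'):.0%}")

# Distribution of disagreement (self-rating minus spot-check rating):
# a simple check on how optimistic the self-assessments are.
gaps = Counter(
    int(r["self_rating"]) - int(r["spot_check_rating"])
    for r in rows
    if r["self_rating"] and r["spot_check_rating"]
)
for gap in sorted(gaps):
    print(f"self minus spot check = {gap:+d}: {gaps[gap]} projects")
```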

PS: Readers may be interested in similar data made available by DFID in recent years. See the “Do we need a minimum level of failure” blog posting.

 

Conference: Measuring Impact of Higher Education for Development

From: Monday 19th March 2012 to Tuesday 20th March 2012

Venue: Birkbeck College, Malet Street, London

Organisers: London International Development Centre (LIDC); Association of Commonwealth Universities (ACU)

Background: Higher education for international development has been, in recent years, a neglected area relative to other educational interventions. Yet higher education (HE) is necessary for the attainment of Millennium Development Goals (MDGs) and for economic development in low and middle income countries.

There is a long history of development assistance interventions in HE to support development goals, directed at strengthening individual, organisational and institutional capacity. These have included scholarship programmes as well as support to specific universities and university networks in low and middle income countries, and support to academic research and training partnerships.
However, there has been little comparison of these different interventions in terms of their international development impact. This limits our understanding of “what works” in HE interventions for development, and our ability to invest effectively in future.
The aim of this two-day international conference is to examine the current status of impact evaluation for HE interventions and to identify research gaps and needs for the future. The conference will focus on three issues:
  • What has been, and should be, the development intention of HE interventions?
  • How should development impacts be measured?
  • What is our experience with measurement methods and tools to date, where are the gaps and what research priorities emerge?

The programme will be posted online soon.

Who should attend:

The conference will bring together experts from three research sectors (higher education, international development and impact evaluation), drawn from academia, think tanks, government agencies and civil society organisations. PhD students are welcome if their research is relevant to the theme of the conference.

Registration is open between 2 February and 5 March 2012.
To register, please fill in and return the registration form.
Attendance is free of charge.

Making systematic reviews work for international development research

ODI Discussion Paper, January 2012, 4 pages

Authors: Jessica Hagen-Zanker, Maren Duvendack, Richard Mallett and Rachel Slater with Samuel Carpenter and Mathieu Tromme

This briefing paper reflects upon the use of systematic reviews in international development research. It attempts to identify where a systematic review approach adds value to development research and where it becomes problematic.

The question of ‘what works’ in international development policy and practice is becoming ever more important against a backdrop of accountability and austerity. In order to answer this question, there has been a surge of interest in ‘evidence-informed policy making’.

Systematic reviews are a rigorous and transparent form of literature review, and are increasingly considered a key tool for evidence-informed policy making. Consequently, a number of donors – most notably the UK Department for International Development (DFID) and AusAID – are focusing attention and resources on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.

This briefing paper reflects upon the use of systematic reviews in international development research and argues:

  • Using systematic review principles can help researchers improve the rigour and breadth of literature reviews
  • Conducting a full systematic review is a resource intensive process and involves a number of practical challenges
  • Systematic reviews should be viewed as a means to finding a robust and sensible answer to a focused research question
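The first point can be made concrete: the screening stage of a review lends itself to a transparent, reproducible pipeline. Below is a minimal sketch of deduplication and criteria-based screening with a running tally at each step, in the spirit of a PRISMA flow diagram; the records and inclusion criteria are invented for illustration.

```python
# Illustrative sketch of the record-screening stage of a systematic review:
# deduplicate search results, apply pre-specified inclusion criteria, and
# report transparent counts at each step. All records here are invented.

records = [
    {"title": "Cash transfers and school attendance", "year": 2008, "design": "RCT"},
    {"title": "Cash transfers and school attendance", "year": 2008, "design": "RCT"},  # duplicate hit
    {"title": "Microfinance impacts in rural areas", "year": 1995, "design": "case study"},
    {"title": "Conditional transfers and health outcomes", "year": 2010, "design": "quasi-experimental"},
]

# Step 1: deduplicate on (title, year).
unique = list({(r["title"], r["year"]): r for r in records}.values())

# Step 2: screen against explicit, pre-specified inclusion criteria.
ELIGIBLE_DESIGNS = {"RCT", "quasi-experimental"}
included = [r for r in unique if r["year"] >= 2000 and r["design"] in ELIGIBLE_DESIGNS]

print(f"Records identified:       {len(records)}")
print(f"After deduplication:      {len(unique)}")
print(f"Included after screening: {len(included)}")
```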

3ie has subsequently provided this Commentary.

There has also been a discussion on ODI Blog Posts, 27 January 2012

See also DFID’s November 2011 background page, “Systematic Reviews in International Development: An Initiative to Strengthen Evidence-Informed Policy Making”.

 

What shapes research impact on policy?

…Understanding research uptake in sexual and reproductive health policy processes in resource poor contexts

Andy Sumner, Jo Crichton, Sally Theobald, Eliya Zulu and Justin Parkhurst. Health Research Policy and Systems 2011, 9(Suppl 1):S3 Published: 16 June 2011

Abstract “Assessing the impact that research evidence has on policy is complex. It involves consideration of conceptual issues of what determines research impact and policy change. There are also a range of methodological issues relating to the question of attribution and the counter-factual. The dynamics of SRH, HIV and AIDS, like many policy arenas, are partly generic and partly issue- and context-specific. Against this background, this article reviews some of the main conceptualisations of research impact on policy, including generic determinants of research impact identified across a range of settings, as well as the specificities of SRH in particular. We find that there is scope for greater cross-fertilisation of concepts, models and experiences between public health researchers and political scientists working in international development and research impact evaluation. We identify aspects of the policy landscape and drivers of policy change commonly occurring across multiple sectors and studies to create a framework that researchers can use to examine the influences on research uptake in specific settings, in order to guide attempts to ensure uptake of their findings. This framework has the advantage that it distinguishes between pre-existing factors influencing uptake and the ways in which researchers can actively influence the policy landscape and promote research uptake through their policy engagement actions and strategies. We apply this framework to examples from the case study papers in this supplement, with specific discussion about the dynamics of SRH policy processes in resource poor contexts. We conclude by highlighting the need for continued multi-sectoral work on understanding and measuring research uptake and for prospective approaches to receive greater attention from policy analysts.”

Conference: Evaluation in a Complex World – Balancing Theory and Practice

April 29 – May 1, 2012 (Sunday–Tuesday)
Seaview Resort, Galloway, NJ, USA. (http://www.dolce-seaview-hotel.com)

Organised by the Eastern Evaluation Research Society, a Regional Affiliate of the American Evaluation Association. Flyer available here

Keynote Speaker: Jennifer Greene, University of Illinois and President of AEA

Featured Speakers: Eleanor Chelimsky, U.S. Government Accountability Office and former AEA President; Rodney Hopson, Duquesne University and incoming President of AEA

Sunday Afternoon Pre-Conference Workshops and Sessions:

  • Meta-Analysis (Ning Rui, Research for Better Schools)
  • Focus Group Research: Planning and Implementation (Michelle Revels, ICF International)
  • Career Talk with the Experts (NEW!): an unstructured conversation about your evaluation career. This session is free to participants!

Sunday Evening Interactive & Networking Session: John Kelley, Villanova University

Concurrent Sessions featuring: Skill Building Sessions, Individual Presentations & Panel Sessions

A full conference program will be posted at www.eers.org by mid-February 2012.

New journal on Systematic Reviews

From the BioMed Central blog, thanks to a tweet by @bengoldacre.

“Systematic Reviews, a new journal in the BioMed Central portfolio, launches today. The journal, headed by Editors-in-Chief David Moher, Lesley Stewart and Paul Shekelle, aims to encompass all aspects of the design, conduct and reporting of systematic reviews.

As the first open access journal to focus on systematic reviews and associated literature, Systematic Reviews aims to publish high quality systematic review products including systematic review protocols, systematic reviews related to a very broad definition of health, rapid reviews, updates of already completed systematic reviews, and methods research related to the science of systematic reviews, such as decision modeling. The journal also aims to ensure that the results of all well-conducted systematic reviews are published, regardless of their outcome.

The journal supports innovation and transparency in the reporting of systematic reviews. In a thematic series published upon launch, six articles explore the importance of registering systematic reviews and review protocols, including a commentary from the Chief Medical Officer for the UK, Prof Dame Sally Davies, who writes on the value of registering reviews from a funder’s perspective.

With the launch of Systematic Reviews, the Editors-in-Chief note that ‘The explosion in the number of systematic reviews being published across a range of disciplines demonstrates widespread interest in a broad range of systematic review activities and products. Beyond the Cochrane Library there is no journal singularly devoted to all things systematic review. We hope Systematic Reviews will become that journal and that its open access status will attract authors and readers globally.’

The journal will provide an important addition to medical research, in promoting systematic reviews as an important means of analysing and assessing trial outcomes, and developing responses to failing approaches in healthcare treatments and research. The journal has already garnered support from the medical community, with Dr Ben Goldacre, author, journalist and research fellow at the London School of Hygiene and Tropical Medicine stating: ‘Medicine cannot depend on meandering essays, presenting an incomplete or inconsistent view of the scientific literature: to understand whether treatments work or not, we need complete summaries – collating all the evidence – using clearly explained methods to track it down. Systematic reviews are the key, and yet this tool is surprisingly new in medical science. At a time of rising concern about biased under-reporting of negative results, it’s good to see a new open access journal devoted to improving the science of systematic reviews.’

As the Editors-in-Chief note in their launch editorial, ‘individual studies are seldom sufficient to drive change. They are often too small to reach reliable conclusions, and for fair evaluation, it is important to look at the totality (or at least an unbiased sample of the totality) of evidence in favour of, against, or neutral to the healthcare intervention under consideration.’ Systematic Reviews aims to provide the platform for such evaluation, and in doing so, contribute to the wider development and improvement of healthcare.”

RD Comment: These developments are relevant to aid agencies that are commissioning synthesis-type studies of large fields of work, such as governance and accountability or livelihoods (both done by DFID recently), and to the evaluators considering this work. And… it’s great to see that this is an Open Access journal. Well done.

The initial issue is worth scanning, especially the editorial “Why prospective registration of systematic reviews makes sense”. See also: “Evidence summaries: the evolution of a rapid review approach”.

There is more material on the use of systematic reviews in relation to development aid interventions on the 3ie website.

Assessing the impact of human rights work: Challenges and Choices

The International Council on Human Rights Policy has produced two documents under the above-named project (see here for details of the project):

  • No Perfect Measure: Rethinking Evaluation and Assessment of Human Rights Work. Report of a Workshop, January 2012. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives—Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources
  • Role and Relevance of Human Rights Principles in Impact Assessment: An Approach Paper. July 2011. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives—Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources

PS 14 February 2012: It appears the ICHRP website is not working at present. I have uploaded a copy of the No Perfect Measure paper here.

Randomised controlled trial testing the effects of transparency on health care in Uganda

(from the great AidInfo website)

“At aidinfo we conduct research and liaise with aid donors and recipients to build up a case for aid transparency. We want to show that improving and increasing the amount that donors report on their aid contributions can help communities to track aid spending. In turn, donors and governments will be more accountable for their aid spending. It is expected that in this way aid will reach more people on the ground, helping to contribute more in the fight against poverty.

This is all well and good, but it is difficult to prove. Svensson’s work, then, is of great importance to us here.

This study by Reinikka and Svensson (2005) found that in 1995 only 20 percent of a primary education grant program to rural Uganda actually reached its intended target. This figure rose by a striking 60 percentage points: by 2001, after information was published detailing where the money was going, a full 80 percent of funds reached their intended destination, greatly improving education services in the area.

Björkman and Svensson (2009) followed up on this study with a compelling randomised controlled trial testing the effects of transparency on health care in Uganda. The experiment randomly assigned community health clinics to receive published ‘report cards’ and NGO-organised public meetings on the quality of the clinics’ health care.

The results of this transparency ‘treatment’ rivalled the effects of the best health interventions involving expensive new medicines, equipment, and procedures. Waiting time for care decreased, absenteeism among doctors and nurses plummeted, clinics got cleaner, fewer drugs were stolen, 40-50 percent more children received dietary supplements and vaccines, health services got used more, and, powerfully, 33 percent fewer children died under the age of five. This amounted to 550 saved lives in a small area of Uganda encompassing merely 55,000 households.

This is strong evidence that access to information about services empowers citizens to get better services and saves lives.”
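For readers curious about the mechanics behind such a trial, the sketch below simulates the bare bones of a two-arm cluster RCT: clinics randomly assigned to a transparency “treatment”, with the effect estimated as a simple difference in means between arms. Every number and variable name here is invented for illustration; the actual Björkman and Svensson analysis is considerably more sophisticated.

```python
# Bare-bones simulation of a two-arm cluster RCT (illustrative only).
# Clinics are randomised to a transparency "treatment" (report cards plus
# community meetings) or control; the effect is a difference in means.
import random
import statistics

random.seed(42)
N_CLINICS = 50

# Randomly assign half the clinics to treatment, half to control.
clinics = list(range(N_CLINICS))
random.shuffle(clinics)
treated = set(clinics[: N_CLINICS // 2])

def simulate_outcome(clinic_id):
    """Simulated under-five deaths per 1,000 children at one clinic,
    with a hypothetical 33% reduction in the treatment arm."""
    baseline = random.gauss(100, 15)
    return baseline * (0.67 if clinic_id in treated else 1.0)

outcomes = {c: simulate_outcome(c) for c in clinics}

treat_mean = statistics.mean(outcomes[c] for c in treated)
ctrl_mean = statistics.mean(outcomes[c] for c in clinics if c not in treated)

print(f"Treatment-arm mean: {treat_mean:.1f}")
print(f"Control-arm mean:   {ctrl_mean:.1f}")
print(f"Estimated effect:   {treat_mean - ctrl_mean:.1f} "
      f"({100 * (treat_mean / ctrl_mean - 1):.0f}% change)")
```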

Social Psychology and Evaluation

by Melvin M. Mark PhD (Editor), Stewart I. Donaldson PhD (Editor), Bernadette Campbell PhD (Editor). Guilford Press, May 2011. Available on Google Books.
Book blurb: “This compelling work brings together leading social psychologists and evaluators to explore the intersection of these two fields and how their theory, practices, and research findings can enhance each other. An ideal professional reference or student text, the book examines how social psychological knowledge can serve as the basis for theory-driven evaluation; facilitate more effective partnerships with stakeholders and policymakers; and help evaluators ask more effective questions about behavior. Also identified are ways in which real-world evaluation findings can identify gaps in social psychological theory and test and improve the validity of social psychological findings – for example, in the areas of cooperation, competition, and intergroup relations. The volume includes a useful glossary of both fields’ terms and offers practical suggestions for fostering cross-fertilization in research, graduate training, and employment opportunities. Each chapter features introductory and concluding comments from the editors.”