BEHIND THE SCENES: MANAGING AND CONDUCTING LARGE SCALE IMPACT EVALUATIONS IN COLOMBIA

by Bertha Briceño, Water and Sanitation Program, World Bank; Laura Cuesta, University of Wisconsin-Madison; Orazio Attanasio, University College London
December 2011, 3ie Working Paper 14, available as pdf

“Abstract: As more resources are being allocated to impact evaluation of development programs, the need to map out the utilization and influence of evaluations has been increasingly highlighted. This paper aims at filling this gap by describing and discussing experiences from four large impact evaluations in Colombia on a case-study basis. On the basis of (1) learning from our prior experience in both managing and conducting impact evaluations, (2) desk review of available documentation from the Monitoring & Evaluation system, and (3) structured interviews with government actors, evaluators and program managers, we benchmark each evaluation against eleven standards of quality. From this benchmarking exercise, we derive five key lessons for conducting high quality and influential impact evaluations: (1) investing in the preparation of good terms of reference and identification of evaluation questions; (2) choosing the best methodological approach to address the evaluation questions; (3) adopting mechanisms to ensure evaluation quality; (4) laying out the incentives for involved parties in order to foster evaluation buy-in; and (5) carrying out a plan for quality dissemination.”

Dealing with complexity through Planning, Monitoring & Evaluation

Mid-term results of a collective action research process.
Authors: Jan Van Ongevalle, Anneke Maarse, Cristien Temmink, Eugenia Boutylkova and Huib Huyse. Published January 2012
Praxis Paper 26, available as pdf

(Text from INTRAC website) “Written by staff from PSO and HIVA, this paper shares the first results of an ongoing collaborative action research in which ten development organisations explored different Planning, Monitoring and Evaluation (PME) approaches with the aim of dealing more effectively with complex processes of social change.

This paper may be of interest as:
1) It illustrates a practical example of action research whereby the organisations themselves are becoming the researchers.
2) Unpacking the main characteristics of complexity, the paper uses an analytic framework of four questions to assess the effectiveness of a PME approach in dealing with complex social change.
3) An overview is given of how various organisations implemented different PME approaches (e.g. outcome mapping, most significant change, client satisfaction instruments) in order to deal with complex change.
4) The paper outlines the meaning and the importance of a balanced PME approach, including its agenda, its underlying principles and values, its methods and tools and the way it is implemented in a particular context.”

World Bank – Raising the Bar on Transparency, Accountability and Openness

Blog posting by Hannah George on Thu, 02/16/2012 – 18:01. Found via @TimShorten

“The World Bank has taken landmark steps to make information accessible to the public and globally promote transparency and accountability, according to the first annual report on the World Bank’s Access to Information (AI) Policy.” [20/02/2012 – link is not working – here is a link to a related doc, World Bank Policy on Access to Information Progress Report: January through March 2011]

“The World Bank’s Access to Information Policy continues to set the standard for other institutions to strive for,” said Chad Dobson, executive director of the Bank Information Center. Publish What You Fund recently rated the Bank “best performer” in terms of aid transparency out of 58 donors for the second year in a row. Furthermore, the Center for Global Development and Brookings ranked the International Development Association (the World Bank’s Fund for the Poorest) as a top donor in transparency and learning in its 2011 Quality of Official Development Assistance Assessment (QuODA).

“Unleashing the potential of AusAID’s performance data”

A posting on the Development Policy Blog by Stephen Howes, 15 February 2012.

This blog examines the latest annual report from AusAID’s Office of Development Effectiveness (ODE), released just before Christmas 2010. The report was published in two parts: one providing an international comparative perspective (summarized in an earlier blog), the other drawing on and assessing internal performance reporting. In this post the author continues his analysis with the “internal assessment” report.

He points out how the report data shows that poor performance is a much more significant problem than outright fraud. He also examines the results of ODE’s spot checks on the quality of the self-assessment ratings. There is much else in the blog that is also of interest.

Of special interest are the concluding paras: “This systematic collation of project self-ratings and the regular use of spot checks is best practice for any aid agency, and something AusAID should take pride in. The problem is that, as illustrated above, the reporting and analysis of these two rich sources of data is at the current time hardly even scratching the surface of their potential.

One way forward would be for ODE or some other part of AusAID to undertake and publish a more comprehensive report and analysis of this data. That would be a good idea, both to improve aid effectiveness and to enhance accountability.

But I have another suggestion. If the data is made public, we can all do our own analysis. This would tremendously enhance the debate in Australia on aid effectiveness, and take the attention away from red herrings such as fraud towards real challenges such as value for money.

AusAID’s newly-released Transparency Charter [pdf] commits the organization to publishing “detailed information on AusAID’s work” including “the results of Australian aid activities and our evaluations and research.”  The annual release of both the self-ratings and the spot-checks would be a simple step, but one which would go a long way to fulfilling the Charter’s commitments.”

PS: Readers may be interested in similar data made available by DFID in recent years. See the “Do we need a minimum level of failure” blog posting

 

Conference: Measuring Impact of Higher Education for Development

From: Monday 19th March 2012 to Tuesday 20th March 2012

Venue:  Birkbeck College, Malet Street, London

Organisers: London International Development Centre (LIDC); Association of Commonwealth Universities (ACU)

Background: Higher education for international development has been, in recent years, a neglected area relative to other educational interventions. Yet higher education (HE) is necessary for the attainment of Millennium Development Goals (MDGs) and for economic development in low and middle income countries.

There is a long history of development assistance interventions in HE to support development goals, directed at strengthening individual, organisational and institutional capacity. These have included scholarship programmes as well as support to specific universities and university networks in low and middle income countries, and support to academic research and training partnerships.
However, there has been little comparison of these different interventions in terms of their international development impact. This limits our understanding of “what works” in HE interventions for development, and our ability to invest effectively in future.
The aim of this two-day international conference is to examine the current status of impact evaluation for HE interventions and to identify research gaps and needs for the future. The conference will focus on three issues:
  • What has been, and should be, the development intention of HE interventions?
  • How should development impacts be measured?
  • What is our experience with measurement methods and tools to date, where are the gaps and what research priorities emerge?

The programme will be posted online soon.

Who should attend:

The conference will bring together experts from three research sectors: higher education, international development and impact evaluation from academia, think tanks, government agencies and civil society organisations. PhD students are welcome if their research is relevant to the theme of the conference.

Registration is open between 2 February and 5 March 2012.
To register, please fill in and return the registration form.
Attendance is free of charge.

Making systematic reviews work for international development research

ODI Discussion Paper, January 2012, 4 pages

Authors: Jessica Hagen-Zanker, Maren Duvendack, Richard Mallett and Rachel Slater with Samuel Carpenter and Mathieu Tromme

This briefing paper reflects upon the use of systematic reviews in international development research. It attempts to identify where a systematic review approach adds value to development research and where it becomes problematic.

The question of ‘what works’ in international development policy and practice is becoming ever more important against a backdrop of accountability and austerity. In order to answer this question, there has been a surge of interest in ‘evidence-informed policy making’.

Systematic reviews are a rigorous and transparent form of literature review, and are increasingly considered a key tool for evidence-informed policy making. Subsequently, a number of donors – most notably the UK Department for International Development (DFID) and AusAID – are focusing attention and resources on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.

This briefing paper reflects upon the use of systematic reviews in international development research and argues:

  • Using systematic review principles can help researchers improve the rigour and breadth of literature reviews
  • Conducting a full systematic review is a resource intensive process and involves a number of practical challenges
  • Systematic reviews should be viewed as a means to finding a robust and sensible answer to a focused research question

3ie has subsequently provided this Commentary

There has also been a discussion on ODI Blog Posts, 27 January 2012

See also the DFID Nov 2011 background page on “Systematic Reviews in International Development: An Initiative to Strengthen Evidence-Informed Policy Making”

 

What shapes research impact on policy?

…Understanding research uptake in sexual and reproductive health policy processes in resource poor contexts

Andy Sumner, Jo Crichton, Sally Theobald, Eliya Zulu and Justin Parkhurst. Health Research Policy and Systems 2011, 9(Suppl 1):S3 Published: 16 June 2011

Abstract “Assessing the impact that research evidence has on policy is complex. It involves consideration of conceptual issues of what determines research impact and policy change. There are also a range of methodological issues relating to the question of attribution and the counter-factual. The dynamics of SRH, HIV and AIDS, like many policy arenas, are partly generic and partly issue- and context-specific. Against this background, this article reviews some of the main conceptualisations of research impact on policy, including generic determinants of research impact identified across a range of settings, as well as the specificities of SRH in particular. We find that there is scope for greater cross-fertilisation of concepts, models and experiences between public health researchers and political scientists working in international development and research impact evaluation. We identify aspects of the policy landscape and drivers of policy change commonly occurring across multiple sectors and studies to create a framework that researchers can use to examine the influences on research uptake in specific settings, in order to guide attempts to ensure uptake of their findings. This framework has the advantage that it distinguishes between pre-existing factors influencing uptake and the ways in which researchers can actively influence the policy landscape and promote research uptake through their policy engagement actions and strategies. We apply this framework to examples from the case study papers in this supplement, with specific discussion about the dynamics of SRH policy processes in resource poor contexts. We conclude by highlighting the need for continued multi-sectoral work on understanding and measuring research uptake and for prospective approaches to receive greater attention from policy analysts.”

Conference: Evaluation in a Complex World -Balancing Theory and Practice

April 29- May 1, 2012 (Sunday-Tuesday)
Seaview Resort, Galloway, NJ, USA. (http://www.dolce-seaview-hotel.com)

Organised by the Eastern Evaluation Research Society, a Regional Affiliate of the American Evaluation Association. Flyer available here

Keynote Speaker: Jennifer Greene, University of Illinois, President of AEA

Featured Speakers: Eleanor Chelimsky, U.S. Government Accountability Office, former AEA President; Rodney Hopson, Duquesne University, incoming President of AEA

Sunday Afternoon Pre-Conference Workshops and Sessions:
  • Meta-Analysis – Ning Rui, Research for Better Schools
  • Focus Group Research: Planning and Implementation – Michelle Revels, ICF International
  • Career Talk with the Experts (NEW!): an unstructured conversation about your evaluation career. This session is free to participants!

Sunday Evening Interactive & Networking Session: John Kelley, Villanova University

Concurrent Sessions Featuring: Skill Building Sessions, Individual Presentations & Panel Sessions

A full conference program will be posted at www.eers.org by mid-February 2012.

New journal on Systematic Reviews

From the BioMed Central Blog; found via a tweet by @bengoldacre

“Systematic Reviews, a new journal in the BioMed Central portfolio, launches today. The journal, headed by Editors-in-Chief David Moher, Lesley Stewart and Paul Shekelle, aims to encompass all aspects of the design, conduct and reporting of systematic reviews.

As the first open access journal to focus on systematic reviews and associated literature, Systematic Reviews aims to publish high quality systematic review products including systematic review protocols, systematic reviews related to a very broad definition of health, rapid reviews, updates of already completed systematic reviews, and methods research related to the science of systematic reviews, such as decision modeling. The journal also aims to ensure that the results of all well-conducted systematic reviews are published, regardless of their outcome.

The journal supports innovation and transparency in the reporting of systematic reviews. In a thematic series published upon launch, six articles explore the importance of registering systematic reviews and review protocols, including a commentary from the Chief Medical Officer for the UK, Prof Dame Sally Davies, who writes on the value of registering reviews from a funder’s perspective.

With the launch of Systematic Reviews, the Editors-in-Chief note that ‘The explosion in the number of systematic reviews being published across a range of disciplines demonstrates widespread interest in a broad range of systematic review activities and products. Beyond the Cochrane Library there is no journal singularly devoted to all things systematic review. We hope Systematic Reviews will become that journal and that its open access status will attract authors and readers globally.’

The journal will provide an important addition to medical research, in promoting systematic reviews as an important means of analysing and assessing trial outcomes, and developing responses to failing approaches in healthcare treatments and research. The journal has already garnered support from the medical community, with Dr Ben Goldacre, author, journalist and research fellow at the London School of Hygiene and Tropical Medicine stating: ‘Medicine cannot depend on meandering essays, presenting an incomplete or inconsistent view of the scientific literature: to understand whether treatments work or not, we need complete summaries – collating all the evidence – using clearly explained methods to track it down. Systematic reviews are the key, and yet this tool is surprisingly new in medical science. At a time of rising concern about biased under-reporting of negative results, it’s good to see a new open access journal devoted to improving the science of systematic reviews.’

As the Editors-in-Chief note in their launch editorial, ‘individual studies are seldom sufficient to drive change. They are often too small to reach reliable conclusions, and for fair evaluation, it is important to look at the totality (or at least an unbiased sample of the totality) of evidence in favour of, against, or neutral to the healthcare intervention under consideration.’ Systematic Reviews aims to provide the platform for such evaluation, and in doing so, contribute to the wider development and improvement of healthcare.”

RD Comment: These developments are relevant to aid agencies who are commissioning synthesis-type studies of large fields of work, such as governance and accountability or livelihoods (both done by DFID recently), and to the evaluators considering this work. And… it’s great to see that this is an Open Access journal. Well done.

The initial issue is worth scanning, especially the editorial “Why prospective registration of systematic reviews makes sense”. See also: “Evidence summaries: the evolution of a rapid review approach”

There is more material on the use of systematic reviews of development aid interventions on the 3ie website

Assessing the impact of human rights work: Challenges and Choices

The International Council on Human Rights Policy has produced two documents under the above-named project (see here for details of the project):

  • No Perfect Measure: Rethinking Evaluation and Assessment of Human Rights Work. Report of a Workshop, January 2012. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives—Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources
  • Role and Relevance of Human Rights Principles in Impact Assessment: An Approach Paper. July 2011. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives—Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources

PS 14 February 2012: It appears the ICHRP website is not working at present. I have uploaded a copy of the No Perfect Measure paper here