Special Issue on Systematic Reviews – J. of Development Effectiveness

Volume 4, Issue 3, 2012

  • Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews
  • How to do a good systematic review of effects in international development: a tool kit
    • Hugh Waddington, Howard White, Birte Snilstveit, Jorge Garcia Hombrados, Martina Vojtkova, Philip Davies, Ami Bhavsar, John Eyers, Tracey Perez Koehlmoos, Mark Petticrew, Jeffrey C. Valentine & Peter Tugwell, pages 359-387
  • Systematic reviews: from ‘bare bones’ reviews to policy relevance
  • Narrative approaches to systematic review and synthesis of evidence for international development policy and practice
  • Purity or pragmatism? Reflecting on the use of systematic review methodology in development
  • The benefits and challenges of using systematic reviews in international development research
    • Richard Mallett, Jessica Hagen-Zanker, Rachel Slater & Maren Duvendack, pages 445-455
  • Assessing ‘what works’ in international development: meta-analysis for sophisticated dummies
    • Maren Duvendack, Jorge Garcia Hombrados, Richard Palmer-Jones & Hugh Waddington, pages 456-471
  • The impact of daycare programmes on child health, nutrition and development in developing countries: a systematic review

Approches et pratiques en évaluation de programmes

New revised and expanded edition, by Christian Dagenais and Valéry Ridde, 480 pages, August 2012. University of Montreal Press

In bookstores from 20 September 2012

All the chapters of this new edition were written by educators, university teachers and trainers with many years of experience in sharing knowledge about programme evaluation, with an emphasis on practice rather than theory. Because knowledge in evaluation is constantly evolving, we have added four new chapters, on the case study strategy, economic evaluation, participatory approaches and the so-called realist approach. The first edition lacked examples of the use of mixed methods, which are described in the first part; two new chapters now fill this gap.

An essential challenge facing every teacher of evaluation is mastering the great diversity of evaluative approaches and types of evaluation. The second part of the book presents a number of case studies chosen to show clearly how the concepts set out earlier are used in practice. These chapters span several disciplinary fields and offer varied examples of evaluative practice.

Valéry Ridde, professor of global health, and Christian Dagenais, professor of psychology, both at the Université de Montréal, teach and practise programme evaluation in Quebec, Haiti and Africa.

With texts by Aristide Bado, Michael Bamberger, Murielle Bauchet, Diane Berthelette, Pierre Blaise, François Bowen, François Chagnon, Nadia Cunden, Christian Dagenais, Pierre-Marc Daigneault, Luc Desnoyers, Didier Dupont, Julie Dutil, Françoise Fortin, Pierre Fournier, Marie Gervais, Anne Guichard, Robert R. Haccoun, Janie Houle, Françoise Jabot, Steve Jacob, Kadidiatou Kadio, Seni Kouanda, Francine LaBossière, Isabelle Marcoux, Pierre McDuff, Miri Levin-Rozalis, Frédéric Nault-Brière, Bernard Perret, Pierre Pluye, Nancy L. Porteous, Michael Quinn Patton, Valéry Ridde, Émilie Robert, Patricia Rogers, Christine Rothmayr, Jim Rugh, Caroline Tourigny, Josefien Van Olmen, Sophie Witter, Maurice Yameogo and Robert K. Yin

A move to more systematic and transparent approaches in qualitative evidence synthesis

An update on a review of published papers.
By Karin Hannes and Kirsten Macaitis. Qualitative Research, 2012, 12: 402, first published online 11 May 2012

Abstract

In 2007, the journal Qualitative Research published a review on qualitative evidence syntheses conducted between 1988 and 2004. It reported on the lack of explicit detail regarding methods for searching, appraisal and synthesis, and a lack of emerging consensus on these issues. We present an update of this review for the period 2005–8. Not only has the amount of published qualitative evidence syntheses doubled, but authors have also become more transparent about their searching and critical appraisal procedures. Nevertheless, for the synthesis component of the qualitative reviews, a black box remains between what people claim to use as a synthesis approach and what is actually done in practice. A detailed evaluation of how well authors master their chosen approach could provide important information for developers of particular methods, who seem to succeed in playing the game according to the rules. Clear methodological instructions need to be developed to assist others in applying these synthesis methods.

Working with Assumptions in International Development Program Evaluation

By Apollo M. Nkwake, with a Foreword by Michael Bamberger. 2013, XXI, 184 p., 14 illus., 7 in color. Published by Springer and available on Amazon

Publisher description

“Provides tools for understanding effective development programming and quality program evaluations. Contains workshop materials for graduate students and in-service training for development evaluators. The author brings together more than 12 years of experience in the evaluation of international development programs.

Regardless of geography or goal, development programs and policies are fueled by a complex network of implicit ideas. Stakeholders may hold assumptions about purposes, outcomes, methodology, and the value of project evaluation and evaluators—which may or may not be shared by the evaluators. Even when all participants share goals, failure to recognize and articulate assumptions can impede clarity and derail progress.

Working with Assumptions in International Development Program Evaluation probes their crucial role in planning, and their contributions in driving, global projects involving long-term change. Drawing on his extensive experience in the field, the author offers elegant logic and instructive examples to relate assumptions to the complexities of program design and implementation, particularly in weighing their outcomes. The book emphasizes clarity of purpose, respect among collaborators, and collaboration among team members who might rarely or never meet otherwise. Importantly, the book is a theoretical and practical volume that:

  • Introduces the multiple layers of assumptions on which global interventions are based.
  • Explores various approaches to the evaluation of complex interventions, with their underlying assumptions.
  • Identifies ten basic types of assumptions and their implications for program development and evaluation.
  • Provides examples of assumptions influencing design, implementation, and evaluation of development projects.
  • Offers guidelines on identifying, explicating, and evaluating assumptions.

A first-of-its-kind resource, Working with Assumptions in International Development Program Evaluation opens out the processes of planning, implementation, and assessment for professionals in global development, including practitioners, development economists, global development program designers, and nonprofit personnel.”

Rick Davies comment: Looks potentially useful, but VERY expensive at £85.50. Few individuals will buy it, but organisations might. Ideally the author would make a cheaper paperback version available, and Amazon should provide a “Look inside this book” option to help people decide whether spending £85.50 would be worthwhile. PS: I think the publishers, and maybe the author, would fail the marshmallow test.

Rick Davies postscript: The Foreword, Preface and Contents pages of the book are available as a pdf on the Springer website.

What Causes What & Hypothesis testing: Truth and Evidence

Two very useful chapters in Denise Cummins (2012) “Good Thinking”, Cambridge University Press

Cummins is a professor of psychology and philosophy, both of which she brings to bear in this great book. Read an interview with the author here.

Contents include:

1. Introduction
2. Rational choice: choosing what is most likely to give you what you want
3. Game theory: when you’re not the only one choosing
4. Moral decision-making: how we tell right from wrong
5. The game of logic
6. What causes what?
7. Hypothesis testing: truth and evidence
8. Problem solving: another way of getting what you want
9. Analogy: this is like that.

New Directions for Evaluation: Promoting Valuation in the Public Interest: Informing Policies for Judging Value in Evaluation

Spring 2012, Volume 2012, Issue 133, pages 1–129. Buy here

Editor’s Notes – George Julnes

  1. Editor’s notes (pages 1–2)

Research Articles

  1. Managing valuation (pages 3–15), George Julnes
  2. The logic of valuing (pages 17–28), Michael Scriven
  3. The evaluator’s role in valuing: Who and with whom (pages 29–41), Marvin C. Alkin, Anne T. Vo and Christina A. Christie
  4. Step arounds for common pitfalls when valuing resources used versus resources produced (pages 43–52), Brian T. Yates
  5. When one must go: The Canadian experience with strategic review and judging program value (pages 65–75), François Dumaine
  6. Valuing, evaluation methods, and the politicization of the evaluation process (pages 77–83), Eleanor Chelimsky
  7. Valuation and the American Evaluation Association: Helping 100 flowers bloom, or at least be understood? (pages 85–90), Michael Morris

Integrated Monitoring: A Practical Manual for Organisations That Want to Achieve Results

Written by Sonia Herrero, InProgress, Berlin, April 2012. 43 pages. Available as pdf

“The aim of this manual is to help those working in the non-profit sector — non-governmental organisations (NGOs) and other civil society organisations (CSOs) — and the donors which fund them, to observe more accurately what they are achieving through their efforts and to ensure that they make a positive difference in the lives of the people they want to help. Our interest in writing this guide has grown out of the desire to help bring some conceptual clarity to the concepts of monitoring and to determine ways in which they can be harnessed and used more effectively by non-profit practitioners.

The goal is to help organisations build monitoring and evaluation into all their project management efforts. We want to demystify the monitoring process and make it as simple and accessible as possible. We have made a conscious choice to avoid technical language, and instead use images and analogies that are easier to grasp. There is a glossary at the end of the manual which contains the definitions of any terms you may be unfamiliar with. This manual is organised into two parts. The first section covers the ‘what’ and ‘why’ of monitoring and evaluation; the second addresses how to do it.”

These materials may be freely used and copied by non-profit organisations for capacity building purposes, provided that inProgress and authorship are acknowledged. They may not be reproduced for commercial gain.

Contents
Introduction
I. KEY ASPECTS OF MONITORING
1. What is Monitoring?
2. Why Do We Monitor and For Whom?
3. Who is Involved?
4. How Does it Work?
5. When Do We Monitor?
6. What Do We Monitor?
6.1 Monitoring What We Do
II. HOW DO WE MONITOR?
1. Steps for Setting Up a Monitoring System
2. How to Monitor the Process and the Outputs
3. How to Monitor the Achievement of Results
3.1 Define Results/Outcomes
3.2 Define Indicators for Results
4. Prepare a Detailed Monitoring Plan
5. Identify Sources of Information
6. Data Collection
6.1 Tools for Data Compilation
7. Reflection and Analysis
7.1 Documenting and Sharing
8. Learning and Reviewing
8.1 Learning
8.2 Reviewing
9. Evaluation
Conclusion
Glossary
References

Magenta Book – HM Treasury guidance on evaluation for Central Government (UK)

27 April 2011

“The Magenta Book is HM Treasury guidance on evaluation for Central Government, but will also be useful for all policy makers, including in local government, charities and the voluntary sectors. It sets out the key issues to consider when designing and managing evaluations, and the presentation and interpretation of evaluation results. It describes why thinking about evaluation before and during the policy design phase can help to improve the quality of evaluation results without needing to hinder the policy process.

The book is divided into two parts.

Part A is designed for policy makers. It sets out what evaluation is, and what the benefits of good evaluation are. It explains in simple terms the requirements for good evaluation, and some straightforward steps that policy makers can take to make a good evaluation of their intervention more feasible.

Part B is more technical, and is aimed at analysts and interested policy makers. It discusses in more detail the key steps to follow when planning and undertaking an evaluation and how to answer evaluation research questions using different evaluation research designs. It also discusses approaches to the interpretation and assimilation of evaluation evidence.

The Magenta Book will be supported by a wide range of forthcoming supplementary guidance containing more detailed guidance on particular issues, such as statistical analysis and sampling. Until these are available please refer to the relevant chapters of the original Magenta Book.”

The Magenta Book is available for download in PDF format.

An introduction to systematic reviews

Book published in March 2012 by Sage. Authors: David Gough, Sandy Oliver, James Thomas

Read Chapter One pdf: Introducing systematic reviews

Contents:

1. Introducing Systematic Reviews David Gough, Sandy Oliver and James Thomas
2. Stakeholder Perspectives and Participation in Reviews Rebecca Rees and Sandy Oliver
3. Commonality and Diversity in Reviews David Gough and James Thomas
4. Getting Started with a Review Sandy Oliver, Kelly Dickson, and Mark Newman
5. Information Management in Reviews Jeff Brunton and James Thomas
6. Finding Relevant Studies Ginny Brunton, Claire Stansfield & James Thomas
7. Describing and Analysing Studies Sandy Oliver and Katy Sutcliffe
8. Quality and Relevance Appraisal Angela Harden and David Gough
9. Synthesis: Combining results systematically and appropriately James Thomas, Angela Harden and Mark Newman
10. Making a Difference with Systematic Reviews Ruth Stewart and Sandy Oliver
11. Moving Forward David Gough, Sandy Oliver and James Thomas

“Six Years of Lessons Learned in Monitoring and Evaluating Online Discussion Forums”

by Megan Avila, Kavitha Nallathambi, Catherine Richey and Lisa Mwaikambo – in Knowledge Management & E-Learning: An International Journal (KM&EL), Vol 3, No 4 (2011)

…which looks at how to evaluate virtual discussion forums held on the IBP (Implementing Best Practices in Reproductive Health) Knowledge Gateway – a platform for global health practitioners to exchange evidence-based information and knowledge to inform practice. Available as pdf. Found courtesy of Yaso Kunaratnam, IDS

Abstract: “This paper presents the plan for evaluating virtual discussion forums held on the Implementing Best Practices in Reproductive Health (IBP) Knowledge Gateway, and its evolution over six years. Since 2005, the World Health Organization Department of Reproductive Health and Research (WHO/RHR), the Knowledge for Health (K4Health) Project based at Johns Hopkins Bloomberg School of Public Health’s Center for Communication Programs (JHU·CCP), and partners of the IBP Initiative have supported more than 50 virtual discussion forums on the IBP Knowledge Gateway. These discussions have provided global health practitioners with a platform to exchange evidence-based information and knowledge with colleagues working around the world. In this paper, the authors discuss challenges related to evaluating virtual discussions and present their evaluation plan for virtual discussions. The evaluation plan included the following three stages: (I) determining the value of the discussion forums, (II) in-depth exploration of the data, and (III) reflection and next steps, and was guided by the “Conceptual Framework for Monitoring and Evaluating Health Information Products and Services”, which was published as part of the Guide to Monitoring and Evaluation of Health Information Products and Services. An analysis of data from 26 forums is presented and discussed in light of this framework. The paper also includes next steps for improving the evaluation of future virtual discussions.”
