Improving health services through community score cards. A case study from Andhra Pradesh, India

Case study 1, Andhra Pradesh, India: improving health services through community score cards
MISRA, Vivek et al., August 2007

This eight-page note summarises the findings, processes, concerns, and lessons learned from a project in Andhra Pradesh – one of six pilot projects applying specific social accountability tools in different service delivery contexts.

SYSTEMS CONCEPTS IN EVALUATION: AN EXPERT ANTHOLOGY

Bob Williams and Iraj Imam (eds.)
EdgePress/American Evaluation Association (2007)

Systems Concepts in Evaluation: An Expert Anthology brings
together a wide range of systems concepts, methodologies and
methods and applies them to evaluation settings. This book
addresses the questions:

• What is a systems approach?
• What makes it different from other approaches?
• Why is it relevant to evaluation?

The 14 chapters cover a wide range of systems concepts and methods. Most chapters are case-study based and describe the use of systems concepts in real-life evaluations. The approaches and methods
covered include:
covered include:

• System Dynamics (both quantitative and qualitative)
• Cybernetics and the Viable System Model
• Soft Systems Methodology
• Critical Systems Thinking
• Complex Adaptive Systems

There are also overview chapters that explore the history and diversity of systems approaches and their potential within the evaluation field. A substantial introduction by Gerald Midgley traces the key developments in systems concepts and methods over the past 50 years and explores the implications of each of those developments for evaluation.

Although focused on evaluation, the book is a valuable source for anyone interested in systems concepts,
action research and reflective inquiry. It is useful for both teaching and practice.

Chapters :
Introduction, Iraj Imam, Amy LaGoy, Bob Williams and authors
Systems Thinking for Evaluation, Gerald Midgley
A Systemic Evaluation of an Agricultural Development: A Focus on the Worldview Challenge,
Richard Bawden
System Dynamics-based Computer Simulations and Evaluation, Daniel D Burke
A Cybernetic Evaluation of Organizational Information Systems, Dale Fitch, Ph.D.
Soft Systems in a Hardening World: Evaluating Urban Regeneration, Kate Attenborough
Using Dialectic Soft Systems Methodology as an Ongoing Self-evaluation Process for a
Singapore Railway Service Provider, Dr Boon Hou Tay & Mr Bobby Kee Pong Lim
Evaluation Based on Critical Systems Heuristics, Martin Reynolds
Human Systems Dynamics: Complexity-based Approach to a Complex Evaluation, Glenda H
Eoyang, Ph.D.
Evaluating Farm and Food Systems in the US, Kenneth A Meter
Systemic Evaluation in the Field of Regional Development, Richard Hummelbrunner
Evaluation in Complex Governance Arenas: the Potential of Large System Action Research,
Danny Burns
Evolutionary and Behavioral Characteristics of Systems, Jay Forrest
Concluding Comments, Iraj Imam, Amy LaGoy, Bob Williams and authors

PUBLICATION AND PURCHASE DETAILS

NAME : Systems Concepts in Evaluation : An Expert Anthology
EDITORS : Bob Williams and Iraj Imam
PAGES : 222pp

ISBN 978-0-918528-22-3 paperback
ISBN 978-0-918528-21-6 hardbound

PUBLISHER :

EdgePress/American Evaluation Association (2007)

PURCHASE

Available via Amazon : hardback only, US$36 plus postage

Pathways for change: monitoring and evaluation

This Brief is an edited summary, prepared by Susanne Turrall, of a paper written by Kath Pasteur
and Susanne Turrall (2006): A synthesis of monitoring and evaluation experience in the Renewable
Natural Resources Research Strategy.

“Monitoring and evaluation (M&E) plays a central role in ensuring accountability, informing decision-making and, more broadly, facilitating learning. The programmes within the DFID-funded Renewable Natural Resources Research Strategy (RNRRS) have developed some innovative methods of M&E. The RNRRS also saw an evolution in thinking in M&E, moving from a focus on the M&E of research products to a recognition that the context and mechanisms for adoption of research products are equally important, as is the effect on poverty reduction.”

Horizontal Evaluation: Fostering Knowledge Sharing and Program Improvement within a Network

Authors: Thiele, Graham; Devaux, Andre; Velasco, Claudio; Horton, Douglas
American Journal of Evaluation, v28 n4 p493-508 2007

Abstract: Horizontal evaluation combines self-assessment and external evaluation by peers. Papa Andina, a regional network that works to reduce rural poverty in the Andean region by fostering innovation in potato production and marketing, has used horizontal evaluations to improve the work of local project teams and to share knowledge within the network. In a horizontal evaluation workshop, a project team and peers from other organizations independently assess the strengths and weaknesses of a research and development (R&D) approach being developed and then compare the assessments. Project team members formulate recommendations for improving the R&D approach, and peers consider ways to apply it back home. Practical results of horizontal evaluation have included strengthening the R&D approaches being developed, experimenting with their use at new sites, improvements in other areas of work, and strengthened interpersonal relations among network members.

Also available as ILAC Brief: http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief13_Horizontal_Evaluation.pdf

A Spanish version of the same Brief is also available.

Evaluation Of Citizens’ Voice & Accountability – Review Of The Literature & Donor Approaches Report

O’Neill, T., Foresti, M. and Hudson, A. (2007) Evaluation of Citizens’ Voice and Accountability: Review of the Literature and Donor Approaches. London: DFID.

Excerpt

1.3 A core group of DAC partners are collaborating on a joint evaluation of development
aid for strengthening citizens’ voice and the accountability of public institutions. The
Overseas Development Institute has been contracted to undertake the first stage of
this evaluation, which involves the development and piloting of an evaluation
framework. This literature review is the first output from this first phase. It aims to: (i)
review the theoretical debates on voice and accountability and how they relate to
development; (ii) review the different donor approaches to supporting voice and
accountability and identify commonalities and differences across contexts; (iii)
provide an overview of evaluation theory and practice in relation to voice and
accountability interventions; and (iv) identify key knowledge gaps in relation to the
effectiveness of donors in supporting voice and accountability.

1.4 This review has three main sections. Section 2 surveys the academic literature to
present current thinking on what voice and accountability means, how they operate in
practice and how they relate to the achievement of broader development objectives.
Section 3 turns to the donors’ own understanding of voice and accountability as set
out in their relevant policy and guidance documents. It discusses how the donors see
voice and accountability contributing to their poverty reduction mandates and what
approaches they have adopted to strengthen them, including in different contexts.
Section 4 considers the main issues relating to the evaluation of interventions to
strengthen voice and accountability. It first reviews some of the methodological
debates in the theoretical literature before summarising the donors’ own evaluative
efforts in this field, identifying both common findings and key gaps in their
knowledge.

Contents:
1. Introduction 1
2. Voice and Accountability: A view from the literature 3
Voice and accountability: a basic static model 3
Voice and accountability: a complex dynamic reality 5
Relating voice and accountability to other key concepts 6
Voice, accountability and development outcomes 9
3. Voice and accountability: A view from the donors 13
Why do donors want to strengthen voice and accountability? 13
What strategies do donors adopt for strengthening voice and accountability? 18
Do donor approaches take account of context? 25
4. Evaluating voice and accountability 29
Approaches and frameworks for evaluating voice and accountability interventions 29
What have donors learnt about their effectiveness? 36
5. Conclusions 47
Annexes 49
References 53

Negotiated Learning: Collaborative Monitoring for Forest Resource Management

(via Pelican email list)

Dear all

Niels has asked me to make you aware of a new publication that some
‘Pelican-ers’ might find relevant.

I have edited a book on how learning and monitoring can become better
‘friends’ than is currently usually the case. The book comes off the press
tomorrow. The full reference: Guijt, Irene, ed. (2007). Negotiated
Learning: Collaborative Monitoring for Forest Resource Management.
Washington DC: Resources for the Future/Center for International Forestry
Research. Although the cases in the book focus on natural resource (forest)
management, the issues about how to create genuine learning through the
construction, negotiation and implementation of a monitoring process will
have much wider relevance.

Full details on how to obtain the book can be found at :
http://www.rff.org/rff/RFF_Press/CustomBookPages/Negotiated-Learning.cfm ,
where the book is described as follows :

“The first book to critically examine how monitoring can be an effective
tool in participatory resource management, Negotiated Learning draws on the
first-hand experiences of researchers and development professionals in
eleven countries in Africa, Asia, and South America. Collective monitoring
shifts the emphasis of development and conservation professionals from
externally defined programs to a locally relevant process. It focuses on
community participation in the selection of the indicators to be monitored
as well as in the learning and application of knowledge from the data that
are collected. As with other aspects of collaborative management,
collaborative monitoring emphasizes building local capacity so that
communities can gradually assume full responsibility for the management of
their resources. The cases in Negotiated Learning highlight best practices
but stress that collaborative monitoring is a relatively new area of theory
and practice. The cases focus on four themes: the
challenge of data-driven monitoring in forest systems that supply multiple
products and serve diverse functions and stakeholders; the importance of
building upon existing dialogue and learning systems; the need to better
understand social and political differences among local users and other
stakeholders; and the need to ensure the continuing adaptiveness of
monitoring systems.”

PS: Links to full texts of some chapters

Chap8_McDougall.pdf

Chapter10_Kamoto.pdf

Chap13_Conclusion.pdf

Greetings,

irene

Learning by Design

Bredeweg 31, 6668 AR Randwijk, The Netherlands
Tel. (0031) 488-491880 Fax. (0031) 488-491844

Stories of Significance: Redefining Change – An assortment of community voices and articulations

(via the AIDS Alliance India website)

“A report based on an evaluation of a programme on “Community Driven Approaches to Address the Feminisation of HIV/AIDS in India” by means of the ‘Most Significant Change’ Technique: Using the participatory evaluation technique, Most Significant Change (MSC), this report derives its findings from the MSC evaluation of work from Alliance India’s recently concluded DFID-supported programme on community-driven approaches for addressing the feminisation of HIV/AIDS in India. The MSC technique is a participatory monitoring tool based on gathering and analysing stories of important or significant changes from a cross-section of target groups, to provide a richer picture of the impact of programme interventions. This document is written in a lucid manner and contains many new insights for the purposes of learning for future programming in relation to sexual and reproductive health and HIV/AIDS integration and HIV/AIDS programming for women.”

(English) Stories of Significance Redefining Change.pdf

For information about how to get a printed copy of this report, please contact: India HIV/AIDS Alliance info@allianceindia.org

The utilisation of evaluations

Chapter 3: ALNAP Review of Humanitarian Action in 2005

Peta Sandison

This chapter draws upon a literature review, four case studies of evaluation
utilisation volunteered by CAFOD, MSF(H), OCHA and USAID, semi-structured
interviews with 45 evaluators, evaluation managers and evaluation ‘users’, a
review of 30 sets of terms of reference, and an electronic survey sent to ALNAP
Observer and Full Members (19 evaluation managers and 27 evaluators
responded to the survey).

The researcher was supported by an advisory group composed of representatives
from the British Red Cross Society, CARE International, MSF(H), the Netherlands
Ministry of Foreign Affairs (Policy and Operations Evaluation Department), OCHA
and ODI, plus an independent evaluation consultant and the ALNAP Secretariat.

Results-based Management in CIDA

[from CIDA website]
CIDA uses results-based management (RBM) to better manage Canada’s international development programming from start (investment or project planning and implementation) to finish (evaluation, reporting, and integrating lessons learned into future programming). This page provides a set of comprehensive guides linked to the Results-Based Management Policy Statement 2008.

Note: The Performance Management Division of CIDA’s Strategic Policy and Performance Branch is updating results-based management (RBM) guides and supporting documents according to CIDA’s 2008 RBM policy. If you need access to one of the documents below, or if you have other requests regarding RBM at CIDA, please contact the Performance Management Division.

  • Annex 4 of the Guide for Preparing a Country Development Programming Framework: The Performance Measurement Framework
  • A Results Approach To Developing the Implementation Plan (March 2001)
  • RBM Handbook on Developing Results Chain (December 2000)
  • Guide to Project Performance Reporting: For Canadian Partners and Executing Agencies (May 1999)
  • Results-based Management in CIDA: An Introductory Guide to the Concepts and Principles (January 1999)
  • The Logical Framework: Making It Results-Oriented (November 1997)

Monitoring government policies: A toolkit for civil society organisations in Africa

(identified via Source)

The toolkit was produced by CAFOD, Christian Aid and Trócaire.

This project was started by the three agencies with a view to supporting partner organisations, particularly church-based organisations, to hold their governments to account for the consequences of their policies. The toolkit specifically targets African partners, seeking to share the struggles and successes of partners already monitoring government policies with those that are new to this work.
The development of this toolkit has been an in-depth process. Two consultants were
commissioned to research and write the toolkit. They were supported by a reference group
composed of staff from CAFOD, Christian Aid and Trócaire and partner organisations with
experience in policy monitoring. The draft toolkit was piloted with partners in workshops
in Malawi, Sierra Leone and Ethiopia. Comments from the reference group and the
workshops contributed to this final version of the toolkit.

Contents

INTRODUCTION  1
CHAPTER ONE: GETTING STARTED
1.1  Core concepts in policy monitoring 5
1.2  Identifying problems, causes and solutions 8
1.3  Beginning to develop a monitoring approach 10
Interaction  13
CHAPTER TWO: CHOOSING POLICIES AND COLLECTING INFORMATION
2.1  Different kinds of policies 15
2.2  Which policies to monitor 18
2.3  Access to policy information  22
2.4  Collecting policy documents 24
Interaction   27
CHAPTER THREE: IDENTIFYING POLICY STAKEHOLDERS
3.1  Stakeholders of government policies 29
3.2  Target audiences and partners  31
3.3  Monitoring by a network of stakeholders 34
Interaction  37
CHAPTER FOUR: LOOKING INTO A POLICY AND SETTING YOUR FOCUS
4.1  Analysing the content of a policy 39
4.2  Defining your monitoring objectives 42
4.3  What kind of evidence do you need? 44
4.4 Choosing indicators 47
4.5  Establishing a baseline 50
Interaction  52
CHAPTER FIVE: ANALYSING POLICY BUDGETS
5.1  Budget basics  55
5.2  Resources for policy implementation 59
5.3 Budget analysis 61
5.4 Interaction  67

CHAPTER SIX: GATHERING EVIDENCE ON POLICY IMPLEMENTATION
6.1 Interviews  69
6.2 Surveys 72
6.3  Analysing survey data and other coded information 77
6.4  Workshops, focus group discussions and observation 84
Interaction  89
CONCLUSION: USING POLICY EVIDENCE TO ADVOCATE FOR CHANGE
Interaction  98
RESOURCES AND CONTACTS 100