Monitoring and Evaluation in Health and Social Development: Interpretive and Ethnographic Perspectives

Edited by Stephen Bell and Peter Aggleton. Routledge 2016. View on Google Books

“Interpretive researchers thus attempt to understand phenomena through accessing the meanings participants assign to them.”

“...interpretive and ethnographic approaches are side-lined in much contemporary evaluation work and current monitoring and evaluation practice remains heavily influenced by more positivist approaches.”

“Attribution is not the only purpose of impact evaluation.”

“Lack of familiarity with qualitative approaches by programme staff and donor agencies also influences the preference for quantitative methods in monitoring and evaluation work.”

Contents

1. Interpretive and Ethnographic Perspectives – Alternative Approaches to Monitoring and Evaluation Practice

2. The Political Economy of Evidence: Personal Reflections on the Value of the Interpretive Tradition and its Methods

3. Measurement, Modification and Transferability: Evidential Challenges in the Evaluation of Complex Interventions

4. What Really Works? Understanding the Role of ‘Local Knowledges’ in the Monitoring and Evaluation of a Maternal, Newborn and Child Health Project in Kenya

Part 2: Programme Design

5. Permissions, Vacations and Periods of Self-regulation: Using Consumer Insight to Improve HIV Treatment Adherence in Four Central American Countries

6. Generating Local Knowledge: A Role for Ethnography in Evidence-based Programme Design for Social Development

7. Interpretation, Context and Time: An Ethnographically Inspired Approach to Strategy Development for Tuberculosis Control in Odisha, India

8. Designing Health and Leadership Programmes for Young Vulnerable Women Using Participatory Ethnographic Research in Freetown, Sierra Leone

Part 3: Monitoring Processes

9. Using Social Mapping Techniques to Guide Programme Redesign in the Tingim Laip HIV Prevention and Care Project in Papua New Guinea

10. Pathways to Impact: New Approaches to Monitoring and Improving Volunteering for Sustainable Environmental Management

11. Ethnographic Process Evaluation: A Case Study of an HIV Prevention Programme with Injecting Drug Users in the USA

12. Using the Reality Check Approach to Shape Quantitative Findings: Experience from Mixed Method Evaluations in Ghana and Nepal

Part 4: Understanding Impact and Change

13. Innovation in Evaluation: Using SenseMaker to Assess the Inclusion of Smallholder Farmers in Modern Markets

14. The Use of the Rapid PEER Approach for the Evaluation of Sexual and Reproductive Health Programmes

15. Using Interpretive Research to Make Quantitative Evaluation More Effective: Oxfam’s Experience in Pakistan and Zimbabwe

16. Can Qualitative Research Rigorously Evaluate Programme Impact? Evidence from a Randomised Controlled Trial of an Adolescent Sexual Health Programme in Tanzania

Rick Davies Comment: [Though this may reflect my reading biases…] It seems that this strand of thinking has not been at the forefront of M&E attention for a long time (i.e. perhaps since the 1990s to early 2000s), so it is good to see this new collection of papers, by a large group of both old and new faces (33 authors in all).

Overview: An open source document clustering and search tool

Overview is an open-source tool originally designed to help journalists find stories in large numbers of documents, by automatically sorting them according to topic and providing a fast visualization and reading interface. It’s also used for qualitative research, social media conversation analysis, legal document review, digital humanities, and more. Overview does at least three things really well.

  • Find what you don’t even know to look for.
  • See broad trends or patterns across many documents.
  • Make exhaustive manual reading faster, when all else fails.

Search is a wonderful tool when you know what you’re trying to find — and Overview includes advanced search features. It’s less useful when you start with only a hunch or an anonymous tip, when there are many different ways to phrase what you’re looking for, or when you’re struggling with poor-quality material and OCR errors. By automatically sorting documents by topic, Overview gives you a fast way to see what you have.

In other cases you’re interested in broad patterns. Overview’s topic tree shows the structure of your document set at a glance, and you can tag entire folders at once to label documents according to your own category names. Then you can export those tags to create visualizations.

Rick Davies Comment: This service could be quite useful in various ways, including clustering sets of Most Significant Change (MSC) stories, micro-narratives from SenseMaker-type exercises, or collections of Twitter tweets found via a keyword search. For those interested in the details, and preferring transparency to apparent magic, Overview uses the k-means clustering algorithm, which is explained broadly here. One caveat: the processing of documents can take some time, so you may want to pop out for a cup of coffee while waiting. For those into algorithms, here is a healthy critique of careless use of k-means clustering, i.e. not paying attention to cases where its assumptions about the structure of the underlying data are inappropriate.
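For readers who prefer to see the mechanics rather than the magic, below is a minimal sketch in Python of the general technique (TF-IDF vectorisation followed by k-means clustering) using scikit-learn. This is not Overview’s actual code; the folder name and the choice of five clusters are illustrative assumptions only.

    # Minimal sketch of document clustering with TF-IDF + k-means (not Overview's own code).
    from pathlib import Path
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    paths = sorted(Path("documents").glob("*.txt"))  # hypothetical folder of text files
    texts = [p.read_text(encoding="utf-8", errors="ignore") for p in paths]

    # Represent each document as a weighted word-frequency (TF-IDF) vector
    vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(texts)

    # Group the documents into k topic clusters; k is chosen by the analyst
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(vectors)

    for path, label in zip(paths, labels):
        print(label, path.name)

Overview goes further by arranging the clusters into a hierarchical topic tree and attaching a fast reading interface, which a sketch like this does not attempt.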

It is the combination of keyword searching and automatic clustering that seems most useful to me…so far. Another good feature is the ability to label clusters of interest with one or more tags.

I have uploaded 69 blog postings from my Rick on the Road blog. If you want to see how Overview hierarchically clusters these documents, let me know; I will then enter your email address, which will let Overview give you access. So far it seems there is no simple way of sharing access (but I am inquiring).

Feminist Evaluation & Research: Theory & Practice

 

 

Sharon Brisolara PhD (Editor), Denise Seigart PhD (Editor), Saumitra SenGupta PhD (Editor)
Paperback: 368 pages, Publisher: The Guilford Press; Publication Date: March 28, 2014 | ISBN-10: 1462515207 | ISBN-13: 978-1462515202 | Edition: 1
Available on Amazon (though at an expensive US$43 for a paperback!)

No reviews are available online as yet, but links to them will be posted here when they become available.

CONTENTS

I. Feminist Theory, Research and Evaluation

1. Feminist Theory: Its Domain and Applications, Sharon Brisolara
2. Research and Evaluation: Intersections and Divergence, Sandra Mathison
3. Researcher/Evaluator Roles and Social Justice, Elizabeth Whitmore
4. A Transformative Feminist Stance: Inclusion of Multiple Dimensions of Diversity with Gender, Donna M. Mertens
5. Feminist Evaluation for Nonfeminists, Donna Podems

II. Feminist Evaluation in Practice

6. An Explication of Evaluator Values: Framing Matters, Kathryn Sielbeck-Mathes and Rebecca Selove
7. Fostering Democracy in Angola: A Feminist-Ecological Model for Evaluation, Tristi Nichols
8. Feminist Evaluation in South Asia: Building Bridges of Theory and Practice, Katherine Hay
9. Feminist Evaluation in Latin American Contexts, Silvia Salinas Mulder and Fabiola Amariles

III. Feminist Research in Practice

10. Feminist Research and School-Based Health Care: A Three-Country Comparison, Denise Seigart
11. Feminist Research Approaches to Empowerment in Syria, Alessandra Galié
12. Feminist Research Approaches to Studying Sub-Saharan Traditional Midwives, Elaine Dietsch
Final Reflection. Feminist Social Inquiry: Relevance, Relationships, and Responsibility, Jennifer C. Greene

 

The Science of Evaluation: A Realist Manifesto

Pawson, Ray. 2013. The Science of Evaluation: A Realist Manifesto. UK: Sage Publications. http://www.uk.sagepub.com

Chapter 1 is available as a pdf. Hopefully other chapters will also become available this way, because this 240-page book is expensive.

Contents

Preface: The Armchair Methodologist and the Jobbing Researcher
PART ONE: PRECURSORS AND PRINCIPLES
Precursors: From the Library of Ray Pawson
First Principles: A Realist Diagnostic Workshop
PART TWO: THE CHALLENGE OF COMPLEXITY – DROWNING OR WAVING?
A Complexity Checklist
Contested Complexity
Informed Guesswork: The Realist Response to Complexity
PART THREE: TOWARDS EVALUATION SCIENCE
Invisible Mechanisms I: The Long Road to Behavioural Change
Invisible Mechanisms II: Clinical Interventions as Social Interventions
Synthesis as Science: The Bumpy Road to Legislative Change
Conclusion: A Mutually Monitoring, Disputatious Community of Truth Seekers

Reviews

Multiple Pathways to Policy Impact: Testing an Uptake Theory with QCA

by Barbara Befani, IDS Centre for Development Impact, CDI Practice Paper No. 05, October 2013. Available as pdf.

Abstract: Policy impact is a complex process influenced by multiple factors. An intermediate step in this process is policy uptake, or the adoption of measures by policymakers that reflect research findings and recommendations. The path to policy uptake often involves activism, lobbying and advocacy work by civil society organisations, so an earlier intermediate step could be termed ‘advocacy uptake’, which would be the use of research findings and recommendations by Civil Society Organisations (CSOs) in their efforts to influence government policy. This CDI Practice Paper by Barbara Befani proposes a ‘broad-brush’ theory of policy uptake (more precisely of ‘advocacy uptake’) and then tests it using two methods: (1) a type of statistical analysis and (2) a variant of Qualitative Comparative Analysis (QCA). The pros and cons of both families of methods are discussed in this paper, which shows that QCA offers the power of generalisation whilst also capturing some of the complexity of middle-range explanation. A limited number of pathways to uptake are identified, which are at the same time moderately sophisticated (considering combinations of causal factors rather than additions) and cover a medium number of cases (40), allowing a moderate degree of generalisation. – See more at: http://www.ids.ac.uk/publication/multiple-pathways-to-policy-impact-testing-an-uptake-theory-with-qca

Rick Davies comment: What I like about this paper is the way it shows, quite simply, how measurements of the contribution of different possible causal conditions in terms of averages, and correlations between these, can be uninformative and even misleading. In contrast, a QCA analysis of the different configurations of causal conditions can be much more enlightening and easier to relate to what are often complex realities on the ground.
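To make the contrast concrete, here is a small Python sketch (mine, not the paper’s) of the configurational view: it builds a simple truth table from a binary dataset by grouping cases according to their combination of conditions and reporting how consistently each configuration is associated with uptake. The file name is hypothetical, and the column names simply follow the condition labels used in the discussion below.

    # Rough sketch of a QCA-style truth table from binary (0/1) data; not Befani's actual analysis.
    import pandas as pd

    df = pd.read_csv("uptake_data.csv")  # hypothetical file name
    conditions = ["ACCESS", "PREV", "CHAMP", "CONFLICT", "QUALITY"]  # labels follow the text below

    truth_table = (
        df.groupby(conditions)["UPTAKE"]
          .agg(n_cases="count", consistency="mean")  # consistency = share of cases with uptake
          .reset_index()
          .sort_values("n_cases", ascending=False)
    )
    print(truth_table)

Each row describes a whole configuration of conditions rather than one condition at a time, which is what makes it easier to relate back to individual cases.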

I have taken the liberty of re-analysing the fictional data set provided in the annex, using Decision Tree software (within RapidMiner). This is a means of triangulating the results of QCA analyses. It uses the same kind of data set and produces results which are comparable in structure, but the method of analysis is different. Shown below is a Decision Tree representing seven configurations of conditions that can be found in Befani’s data set of 40 cases. It makes use of four of the five conditions described in the paper. These are shown as nodes in the tree diagram.

[Figure: Decision Tree derived from the Befani (2013) data set; click on the image to enlarge for a clearer view.]

The 0 and 1 values on the various branches indicate whether the condition immediately above is present or not. The first configuration on the left says that if there is no ACCESS then research UPTAKE does not take place (12 cases at the red leaf). This is a statement of a sufficient cause. The branch on the right represents a configuration of three conditions, which says that where ACCESS to research is present, and recommendations are consistent with measures previously (PREV) recommended by the organisation, and where the research findings are disseminated within the organisation by a local ‘champion’ (CHAMP), then research UPTAKE does take place (8 cases at the blue leaf).

Overall, the findings shown in the Decision Tree model are consistent with the QCA analyses in terms of the number of configurations (seven) and the configurations that are associated with the largest number of cases (i.e. their coverage). However, there were small differences in the descriptions of two sets of cases where there was no uptake (red leaves). In the third branch (configuration) from the left above, the QCA analysis indicated that it was the presence of INTERNAL CONFLICT (different approaches to the same policy problem within the organisation) that played a role, rather than the presence of a (perhaps ineffectual) CHAMPION. In the third branch (configuration) from the right, the QCA analysis proposed a fourth necessary condition (QUALITY), in addition to the three shown in the Decision Tree. Here the Decision Tree seems the more parsimonious solution. However, in both sets of cases where differences in findings have occurred, it would make most sense to proceed with within-case investigations of the causal processes at work.

PS: Here is the dataset, in case anyone wants to play with it.
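For anyone who does want to play with it, here is a minimal sketch of the kind of Decision Tree re-analysis described above, using scikit-learn rather than RapidMiner; again, the file and column names are assumptions carried over from the earlier sketch in this post.

    # Sketch of a Decision Tree re-analysis with scikit-learn (the analysis above used RapidMiner).
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    df = pd.read_csv("uptake_data.csv")  # hypothetical file name
    conditions = ["ACCESS", "PREV", "CHAMP", "CONFLICT", "QUALITY"]  # illustrative condition labels

    tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
    tree.fit(df[conditions], df["UPTAKE"])

    # Each root-to-leaf path is a configuration of present/absent conditions,
    # which is what makes the printed rules comparable to a QCA solution.
    print(export_text(tree, feature_names=conditions))

Reading the printed rules from root to leaf gives the configurations, which can then be compared with the QCA solution and followed up with within-case investigation.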

Webinar series on evaluation: The beginnings of a list

To be extended and updated, with your help!

  • American Evaluation Association: Coffee Break Demonstrations are 20-minute webinars designed to introduce audience members to new tools, techniques, and strategies in the field of evaluation.
  • InterAction: Impact Evaluation Guidance Note and Webinar Series: 8 webinars covering Introduction to Impact Evaluation, Linking Monitoring and Evaluation to Impact Evaluation, Introduction to Mixed Methods in Impact Evaluation, and Use of Impact Evaluation Results
  • MEASURE Evaluation webinars: 20 webinars since January 2012
  • Claremont Evaluation Center Webinar Series: “The Claremont Evaluation Center is pleased to offer a series of webinars on the discipline and profession of evaluation. This series is free and available to anyone across the globe with an internet connection.”
  • MY M&E website: Webinars on Equity-focused evaluations (17 webinars), IOCE webinar series on evaluation associations, Emerging practices in development evaluation (6 webinars), Developing capacities for country M&E systems (16 webinars), Country-led M&E Systems (6 webinars)

Plus some guidance on developing and evaluating webinars

ICAI Seeks Views on Revised Evaluation Framework

 

“In our first report, ICAI’s Approach to Effectiveness and Value for Money, we set out an evaluation framework, consisting of 22 questions under 4 guiding criteria (objectives, delivery, impact and learning), to guide our lines of enquiry in reviews. In the light of our experience to date in carrying out our reports, we have reviewed this framework. The revised framework is available at this link: ICAI revised evaluation framework

We are now entering a period of consultation on the revised framework which will run until 24 May 2013. If you have any comments or views, please email enquiries@icai.independent.gov.uk or post them to: The Secretariat, Independent Commission for Aid Impact, Dover House, 66 Whitehall, London SW1A 2AU”

AEA resources on Social Network Analysis and Evaluation

American Evaluation Association (AEA) Social Network Analysis (SNA) Topical Interest Group (TIG) resources

AEA365 | A Tip-a-Day by and for Evaluators

Who Counts? The power of participatory statistics

Edited By Jeremy Holland, published by Practical Action. 2013

(from the Practical Action website) “Local people can generate their own numbers – and the statistics that result are powerful for themselves and can influence policy. Since the early 1990s there has been a quiet tide of innovation in generating statistics using participatory methods. Development practitioners are supporting and facilitating participatory statistics from community-level planning right up to sector and national-level policy processes. Statistics are being generated in the design, monitoring and evaluation, and impact assessment of development interventions. Through chapters describing policy, programme and project research, Who Counts? provides impetus for a step change in the adoption and mainstreaming of participatory statistics within international development practice. The challenge laid down is to foster institutional change on the back of the methodological breakthroughs and philosophical commitment described in this book. The prize is a win–win outcome in which statistics are a part of an empowering process for local people and part of a real-time information flow for those aid agencies and government departments willing to generate statistics in new ways. Essential reading for researchers and students of international development as well as policy-makers, managers and practitioners in development agencies.”
Table of Contents
1 Introduction. Participatory statistics: a ‘win–win’ for international development, Jeremy Holland
PART I Participatory statistics and policy change
2 Participatory 3-dimensional modelling for policy and planning: the practice and the potential, Giacomo Rambaldi
3 Measuring urban adaptation to climate change: experiences in Kenya and Nicaragua, Caroline Moser and Alfredo Stein
4 Participatory statistics, local decision-making, and national policy design: Ubudehe community planning in Rwanda, Ashish Shah
5 Generating numbers with local governments for decentralized health sector policy and planning in the Philippines, Rose Marie R. Nierras
6 From fragility to resilience: the role of participatory community mapping, knowledge management, and strategic planning in Sudan, Margunn Indreboe Alshaikh
PART II Who counts reality? Participatory statistics in monitoring and evaluation
7 Accountability downwards, count-ability upwards: quantifying empowerment outcomes from people’s own analysis in Bangladesh, Dee Jupp with Sohel Ibn Ali
8 Community groups monitoring their impact with participatory statistics in India: reflections from an international NGO collective, Bernward Causemann, Eberhard Gohl, C. Rajathi, A. Susairaj, Ganesh Tantry and Srividhya Tantry
9 Scoring perceptions of services in the Maldives: instant feedback and the power of increased local engagement, Nils Riemenschneider, Valentina Barca and Jeremy Holland
10 Are we targeting the poor? Lessons with participatory statistics in Malawi, Carlos Barahona
PART III Statistics for participatory impact assessment
11 Participatory impact assessment in drought policy contexts: lessons from southern Ethiopia, Dawit Abebe and Andy Catley
12 Participatory impact assessment: the ‘Starter Pack Scheme’ and sustainable agriculture in Malawi, Elizabeth Cromwell, Patrick Kambewa, Richard Mwanza and Rowland Chirwa with KWERA Development Centre
13 Participatory impact assessments of farmer productivity programmes in Africa, Susanne Neubert
Afterword, Robert Chambers
Practical and accessible resources
Index

Real Time Monitoring for the Most Vulnerable

Edited by Greeley, M., Lucas, H. and Chai, J. IDS Bulletin 44.2. Published by IDS.

Purchase a print copy here.

View abstracts online and subscribe to the IDS Bulletin.

“Growth in the use of real time digital information for monitoring has been rapid in developing countries across all the social sectors, and in the health sector has been remarkable. Commonly these Real Time Monitoring (RTM) initiatives involve partnerships between the state, civil society, donors and the private sector. There are differences between partners in understanding of objectives, and divergence occurs due to adoption of specific technology-driven approaches and because profit-making is sometimes part of the equation.

With the swarming, especially of pilot mHealth initiatives, in many countries there is risk of chaotic disconnects, of confrontation between rights and profits, and of overall failure to encourage appropriate alliances to build sustainable and effective national RTM systems. What is needed is a country-led process for strengthening the quality and equity sensitivity of real-time monitoring initiatives. We propose the development of an effective learning and action agenda centred on the adoption of common standards.

IDS, commissioned and guided by UNICEF Division of Policy and Strategy, has carried out a multi-country assessment of initiatives that collect high frequency and/or time-sensitive data on risk, vulnerability and access to services among vulnerable children and populations and on the stability and security of livelihoods affected by shocks. The study, entitled Real Time Monitoring for the Most Vulnerable (RTMMV), began with a desk review of existing RTM initiatives and was followed up with seven country studies (Bangladesh, Brazil, Romania, Senegal, Uganda, Vietnam and Yemen) that further explored and assessed promising initiatives through field-based review and interactive stakeholder workshops. This IDS Bulletin brings together key findings from this research.”

See the full list of papers on this topic at the IDS Bulletin: http://www.ids.ac.uk/publication/real-time-monitoring-for-the-most-vulnerable