Social Network Analysis for [M&E of] Program Implementation

Valente, T.W., Palinkas, L.A., Czaja, S., Chu, K.-H., Brown, C.H., 2015. Social Network Analysis for Program Implementation. PLoS ONE 10, e0131712. doi:10.1371/journal.pone.0131712 Available as pdf

“Abstract: This paper introduces the use of social network analysis theory and tools for implementation research. The social network perspective is useful for understanding, monitoring, influencing, or evaluating the implementation process when programs, policies, practices, or principles are designed and scaled up or adapted to different settings. We briefly describe common barriers to implementation success and relate them to the social networks of implementation stakeholders. We introduce a few simple measures commonly used in social network analysis and discuss how these measures can be used in program implementation. Using the four stage model of program implementation (exploration, adoption, implementation, and sustainment) proposed by Aarons and colleagues [1] and our experience in developing multi-sector partnerships involving community leaders, organizations, practitioners, and researchers, we show how network measures can be used at each stage to monitor, intervene, and improve the implementation process. Examples are provided to illustrate these concepts. We conclude with expected benefits and challenges associated with this approach”.

Selected quotes:

“Getting evidence-based programs into practice has increasingly been recognized as a concern in many domains of public health and medicine [4, 5]. Research has shown that there is a considerable lag between an invention or innovation and its routine use in a clinical or applied setting [6]. There are many challenges in scaling up proven programs so that they reach the many people in need [7–9].”

“Partnerships are vital to the successful adoption, implementation and sustainability of successful programs. Indeed, evidence-based programs that have progressed to implementation and translation stages report that effective partnerships with community-based, school, or implementing agencies are critical to their success [11, 17, 18]. Understanding which partnerships can be created and maintained can be accomplished via social network analysis.”
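
As a purely illustrative aside (not drawn from the paper), the “simple measures” the abstract refers to are typically things like degree and betweenness centrality. A minimal sketch, assuming a hypothetical partnership network and the Python networkx library:

```python
# Illustrative only: a hypothetical partnership network of implementation
# stakeholders, with two simple network measures computed using networkx.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("research_team", "health_dept"),
    ("research_team", "NGO_A"),
    ("health_dept", "NGO_A"),
    ("health_dept", "clinic_1"),
    ("health_dept", "clinic_2"),
    ("NGO_A", "community_leaders"),
])

# Degree centrality: how many direct partners each stakeholder has (normalised)
print(nx.degree_centrality(G))

# Betweenness centrality: who sits on the paths linking otherwise
# unconnected stakeholders (potential brokers or bottlenecks)
print(nx.betweenness_centrality(G))
```

In a partnership like this, a stakeholder with high betweenness is one whose loss could fragment the network, which is the kind of diagnostic that network measures can support while implementation is being monitored.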

The median impact narrative

Rick Davies comment: The text below is an excerpt from a longer blog posting found here: Impact as narrative, by Bruce Wydick

I want to suggest one particular tool that I will call the “median impact narrative,” which (though not precisely the average–because the average typically does not factually exist) recounts the narrative of the one or a few of the middle-impact subjects in a study. So instead of highlighting the outlier, Juana, who has built a small textile empire from a few microloans, we conclude with a paragraph describing Eduardo, who after two years of microfinance borrowing, has dedicated more hours to growing his carpentry business and used microloans to weather two modest-size economic shocks to his household, an illness to his wife and the theft of some tools. If one were to choose the subject for the median impact narrative rigorously it could involve choosing the treated subject whose realized impacts represent the closest Euclidean distance (through a weighting of impact variables via the inverse of the variance-covariance matrix) to the estimated ATTs.

Consider, for example, the “median impact narrative” of the outstanding 2013 Haushofer and Shapiro study of GiveDirectly, a study finding an array of substantial impacts from unconditional cash transfers in Kenya. The median impact narrative might recount the experience of Joseph, a goat herder with a family of six who received $1100 in five electronic cash transfers. Joseph and his wife both have only two years of formal schooling and have always struggled to make ends meet with their four children. At baseline, Joseph’s children went to bed hungry an average of three days a week. Eighteen months after receiving the transfers, his goat herd increased by 51%, bringing added economic stability to his household. He also reported a 30% reduction in his children going to bed hungry in the period before the follow-up survey, and a 42% reduction in number of days his children went completely without food. Tests of his cortisol indicated that Joseph experienced a reduction in stress, about 0.14 standard deviations relative to same difference in the control group. This kind of narrative on the median subject from this particular study cements a truthful image of impact into the mind of a reader.

A false dichotomy has emerged between the use of narrative and data analysis; either can be equally misleading or helpful in conveying truth about causal effects. As researchers begin to incorporate narrative into their scientific work, it will begin to create a standard for the appropriate use of narrative by non-profits, making it easier to insist that narratives present an unbiased picture that represents a truthful image of average impacts.”
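
As an aside on the mechanics: Wydick’s selection rule, a Euclidean distance weighted by the inverse of the variance-covariance matrix, is what is usually called a Mahalanobis distance between each treated subject’s realised impacts and the estimated ATTs (average treatment effects on the treated). A minimal sketch of that calculation, using entirely made-up data in place of a real study’s impact variables:

```python
# Illustrative sketch: pick the treated subject whose realised impacts lie
# closest (in Mahalanobis terms) to the estimated ATT vector.
import numpy as np

rng = np.random.default_rng(0)
impacts = rng.normal(size=(200, 3))   # realised impacts: 200 subjects x 3 impact variables (invented)
att = impacts.mean(axis=0)            # stand-in for the estimated ATTs

cov_inv = np.linalg.inv(np.cov(impacts, rowvar=False))
diffs = impacts - att
dists = np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))  # distance per subject

median_case = int(np.argmin(dists))   # the candidate "median impact" subject
print(median_case, dists[median_case])
```

The subject with the smallest distance would then be the one whose story is told as the “median impact narrative”.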

Some of the attached readers’ comments are also of interest, e.g.:

The basic point is a solid and important one: sampling strategy matters to qualitative work and for understanding what really happened for a range of people.

One consideration for sampling is that the same observables (independent vars) that drive sub-group analyses can also be used to help determine a qualitative sub-sample (capturing medians, outliers in both directions, etc).

A second consideration, in the spirit of Lieberman’s call for nested analyses (or other forms of linked and sequential qual-quant work), is that the results of quantitative work can be used to inform the sampling of later qualitative work, targeting cases that represent the range of outcome values.”

Read more on this topic from this reader here http://blogs.worldbank.org/publicsphere/1-2014

Rick Davies comment: If the argument for using median impact narratives is accepted, the interesting question for me is then: how do we identify median cases? Bruce Wydick seems to suggest above that this would be done by looking at impact measures and finding a median case among those (Confession: I don’t fully understand his reference to Euclidean distance and ATTs). I would argue that we need to look at median-ness not only in impacts, but also in other attributes of the cases, including the context and interventions experienced by each case. One way of doing this is to measure and use Hamming distance as a measure of similarity between cases, an idea I have discussed elsewhere. This can be done with very basic categorical data, as well as with variable data.
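
A rough sketch of that Hamming-distance idea, assuming each case is described by a handful of categorical attributes (the attribute names and values below are invented for illustration): the “median” case is taken here as the one with the smallest average Hamming distance to all other cases.

```python
# Illustrative sketch: "median-ness" across categorical case attributes
# (context, intervention and impact descriptors), using Hamming distance.

cases = {
    "Case_A": ["urban", "cash", "female_headed", "improved"],
    "Case_B": ["rural", "cash", "female_headed", "improved"],
    "Case_C": ["rural", "in_kind", "male_headed", "no_change"],
    "Case_D": ["rural", "cash", "male_headed", "improved"],
    "Case_E": ["urban", "in_kind", "female_headed", "no_change"],
}

def hamming(a, b):
    """Count the attributes on which two cases differ."""
    return sum(x != y for x, y in zip(a, b))

avg_distance = {
    name: sum(hamming(attrs, other) for other_name, other in cases.items()
              if other_name != name) / (len(cases) - 1)
    for name, attrs in cases.items()
}

median_case = min(avg_distance, key=avg_distance.get)
print(avg_distance)
print("Most 'median' case:", median_case)
```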

Postscript: Some readers might ask, “Why not simply choose sources of impact narratives from a randomised sample of cases, as you might do with quantitative data?” Well, with a random sample of quantitative data you can average the responses, but you cannot do that with a random sample of narrative data: there is no way of “averaging” the content of a set of texts. You would end up with a set of stories that readers might then themselves “average out” into one overall impression in their own minds, but that would not be a very transparent or consistent process.

What methods may be used in impact evaluations of humanitarian assistance?

Jyotsna Puri, Anastasia Aladysheva, Vegard Iversen, Yashodhan Ghorpade, Tilman Brück, International Initiative for Impact Evaluation (3ie) Working Paper 22, December 2014. Available as pdf

“Humanitarian crises are complex situations where the demand for aid has traditionally far exceeded its supply. The humanitarian assistance community has long asked for better evidence on how each dollar should be effectively spent. Impact evaluations of humanitarian assistance can help answer these questions and also respond to the increasing call to estimate the impact of humanitarian assistance and supplement the rich tradition for undertaking real-time and process evaluations in the sector. This working paper gives an overview of the methodological techniques that can be used to address some of the important questions in this area, while simultaneously considering the special circumstances and constraints associated with humanitarian assistance.”

Executive summary
1. Introduction
2. Defining and categorising humanitarian emergencies and humanitarian action
3. Defining and discussing high-quality, theory-based impact evaluations 
3.1 Various forms of evaluations
3.2 Impact evaluations in non-emergency settings
3.3 Impact evaluations in emergency settings
3.4 Objectives of impact evaluations
3.5 Methods for impact evaluations
4. A conceptual framework for using impact evaluations in humanitarian emergencies
5. Impact evaluations of humanitarian assistance: a review of the literature
5.1 Emergency relief
5.2 Recovery and resilience
5.3 General discussion on methods used by studies
6. Using appropriate methods to overcome ethical concerns
7. Case studies
Case study 1: Multiple interventions or a multi-agency intervention
Case study 2: Unanticipated emergencies
Case study 3: A complex emergency involving flooding and conflict
Case study 4: A protracted emergency – internally displaced peoples in DRC
Case study 5: Using impact evaluations to estimate the effect of assistance after typhoons in the Philippines
Case study 6: Using impact evaluations to estimate the effect of assistance in the recovery phase in the absence of ex ante planning
8. Conclusions 
Appendix A: Table on impact evaluations of humanitarian relief

Case-Selection [for case studies]: A Diversity of Methods and Criteria

Gerring, J., Cojocaru, L., 2015. Case-Selection: A Diversity of Methods and Criteria. January 2015. Available as pdf

Excerpt: “Case-selection plays a pivotal role in case study research. This is widely acknowledged, and is implicit in the practice of describing case studies by their method of selection – typical, deviant, crucial, and so forth. It is also evident in the centrality of case-selection in methodological work on the case study, as witnessed by this symposium. By contrast, in large-N cross-case research one would never describe a study solely by its method of sampling. Likewise, sampling occupies a specialized methodological niche within the literature and is not front-and-center in current methodological debates. The reasons for this contrast are revealing and provide a fitting entrée to our subject.

First, there is relatively little variation in methods of sample construction for cross-case research. Most samples are randomly sampled from a known population or are convenience samples, employing all the data on the subject that is available. By contrast, there are myriad approaches to case-selection in case study research, and they are quite disparate, offering many opportunities for researcher bias in the selection of cases (“cherry-picking”).

Second, there is little methodological debate about the proper way to construct a sample in cross-case research. Random sampling is the gold standard and departures from this standard are recognized as inferior. By contrast, in case study research there is no consensus about how best to choose a case, or a small set of cases, for intensive study.

Third, the construction of a sample and the analysis of that sample are clearly delineated, sequential tasks in cross-case research. By contrast, in case study research they blend into one another. Choosing a case often implies a method of analysis, and the method of analysis may drive the selection of cases.

Fourth, because cross-case research encompasses a large sample – drawn randomly or incorporating as much evidence as is available – its findings are less likely to be driven by the composition of the sample. By contrast, in case study research the choice of a case will very likely determine the substantive findings of the case study.

Fifth, because cross-case research encompasses a large sample, claims to external validity are fairly easy to evaluate, even if the sample is not drawn randomly from a well-defined population. By contrast, in case study research it is often difficult to say what a chosen case is a case of – referred to as a problem of “casing.”

Finally, taking its cue from experimental research, methodological discussion of cross-case research tends to focus on issues of internal validity, rendering the problem of case-selection less relevant. Researchers want to know whether a study is true for the studied sample. By contrast, methodological discussion of case study research tends to focus on issues of external validity. This could be a product of the difficulty of assessing case study evidence, which tends to demand a great deal of highly specialized subject expertise and usually does not draw on formal methods of analysis that would be easy for an outsider to assess. In any case, the effect is to further accentuate the role of case-selection. Rather than asking whether the case is correctly analyzed readers want to know whether the results are generalizable, and this leads back to the question of case-selection.”

Other recent papers on case selection methods:

Herron, M.C., Quinn, K.M., 2014. A Careful Look at Modern Case Selection Methods. Sociological Methods & Research
Nielsen, R.A., 2014. Case Selection via Matching. http://www.mit.edu/~rnielsen/Case%20Selection%20via%20Matching.pdf

Participatory Approaches (to impact evaluation – a pluralist view)

Methodological Briefs: Impact Evaluation No. 5, by Irene Guijt (found via the Better Evaluation website). Available as pdf.

“This guide, written by Irene Guijt for UNICEF, looks at the use of participatory approaches in impact evaluation… By asking the question, ‘Who should be involved, why and how?’ for each step of an impact evaluation, an appropriate and context-specific participatory approach can be developed”

Contents

  • Participatory approaches: a brief description
  • When is it appropriate to use this method?
  • How to make the most of participatory approaches
  • Ethical concerns
  • Which other methods work well with this one?
  • Participation in analysis and feedback of results
  • Examples of good practices and challenges

Rick Davies comment: I like the pluralist approach this paper takes towards the use of participatory approaches. It is practically oriented rather than driven by an ideological belief that people’s participation must always be maximised. That said, I did find Table 1, “Types of participation by programme participants in impact evaluation”, out of place, because it is a typology built on a very simple linear scale, with fairly obvious indications not only of what kinds of participation are possible but also of which ones are more desirable. On the other hand, I thought Box 3 was really useful, because it spelled out a number of useful questions to ask about possible forms of participation at each stage of the evaluation design, implementation and review process. It is worth noting that, given the 22 questions, and assuming for argument’s sake that each had a binary answer, there are at least 2 to the power of 22 different ways of building participation into an evaluation, i.e. 4,194,304 ways! That seems a bit closer to reality to me than the earlier classification of four types in Table 1.

I think the one area here where I would like more detail and examples is participatory approaches to the analysis of data: not the collection of data, but its analysis. There is some discussion on page 11 about causality, which it would be great to see further developed. I often feel that this is an area of participatory practice where a yellow post-it note might as well be placed, saying “here a miracle occurs”.

The use of Data Envelopment Analysis to calculate priority scores in needs assessments

by Aldo Benini, July 2015

Priority indices have grown popular for identifying communities most affected by disasters. Responders have produced a number of formats and formulas. Most of these combine indicators using weights and aggregations decided by analysts, and often the rationales for these are weak. In such situations, a data-driven methodology may be preferable. This note discusses the suitability of different approaches and offers a basic tutorial on a DEA freeware application that works closely with MS Excel. The demo data are from the response to Typhoon Haiyan in the Philippines in 2013. – Mirrored from the Assessment Capacities Project (ACAPS) website with permission.

Rick Davies comment: I have dipped into this paper and resolved to learn more about Data Envelopment Analysis. It looks like it could be quite useful.
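
For readers wanting a feel for the mechanics before opening the tutorial, below is a minimal sketch of a basic DEA efficiency calculation. It uses the input-oriented CCR model in multiplier form, with a single constant input and the severity indicators treated as outputs (a “benefit of the doubt” set-up), solved with scipy. The indicator values are invented, and this is not a reproduction of the ACAPS/Benini workflow or its Excel-linked freeware.

```python
# Illustrative DEA sketch: each community is a decision-making unit with a
# single unit input; its severity indicators are treated as outputs, so the
# CCR efficiency score can be read as a data-driven priority score.
import numpy as np
from scipy.optimize import linprog

outputs = np.array([            # rows: communities, cols: severity indicators (invented)
    [0.6, 0.3, 0.8],
    [0.2, 0.9, 0.4],
    [0.7, 0.7, 0.5],
    [0.1, 0.2, 0.3],
])
inputs = np.ones((outputs.shape[0], 1))   # single constant input for every unit

n, s = outputs.shape
m = inputs.shape[1]
scores = []
for o in range(n):
    # decision variables: output weights u (length s), then input weights v (length m)
    c = np.concatenate([-outputs[o], np.zeros(m)])            # maximise u.y_o
    A_eq = np.concatenate([np.zeros(s), inputs[o]]).reshape(1, -1)
    b_eq = [1.0]                                              # normalise v.x_o = 1
    A_ub = np.hstack([outputs, -inputs])                      # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    scores.append(-res.fun)

print(np.round(scores, 3))   # 1.0 = on the frontier (highest priority)
```

A score of 1.0 means that, even under the indicator weighting most favourable to that community, no other community scores higher on the combined indicators, which is what allows the scores to be read as data-driven priorities rather than analyst-weighted ones.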

Qualitative Comparative Analysis – A Rigorous Qualitative Method for Assessing Impact

A Coffey How-To note, June 2015, by Carrie Baptist and Barbara Befani. Available as pdf

Summary

  • QCA is a case-based method which allows evaluators to identify different combinations of factors that are critical to a given outcome, in a given context. This allows for a more nuanced understanding of how different combinations of factors can lead to success, and of the influence context can have on success.
  • QCA allows evaluators to test theories of change and answer the question ‘what works best, why and under what circumstances’ in a way that emerges directly from the empirical analysis, can be replicated by other researchers, and is generalizable to other contexts.
  • While it isn’t appropriate for use in all circumstances and has limitations, QCA also has certain unique strengths – including qualitatively assessing impact and identifying multiple pathways to achieving change – which make it a valuable addition to the evaluation toolkit.

Rick Davies comment: The availability of this sort of explanatory and introductory note is very timely, given the increased use of QCA for evaluation purposes. My only quibble with this how-to note is that the heart of the QCA process seems to have been left undescribed (see step 10, page 6), like the proverbial black box. For those looking for a more detailed exposition, keep an eye out for the extensive guide now being prepared by Barbara Befani, with support from the Expert Group for Aid Studies in Sweden (more details here). There is also an introductory posting on QCA on the Better Evaluation website.
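
To give a flavour of what goes on inside that “black box”, here is a toy sketch of the truth-table stage of crisp-set QCA: grouping cases by their configuration of binary conditions and checking how consistently each configuration is associated with the outcome. The subsequent Boolean minimisation of such a table is what yields the “combinations of factors” that QCA reports. The condition names and case scores below are invented, and this is illustrative only, not a description of the Coffey note’s own procedure.

```python
# Illustrative crisp-set QCA truth-table construction.
from collections import defaultdict

cases = {
    # case: (funding, local_partner, training) -> outcome (all hypothetical)
    "Case1": ((1, 1, 0), 1),
    "Case2": ((1, 1, 0), 1),
    "Case3": ((1, 0, 1), 0),
    "Case4": ((0, 1, 1), 1),
    "Case5": ((1, 0, 1), 1),
    "Case6": ((0, 0, 1), 0),
}

# Group cases by their configuration of conditions.
rows = defaultdict(list)
for name, (config, outcome) in cases.items():
    rows[config].append(outcome)

# For each observed configuration, report how consistently it leads to the outcome.
print("funding local_partner training  n  consistency")
for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(f"{config[0]:^7} {config[1]:^13} {config[2]:^8} {len(outcomes):^3} {consistency:.2f}")
```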

See also this new listing of uses of QCA for evaluation purposes: http://www.compasss.org/bibliography/evalua.htm

Category refinement in humanitarian needs assessments

Moderate Need, Acute Need – Valid Categories for Humanitarian Needs Assessments? Evidence from a recent needs assessment in Syria
26 March 2015, by Aldo Benini

Needs assessments in crises seek to establish, among other elements, the number of persons in need (PiN). “Persons in need” is a broad and fuzzy concept; the estimates that local key informants provide are highly uncertain. Would refining the categories of persons in need lead to more reliable estimates?

The Syria Multi-Sectoral Needs Assessment (MSNA), in autumn 2014, provided PiN estimates for 126 of the 270 sub-districts of the country. It differentiated between persons in moderate and those in acute need. “Moderate Need, Acute Need – Valid Categories for Humanitarian Needs Assessments?” tests the information value of this distinction. The results affirm that refined PiN categories can improve the measurement of unmet needs under conditions that rarely permit exact classification. The note ends with some technical recommendations for future assessments.”

Impact Evaluation: A Guide for Commissioners and Managers

Prepared by Elliot Stern for the Big Lottery Fund, Bond, Comic Relief and the Department for International Development, May 2015.
Available as pdf

1. Introduction and scope
2. What is impact evaluation?
Defining impact and impact evaluation
Linking cause and effect
Explanation and the role of ‘theory’
Who defines impact?
Impact evaluation and other evaluation approaches
Main messages

3. Frameworks for designing impact evaluation
Designs that support causal claims
The design triangle
Evaluation questions
Evaluation designs
Programme attributes
Main messages

4. What different designs and methods can do
Causal inference: linking cause and effect
Main types of impact evaluation design
The contemporary importance of the ‘contributory’ cause
Revisiting the ‘design triangle’
Main messages

5. Using this guide
Drawing up terms of reference and assessing proposals for impact evaluations
Assessing proposals
Quality of reports and findings
Strengths of conclusions and recommendations
Using findings from impact evaluations
Main messages
Annex


Beneficiary Feedback in Evaluation

Produced for DFID Evaluation Department by Lesley Groves, February 2015. Available as a pdf

The purpose of this paper is to analyse current practice of beneficiary feedback in evaluation and to stimulate further thinking and activity in this area. The Terms of Reference required a review of practice within DFID and externally. This is not a practical guide or How to Note, though it does make some recommendations on how to improve the practice of beneficiary feedback in evaluation. The paper builds on current UK commitments to increasing the voice and influence of beneficiaries in aid programmes. It has been commissioned by the Evaluation Department of the UK Department for International Development (DFID).

Evidence base

The paper builds on:

  • A review of over 130 documents (DFID and other development agencies), including policy and practice reports, evaluations and their Terms of Reference, web pages, blogs, journal articles and books;
  • Interviews with 36 key informants representing DFID, INGOs, evaluation consultants/consultancy firms, and a focus group with 13 members of the Beneficiary Feedback Learning Partnership;
  • Contributions from 33 practitioners via email and through a blog set up for the purpose of this research (https://beneficiaryfeedbackinevaluationandresearch.wordpress.com/); and
  • Analysis of 32 evaluations containing examples of different types of beneficiary feedback.

It is important to note that the research process revealed that the literature on beneficiary feedback in evaluation is scant. Yet it also revealed a strong appetite for developing a shared understanding and for building on existing, limited practice.

Contents
Executive Summary
Introduction
Part A: A Framework for a Beneficiary Feedback Approach to Evaluation
A.1 Drawing a line in the sand: defining beneficiary feedback in the context of evaluation
A.1.1 Current use of the term “beneficiary feedback”
A.1.2 Defining “Beneficiary”
A.1.3 Defining “Feedback”
A.2 Towards a framework for applying a “beneficiary feedback” approach in the context of evaluation
A.3 A working definition of beneficiary feedback in evaluation
Part B: Situating Beneficiary Feedback in Current Evaluation Practice
B.1 Situating beneficiary feedback in evaluation within DFID systems and evaluation standards
B.1.1 Applying a beneficiary feedback approach to evaluation within DFID evaluations
B.1.2 Inclusion of beneficiary feedback in evaluation policies, standards and principles
B.2 Learning from experience: Assessment of current practice
B.2.1 Existing analysis of current performance of beneficiary feedback in the development sector generally
B.2.2 Specific examples of beneficiary feedback in evaluation
Part C: Enhancing Evaluation Practice through a Beneficiary Feedback Approach
C.1 How a beneficiary feedback approach can enhance evaluation practice
C.2 Checklists for evaluation commissioners and practitioners
C.3 What are the obstacles to beneficiary feedback in evaluation and how can they be overcome?

Postscript (May 2015): See also the associated checklists on this blog page: Downloadable Checklist for Commissioners and Evaluators

Rick Davies Comment: I am keen on the development and use of checklists, for a number of reasons. They encourage systematic attention to a range of relevant issues and make lack of attention to any of these more visible and accountable. But I also like Scriven’s comments on checklists:

“The humble checklist, while no one would deny its utility in evaluation and elsewhere, is usually thought to fall somewhat below the entry level of what we call a methodology, let alone a theory. But many checklists used in evaluation incorporate a quite complex theory, or at least a set of assumptions, which we are well advised to uncover; and the process of validating an evaluative checklist is a task calling for considerable sophistication. Indeed, while the theory underlying a checklist is less ambitious than the kind that we normally call a program theory, it is often all the theory we need for an evaluation”

Scriven’s comments prompt me to ask, in the case of Lesley Groves’ checklists: if the attributes listed in the checklists are what we ideally should find in an evaluation, and many or all are in fact found to be present, then what outcome(s) might we expect to see associated with these features of an evaluation? On page 23 of her report she lists four possible desirable outcomes:

  • Generation of more robust and rigorous evaluations, particularly to ensure unintended and negative consequences are understood;
  • Reduction of participation fatigue and beneficiary burden through processes that respect participants and enable them to engage in meaningful ways;
  • Supporting of development and human rights outcomes;
  • Making programmes more relevant and responsive.

With this list we are on our way to having a testable theory of how beneficiary feedback can improve evaluations.

The same chapter of the report goes even further, identifying the different types of outcomes that could be expected from different combinations of usages of beneficiary feedback, in a four by four matrix (see page 27).
