An introduction to systematic reviews

Book published in March 2012 by Sage. Authors: David Gough, Sandy Oliver and James Thomas

Read Chapter One pdf: Introducing systematic reviews

Contents:

1. Introducing Systematic Reviews David Gough, Sandy Oliver and James Thomas
2. Stakeholder Perspectives and Participation in Reviews Rebecca Rees and Sandy Oliver
3. Commonality and Diversity in Reviews David Gough and James Thomas
4. Getting Started with a Review Sandy Oliver, Kelly Dickson, and Mark Newman
5. Information Management in Reviews Jeff Brunton and James Thomas
6. Finding Relevant Studies Ginny Brunton, Claire Stansfield & James Thomas
7. Describing and Analysing Studies Sandy Oliver and Katy Sutcliffe
8. Quality and Relevance Appraisal Angela Harden and David Gough
9. Synthesis: Combining results systematically and appropriately James Thomas, Angela Harden and Mark Newman
10. Making a Difference with Systematic Reviews Ruth Stewart and Sandy Oliver
11. Moving Forward David Gough, Sandy Oliver and James Thomas

Analytic Rigour in Information Analysis – Lessons from the intelligence community?

This post was prompted by a blog posting by Irene Guijt about a presentation by Michael Patton at a workshop in Wageningen last week (which I also attended). The quotes below come from a webpage about Zelik, Patterson and Woods’ Rigour Attribute Model, which outlines eight attributes of a rigorous process of information analysis, along with guidance on recognising the extent to which each criterion has been met.

The model is summarised in this Analytical Rigor Poster (PDF)

Quotes from the website

“The proliferation of data accessibility has exacerbated the risk of shallowness in information analysis, making it increasingly difficult to tell when analysis is sufficient for making decisions or changing plans, even as it becomes increasingly easy to find seemingly relevant data. In addressing the risk of shallow analysis, the assessment of rigor emerges as an approach for coping with this fundamental uncertainty, motivating the need to better define the concept of analytical rigor.”

“Across information analysis domains, it is often difficult to recognize when analysis is inadequate for a given context. A better understanding of rigor is an analytic broadening check to be leveraged against this uncertainty. The purpose of this research is to refine the understanding of rigor, exploring the concept within the domain of intelligence analysis. Nine professional intelligence analysts participated in a study of how analytic rigor is judged. The results suggest a revised definition of rigor, reframing it as an emergent multi-attribute measure of sufficiency rather than as a measure of process deviation. Based on this insight, a model for assessing rigor was developed, identifying eight attributes of rigorous analysis. Finally, an alternative model of briefing interactions is proposed that integrates this framing of rigor into an applied context. This research, although specific in focus to intel analysis, shows the potential to generalize across forms of information analysis.”
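As a purely illustrative sketch of how the model's low/moderate/high ratings might be handled in practice, the Python snippet below records a rating for each of the eight attributes and reports the weakest ones rather than averaging them, in the spirit of rigor as an emergent judgement of sufficiency. The attribute names used here are placeholders drawn from common summaries of the Zelik, Patterson and Woods model; consult the poster for the authoritative list and descriptors.

```python
# Illustrative only: one way to record low/moderate/high ratings for the eight rigor
# attributes and flag where an analysis is weakest. Attribute names are placeholders
# drawn from common summaries of the model; consult the poster for the authoritative list.

RATING_SCALE = {"low": 0, "moderate": 1, "high": 2}

ATTRIBUTES = [
    "hypothesis exploration",
    "information search",
    "information validation",
    "stance analysis",
    "sensitivity analysis",
    "specialist collaboration",
    "information synthesis",
    "explanation critique",
]

def summarise(ratings: dict) -> str:
    """Report the weakest-rated attributes, treating rigor as an emergent judgement
    of sufficiency across attributes rather than a single averaged score."""
    missing = [a for a in ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError(f"No rating supplied for: {', '.join(missing)}")
    lowest = min(RATING_SCALE[ratings[a]] for a in ATTRIBUTES)
    label = {v: k for k, v in RATING_SCALE.items()}[lowest]
    weakest = [a for a in ATTRIBUTES if RATING_SCALE[ratings[a]] == lowest]
    return f"Weakest attributes ({label}): {', '.join(weakest)}"

# Example: an analysis that searched widely but never critiqued its own explanation.
example = {a: "moderate" for a in ATTRIBUTES}
example["information search"] = "high"
example["explanation critique"] = "low"
print(summarise(example))
```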

The references provided include:

Zelik, D. J., Patterson, E. S., & Woods, D. D. (2010). Measuring attributes of rigor in information analysis. In E. S. Patterson & J. E. Miller (Eds.), Macrocognition metrics and scenarios: Design and evaluation for real-world teams. Aldershot, UK: Ashgate. (ISBN: 978-0-7546-7578-5) Currently, the best source for a detailed discussion of our ongoing research on analytical rigor is this forthcoming book chapter which proposes rigor as a macrocognitive measure of expert performance.

Zelik, D., Patterson, E. S., & Woods, D. D. (2007, June). Understanding rigor in information analysis. Paper presented at the 8th International Conference on Naturalistic Decision Making, Pacific Grove, CA. (PDF) (VIDEO) This paper, presented at the Eighth International Naturalistic Decision Making Conference, provides a more formal overview of our current research.

Modeling Rigor in Information Analysis: A Metric for Rigor Poster (PDF) This poster provides an overview of the rigor model, identifying the aspects of the attributes that contribute to low, moderate, and high rigor analysis processes. It also overviews the rigor metric as applied to the LNG Scenario study.

Reducing the Risk of Shallow Information Analysis Google TechTalk. David D. Woods’ discussion of our analytical rigor research at a Google TechTalk provides a dynamic presentation of the material. Google TechTalks are designed to disseminate a wide spectrum of views on topics including Current Affairs, Science, Medicine, Engineering, Business, Humanities, Law, Entertainment, and the Arts. This talk was originally recorded on April 10, 2007.

THEORY OF CHANGE REVIEW – A report commissioned by Comic Relief

Cathy James, September 2011. 33 pages. Available as pdf.

The review approach
Comic Relief’s international grants team commissioned this review to capture staff and partners’ experiences in using theory of change; to identify others in development who are using theory of change and analyse their different approaches; and to draw together learning from everyone to inform what Comic Relief does next.

The review combined analysis of literature with 32 short interviews of people with experience and knowledge of theory of change. The literature included reports, guidelines, study notes, theory of change examples and other relevant documents. The review included interviews with members of Comic Relief’s international grants team; Comic Relief grant partners (both UK and southern organisations); freelance consultants; UK organisation development consultants and researchers; North American research organisations, consultancy groups and foundations; International Nongovernmental organisations (INGOs); and academics.
This report was commissioned by Comic Relief and written by Cathy James, an independent consultant. The views expressed in this report are those of the author and do not necessarily represent the views of Comic Relief.

Contents

A. INTRODUCTION

A1. Why do this review?

A2. How was the review approached?

A3. What does the review cover?

B. WHAT IS THEORY OF CHANGE?

B1. What are the origins of theory of change?

B2. Who is interested in theory of change?

B3. What do people mean by theory of change?

B4. What approaches are people taking to theory of change?

B5. How is theory of change different and how does it fit with other processes?

C. HOW IS COMIC RELIEF USING THEORY OF CHANGE?

C1. How has Comic Relief’s international grants team used theory of change?

C2. How have Comic Relief partners used theory of change?

D. WHAT DIFFERENCE HAS THEORY OF CHANGE MADE?

D1. What difference has theory of change made to Comic Relief partners?

D2. What do others say about the benefits of using theory of change?

E. WHAT HAS BEEN LEARNED ABOUT USING THEORY OF CHANGE?

E1. Who is theory of change most useful for?

E2. What kind of approach has been most helpful?

E3. What have been the main challenges?

F. CONCLUSIONS AND RECOMMENDATIONS

F1. Hot topics

F2. Conclusions

F3. Some suggestions for those using or advocating theory of change to think about

Peacebuilding with impact: Defining Theories of Change

Care International UK, January 2012. 12 pages. Available as pdf

Executive Summary: “Focusing on theories of change can improve the effectiveness of peacebuilding interventions. A review of 19 peacebuilding projects in three conflict-affected countries found that the process of articulating and reviewing theories of change adds rigour and transparency, clarifies project logic, highlights assumptions that need to be tested, and helps identify appropriate participants and partners. However, the approach has limitations, including the difficulty of gathering theory validating evidence.

While they are not a panacea, devoting greater attention to theories of change is a simple and relatively inexpensive means of increasing the quality of peacebuilding interventions. Donors and peacebuilding agencies should review their procedures to encourage and accommodate more widespread focus on theories of change, and ensure adequate resources are set aside to allow appropriate monitoring of these theories throughout the life of an intervention.

A focus on theories of change led to the following key findings:
• Clarifying project logic helps highlight tenuous assumptions;
• Clearly identifying the aims of activities and measures of success strengthens project design;
• Determining the appropriate actors to work with, and not just the easy-to-reach, enables better programme focus;
• More explicit links need to be made between local level activities and national peace processes for desired changes to occur;
• Conflict analysis is critical for determining the relevance of activities but is rarely done;
• Staff often require support in ensuring their theories of change are sufficiently explicit;
• Current project planning tools do not help practitioners articulate their theories of change;
• Gathering evidence to validate a theory of change is challenging, particularly in conditions of conflict and fragility;
• Critical review of theories of change needs to be undertaken in conjunction with other forms of evaluation to have maximum value;
• Theories of change can encourage an overly linear approach, when change in conflict contexts can be more organic or systemic.

Recommendations:
1 Donors should revise their logical frameworks guidance to encourage the use of theories of change, notably to include them within the ‘assumptions and risks’ column of existing logical frameworks or by adding an additional column.
2 Theories of change need to be as precise, nuanced and contextually specific as possible and be based on broad conflict analysis.
3 Practitioners need to articulate theories of change within a hierarchy of results and to review these periodically throughout the implementation of a project, particularly if conflict dynamics change.
4 Donors should encourage funded agencies to review their theories of change throughout the project cycle and make resources available for this.”

Assessing the immediate impact of impact studies – using an online survey

On February 23rd, the Stanford Social Innovation Review asked its readers to predict the results of two randomised controlled trials (RCTs) before they become publicly available. Both studies “tested whether consulting services can help enterprises grow. In other words, with nothing more than advice, can small firms or microenterprises increase their profits? Or are they already optimizing, given their resources?”

The website provides some background information on both interventions and the aims of each study. It also provides four different possible outcomes of the study, for participants to choose from. A modest prize is offered for participants who correctly predict the study findings.

The authors provide this description of their intentions: “With this experiment, we also are taking a baby step toward a more ambitious idea—to have a market in predicting the results of randomized trials. Such a market would serve two purposes. First, it would allow stakeholders to stake their claim (pun intended) on their predictions and be held to acclaim when they are right or to have their opinions challenged when they are wrong. Second, such a market could help donors, practitioners, and policymakers make decisions about poverty programs, by engaging the market’s collective wisdom. (Think www.intrade.com, but for results of social impact interventions.)”

The last sentence seems to imply that the market, correctly designed and managed, will deliver successful predictions. This has been found to be the case in some other fields, but it may or may not be the case with the results of RCTs.

There is another potentially valuable use of the same process. A “pre-dissemination of results” survey would establish a baseline measure of public understanding in the field under investigation [with the caveat that the profile of the particular participating “public” would need to be made clear]. For example, 30% of survey participants may have successfully predicted that Outcome 1 would be supported by the RCT findings. After the RCT findings were shared with participants, a follow-up survey of the same participants could then ask something like “Do you accept the validity of the findings?” or something more general like “Have these results been sufficient to change your mind on this issue?” The percentage of participants who made wrong predictions but accepted the study results would then be a reasonable measure of immediate impact. [Fortunately the SSIR survey includes a request for participant email addresses, which are necessary if they are to receive their prize.]
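To make the arithmetic concrete, here is a minimal sketch (in Python, with hypothetical field names and toy data, not anything drawn from the SSIR exercise) of how the baseline prediction accuracy and the proposed immediate-impact measure could be computed once the baseline and follow-up responses are linked by email address.

```python
# Minimal sketch of the "pre-dissemination baseline + follow-up" measure described above.
# Field names and the toy records are hypothetical, not taken from the SSIR survey.

from dataclasses import dataclass

@dataclass
class Participant:
    email: str               # used only to link baseline and follow-up responses
    predicted_outcome: int   # outcome option chosen before the results were released
    accepted_findings: bool  # follow-up answer to "Do you accept the validity of the findings?"

def immediate_impact(participants, actual_outcome):
    """Return (baseline prediction accuracy, share of wrong predictors who accepted the findings)."""
    correct = [p for p in participants if p.predicted_outcome == actual_outcome]
    wrong = [p for p in participants if p.predicted_outcome != actual_outcome]
    baseline_accuracy = len(correct) / len(participants)
    changed_minds = sum(p.accepted_findings for p in wrong) / len(wrong) if wrong else 0.0
    return baseline_accuracy, changed_minds

# Toy example: 3 of 10 participants predicted Outcome 1, which the RCT then supported.
sample = (
    [Participant(f"a{i}@example.org", 1, True) for i in range(3)]
    + [Participant(f"b{i}@example.org", 2, i % 2 == 0) for i in range(7)]
)
baseline, impact = immediate_impact(sample, actual_outcome=1)
print(f"Baseline accuracy: {baseline:.0%}; wrong predictors who accepted the findings: {impact:.0%}")
```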

Bearing this in mind, it would be good if the Review could provide its readers with some analysis of the overall distribution of the predictions made by participants, not just information on who the winner was.

PS: The same predict-disclose-compare process can also be used in face-to-face settings, such as workshops designed to disseminate the findings of impact assessments, and has undoubtedly been used by others before today [including by myself with Proshika staff in Bangladesh, many years ago].

[Thanks to @carolinefiennes for alerting me to this article]

PS 14 March 2012: See Posting Hypotheses for an Impact Study of Compartamos by Dean Karlan, where one of his objectives is to be able to compare the eventual results with prior opinions.

“Six Years of Lessons Learned in Monitoring and Evaluating Online Discussion Forums”

by Megan Avila, Kavitha Nallathambi, Catherine Richey and Lisa Mwaikambo, in Knowledge Management & E-Learning: An International Journal (KM&EL), Vol 3, No 4 (2011)

…which looks at how to evaluate virtual discussion forums held on the IBP (Implementing Best Practices in Reproductive Health) Knowledge Gateway – a platform for global health practitioners to exchange evidence-based information and knowledge to inform practice. Available as pdf. Found courtesy of Yaso Kunaratnam, IDS.

Abstract: “This paper presents the plan for evaluating virtual discussion forums held on the Implementing Best Practices in Reproductive Health (IBP) Knowledge Gateway, and its evolution over six years. Since 2005, the World Health Organization Department of Reproductive Health and Research (WHO/RHR), the Knowledge for Health (K4Health) Project based at Johns Hopkins Bloomberg School of Public Health’s Center for Communication Programs (JHU·CCP), and partners of the IBP Initiative have supported more than 50 virtual discussion forums on the IBP Knowledge Gateway. These discussions have provided global health practitioners with a platform to exchange evidence-based information and knowledge with colleagues working around the world. In this paper, the authors discuss challenges related to evaluating virtual discussions and present their evaluation plan for virtual discussions. The evaluation plan included the following three stages: (I) determining value of the discussion forums, (II) in-depth exploration of the data, and (III) reflection and next steps and was guided by the “Conceptual Framework for Monitoring and Evaluating Health Information Products and Services” which was published as part of the Guide to Monitoring and Evaluation of Health Information Products and Services. An analysis of data from 26 forums is presented and discussed in light of this framework. The paper also includes next steps for improving the evaluation of future virtual discussions.”

Evaluating the Evaluators: Some Lessons from a Recent World Bank Self-Evaluation

February 21, 2012 blog posting by Johannes Linn, at Brookings
Found via @WorldBank_IEG tweet

“The World Bank’s Independent Evaluation Group (IEG) recently published a self-evaluation of its activities. Besides representing current thinking among evaluation experts at the World Bank, it also more broadly reflects some of the strengths and gaps in the approaches that evaluators use to assess and learn from the performance of the international institutions with which they work…. Johannes Linn served as an external peer reviewer of the self-evaluation and provides a bird’s-eye view on the lessons learned.”

Key lessons as seen by Linn

  • An evaluation of evaluations should focus not only on process, but also on the substantive issues that the institution is grappling with.
  • An evaluation of the effectiveness of evaluations should include a professional assessment of the quality of evaluation products.
  • An evaluation of evaluations should assess:
    o How effectively impact evaluations are used;
    o How scaling up of successful interventions is treated;
    o How the experience of other comparable institutions is utilized;
    o Whether and how the internal policies, management practices and incentives of the institution are effectively assessed;
    o Whether and how the governance of the institution is evaluated; and
    o Whether and how internal coordination, cooperation and synergy among units within the organizations are assessed.

Read the complete posting, with arguments behind each of the above points, here

AEA Conference: Evaluation in Complex Ecologies

Relationships, Responsibilities, Relevance
26th Annual Conference of the American Evaluation Association
Minneapolis, Minnesota, USA
Conference: October 24-27, 2012
Workshops: October 22, 23, 24, 28

“Evaluation takes place in complex global and local ecologies where we evaluators play important roles in building better organizations and communities and in creating opportunities for a better world. Ecology is about how systems work, engage, intersect, transform, and interrelate. Complex ecologies are comprised of relationships, responsibilities, and relevance within our study of programs, policies, projects, and other areas in which we carry out evaluations.

Relationships. Concern for relationships obliges evaluators to consider questions such as: what key interactions, variables, or stakeholders do we need to attend to (or not) in an evaluation? Evaluations do not exist in a vacuum disconnected from issues, tensions, and historic and contextualized realities, systems, and power dynamics. Evaluators who are aware of the complex ecologies in which we work attend to relationships to identify new questions and to pursue new answers. Other questions we may pursue include:

  • Whose interests and what decisions and relationships are driving the evaluation context?
  • How can evaluators attend to important interactions amidst competing interests and values through innovative methodologies, procedures, and processes?

Responsibilities. Attention to responsibilities requires evaluators to consider questions such as: what responsibilities, inclusive of and beyond the technical, do we evaluators have in carrying out our evaluations? Evaluators do not ignore the diversity of general and public interests and values in evaluation. Evaluations in complex ecologies make aware ethical and professional obligations and understandings between parties who seek to frame questions and insights that challenge them. Other questions we may pursue include:

  • How can evaluators ensure their work is responsive, responsible, ethical, equitable, and/or transparent for stakeholders and key users of evaluations?
  • In what ways might evaluation design, implementation, and utilization be responsible to issues pertinent to our general and social welfare?

Relevance. A focus on relevance leads to evaluations that consider questions such as: what relevance do our evaluations have in complex social, environmental, fiscal, institutional, and/or programmatic ecologies? Evaluators do not have the luxury of ignoring use, meaning, or sustainability; instead all evaluations require continual review of purposes, evaluands, outcomes, and other matters relevant to products, projects, programs, and policies. Other questions we may pursue include:

  • How can evaluators ensure that their decisions, findings, and insights are meaningful to diverse communities, contexts, and cultures?
  • What strategies exist for evaluators, especially considering our transdisciplinary backgrounds, to convey relevant evaluation processes, practices, and procedures?

Consider this an invitation to submit a proposal for Evaluation 2012 and join us in Minneapolis as we consider evaluation in complex ecologies where relationships, responsibilities, and/or relevance are key issues to address.”

BEHIND THE SCENES: MANAGING AND CONDUCTING LARGE SCALE IMPACT EVALUATIONS IN COLOMBIA

by Bertha Briceño, Water and Sanitation Program, World Bank; Laura Cuesta, University of Wisconsin-Madison; and Orazio Attanasio, University College London
December 2011, 3ie Working Paper 14, available as pdf

“Abstract: As more resources are being allocated to impact evaluation of development programs, the need to map out the utilization and influence of evaluations has been increasingly highlighted. This paper aims at filling this gap by describing and discussing experiences from four large impact evaluations in Colombia on a case-study basis. On the basis of (1) learning from our prior experience in both managing and conducting impact evaluations, (2) desk review of available documentation from the Monitoring & Evaluation system, and (3) structured interviews with government actors, evaluators and program managers, we benchmark each evaluation against eleven standards of quality. From this benchmarking exercise, we derive five key lessons for conducting high quality and influential impact evaluations: (1) investing in the preparation of good terms of reference and identification of evaluation questions; (2) choosing the best methodological approach to address the evaluation questions; (3) adopting mechanisms to ensure evaluation quality; (4) laying out the incentives for involved parties in order to foster evaluation buy-in; and (5) carrying out a plan for quality dissemination.”

Dealing with complexity through Planning, Monitoring & Evaluation

Mid-term results of a collective action research process.
Authors: Jan Van Ongevalle, Anneke Maarse, Cristien Temmink, Eugenia Boutylkova and Huib Huyse. Published January 2012
Praxis Paper 26, available as pdf

(Text from INTRAC website) “Written by staff from PSO and HIVA, this paper shares the first results of an ongoing collaborative action research in which ten development organisations explored different Planning, Monitoring and Evaluation (PME) approaches with the aim of dealing more effectively with complex processes of social change.

This paper may be of interest as:
1) It illustrates a practical example of action research whereby the organisations themselves are becoming the researchers.
2) Unpacking the main characteristics of complexity, the paper uses an analytic framework of four questions to assess the effectiveness of a PME approach in dealing with complex social change.
3) An overview is given of how various organisations implemented different PME approaches (e.g. outcome mapping, most significant change, client satisfaction instruments) in order to deal with complex change.
4) The paper outlines the meaning and the importance of a balanced PME approach, including its agenda, its underlying principles and values, its methods and tools and the way it is implemented in a particular context.”
