LINKING MONITORING AND EVALUATION TO IMPACT EVALUATION

Burt Perrin, Impact Evaluation Notes No. 2, April 2012. Rockefeller Foundation and InterAction. Available as pdf.

Summary

This is the second guidance note in a four-part series on impact evaluation developed by InterAction with financial support from the Rockefeller Foundation. This note, Linking Monitoring and Evaluation to Impact Evaluation, illustrates the relationship between routine M&E and impact evaluation – in particular, how both monitoring and evaluation activities can support meaningful and valid impact evaluation. M&E has a critical role to play in impact evaluation, for example by: identifying when and under what circumstances it would be possible and appropriate to undertake an impact evaluation; contributing essential data for conducting an impact evaluation, such as baseline data of various forms and information about the nature of the intervention; and contributing the information needed to interpret and apply findings from impact evaluation.

Contents
Introduction
1. How can monitoring and other forms of evaluation support impact evaluation?
1.1. Main characteristics of monitoring, evaluation, and impact evaluation
1.2. How M&E can contribute to impact evaluation
2. How to build impact evaluation into M&E thinking and practices
2.1. Articulate the theory of change
2.2. Identify priorities for undertaking impact evaluation
2.3. Identify information/data needs
2.4. Start with what you have
2.5. Design and implement the impact evaluation, analyze and interpret the findings
2.6. Use the findings
2.7. Review, reflect, and update
3. Engaging all parts of the organization
3.1. M&E: A core management function requiring senior management leadership and support
3.2. An active role for program staff is required
Summary
References and Other Useful Resources
Annex 1 – Contribution analysis

 

Analytic Rigour in Information Analysis – Lessons from the intelligence community?

This post was prompted by a blog posting by Irene Guijt about a presentation by Michael Patton at a workshop in Wageningen last week (which I also attended). The quotes below come from a webpage about Zelik, Patterson and Woods’ Rigour Attribute Model, which outlines eight attributes of a rigorous process of information analysis, along with guidance on recognising the extent to which each criterion has been met.

The model is summarised in this Analytical Rigor Poster (PDF)

Quotes from the website

“The proliferation of data accessibility has exacerbated the risk of shallowness in information analysis, making it increasingly difficult to tell when analysis is sufficient for making decisions or changing plans, even as it becomes increasingly easy to find seemingly relevant data. In addressing the risk of shallow analysis, the assessment of rigor emerges as an approach for coping with this fundamental uncertainty, motivating the need to better define the concept of analytical rigor.”

“Across information analysis domains, it is often difficult to recognize when analysis is inadequate for a given context. A better understanding of rigor is an analytic broadening check to be leveraged against this uncertainty. The purpose of this research is to refine the understanding of rigor, exploring the concept within the domain of intelligence analysis. Nine professional intelligence analysts participated in a study of how analytic rigor is judged. The results suggest a revised definition of rigor, reframing it as an emergent multi-attribute measure of sufficiency rather than as a measure of process deviation. Based on this insight, a model for assessing rigor was developed, identifying eight attributes of rigorous analysis. Finally, an alternative model of briefing interactions is proposed that integrates this framing of rigor into an applied context. This research, although specific in focus to intel analysis, shows the potential to generalize across forms of information analysis.”

The references provided include:

Zelik, D. J., Patterson, E. S., & Woods, D. D. (2010). Measuring attributes of rigor in information analysis. In E. S. Patterson & J. E. Miller (Eds.), Macrocognition metrics and scenarios: Design and evaluation for real-world teams. Aldershot, UK: Ashgate. (ISBN: 978-0-7546-7578-5) Currently, the best source for a detailed discussion of our ongoing research on analytical rigor is this forthcoming book chapter which proposes rigor as a macrocognitive measure of expert performance.

Zelik, D., Patterson, E. S., & Woods, D. D. (2007, June). Understanding rigor in information analysis. Paper presented at the 8th International Conference on Naturalistic Decision Making, Pacific Grove, CA. (PDF) (VIDEO) This paper, presented at the Eighth International Naturalistic Decision Making Conference, provides a more formal overview of our current research.

Modeling Rigor in Information Analysis: A Metric for Rigor Poster (PDF) This poster provides an overview of the rigor model, identifying the aspects of the attributes that contribute to low, moderate, and high rigor analysis processes. It also overviews the rigor metric as applied to the LNG Scenario study.

Reducing the Risk of Shallow Information Analysis Google TechTalk. David D. Woods’ discussion of our analytical rigor research at a Google TechTalk provides a dynamic presentation of the material. Google TechTalks are designed to disseminate a wide spectrum of views on topics including Current Affairs, Science, Medicine, Engineering, Business, Humanities, Law, Entertainment, and the Arts. This talk was originally recorded on April 10, 2007.
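
The Rigour Attribute Model is a conceptual rubric rather than software, but a minimal sketch may help show how an assessment of this kind could be recorded: each of the eight attributes is rated low, moderate or high, as in the poster. The attribute names below are placeholders rather than the model’s own labels, and treating overall sufficiency as limited by the weakest attributes is only one possible reading of “an emergent multi-attribute measure of sufficiency”, not the authors’ published scoring rule.

```python
# A minimal sketch of recording a rigor assessment in the spirit of the
# Zelik, Patterson and Woods model: eight attributes, each rated low,
# moderate or high. Attribute names are placeholders -- see the model's
# poster for the actual attributes and rating guidance.

from enum import IntEnum


class Rating(IntEnum):
    LOW = 0
    MODERATE = 1
    HIGH = 2


# Placeholder attribute labels (the model defines eight specific attributes).
ATTRIBUTES = [f"attribute_{i}" for i in range(1, 9)]


def rigor_profile(ratings: dict[str, Rating]) -> str:
    """Summarise an assessment. Here sufficiency is flagged by the weakest
    attributes rather than an average -- an assumption, not the model's rule."""
    missing = [a for a in ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError(f"Unrated attributes: {missing}")
    weakest = min(ratings.values())
    weak_attrs = [a for a, r in ratings.items() if r == weakest]
    return f"Sufficiency limited by {weak_attrs} (rated {weakest.name})"


if __name__ == "__main__":
    example = {a: Rating.MODERATE for a in ATTRIBUTES}
    example["attribute_3"] = Rating.LOW
    print(rigor_profile(example))
```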

THEORY OF CHANGE REVIEW – A report commissioned by Comic Relief

Cathy James, September 2011. 33 pages. Available as pdf.

The review approach: Comic Relief’s international grants team commissioned this review to capture staff and partners’ experiences in using theory of change; to identify others in development who are using theory of change and analyse their different approaches; and to draw together learning from everyone to inform what Comic Relief does next.

The review combined analysis of literature with 32 short interviews with people with experience and knowledge of theory of change. The literature included reports, guidelines, study notes, theory of change examples and other relevant documents. The review included interviews with members of Comic Relief’s international grants team; Comic Relief grant partners (both UK and southern organisations); freelance consultants; UK organisation development consultants and researchers; North American research organisations, consultancy groups and foundations; international non-governmental organisations (INGOs); and academics.
This report was commissioned by Comic Relief and written by Cathy James, an independent consultant. The views expressed in this report are those of the author and do not necessarily represent the views of Comic Relief.

Contents

A. INTRODUCTION

A1. Why do this review?

A2. How was the review approached?

A3. What does the review cover?

B. WHAT IS THEORY OF CHANGE?

B1. What are the origins of theory of change?

B2. Who is interested in theory of change?

B3. What do people mean by theory of change?

B4. What approaches are people taking to theory of change?

B5. How is theory of change different and how does it fit with other processes?

C. HOW IS COMIC RELIEF USING THEORY OF CHANGE?

C1. How has Comic Relief’s international grants team used theory of change?

C2. How have Comic Relief partners used theory of change?

D. WHAT DIFFERENCE HAS THEORY OF CHANGE MADE?

D1. What difference has theory of change made to Comic Relief partners?

D2. What do others say about the benefits of using theory of change?

E. WHAT HAS BEEN LEARNED ABOUT USING THEORY OF CHANGE?

E1. Who is theory of change most useful for?

E2. What kind of approach has been most helpful?

E3. What have been the main challenges?

F. CONCLUSIONS AND RECOMMENDATIONS

F1. Hot topics

F2. Conclusions

F3. Some suggestions for those using or advocating theory of change to think about

 

 

Peacebuilding with impact: Defining Theories of Change

Care International UK, January 2012. 12 pages. Available as pdf

Executive Summary: “Focusing on theories of change can improve the effectiveness of peacebuilding interventions. A review of 19 peacebuilding projects in three conflict-affected countries found that the process of articulating and reviewing theories of change adds rigour and transparency, clarifies project logic, highlights assumptions that need to be tested, and helps identify appropriate participants and partners. However, the approach has limitations, including the difficulty of gathering theory-validating evidence.

While they are not a panacea, devoting greater attention to theories of change is a simple and relatively inexpensive means of increasing the quality of peacebuilding interventions. Donors and peacebuilding agencies should review their procedures to encourage and accommodate more widespread focus on theories of change, and ensure adequate resources are set aside to allow appropriate monitoring of these theories throughout the life of an intervention.

A focus on theories of change led to the following key findings:
• Clarifying project logic helps highlight tenuous assumptions;
• Clearly identifying the aims of activities and measures of success strengthens project design;
• Determining the appropriate actors to work with, and not just the easy-to-reach, enables better programme focus;
• More explicit links need to be made between local level activities and national peace processes for desired changes to occur;
• Conflict analysis is critical for determining the relevance of activities but is rarely done;
• Staff often require support in ensuring their theories of change are sufficiently explicit;
• Current project planning tools do not help practitioners articulate their theories of change;
• Gathering evidence to validate a theory of change is challenging, particularly in conditions of conflict and fragility;
• Critical review of theories of change needs to be undertaken in conjunction with other forms of evaluation to have maximum value;
• Theories of change can encourage an overly linear approach, when change in conflict contexts can be more organic or systemic.

Recommendations:
1 Donors should revise their logical frameworks guidance to encourage the use of theories of change, notably to include them within the ‘assumptions and risks’ column of existing logical frameworks or by adding an additional column.
2 Theories of change need to be as precise, nuanced and contextually specific as possible and be based on broad conflict analysis.
3 Practitioners need to articulate theories of change within a hierarchy of results and to review these periodically throughout the implementation of a project, particularly if conflict dynamics change.
4 Donors should encourage funded agencies to review their theories of change throughout the project cycle and make resources available for this.”

Assessing the immediate impact of impact studies – using an online survey

On February 23rd, the Stanford Social Innovation Review asked its readers to predict the results of two randomised controlled trials (RCTs) before the results become publicly available. Both studies “tested whether consulting services can help enterprises grow. In other words, with nothing more than advice, can small firms or microenterprises increase their profits? Or are they already optimizing, given their resources?”

The website provides some background information on both interventions and the aims of each study. It also provides four different possible outcomes of the study, for participants to choose from. A modest prize is offered for participants who correctly predict the study findings.

The authors provide this description of their intentions: “With this experiment, we also are taking a baby step toward a more ambitious idea—to have a market in predicting the results of randomized trials. Such a market would serve two purposes. First, it would allow stakeholders to stake their claim (pun intended) on their predictions and be held to acclaim when they are right or to have their opinions challenged when they are wrong. Second, such a market could help donors, practitioners, and policymakers make decisions about poverty programs, by engaging the market’s collective wisdom. (Think www.intrade.com, but for results of social impact interventions.)”

The last sentence seems to imply that the market, correctly designed and managed, will deliver successful predictions. This has been found to be the case in some other fields, but it may or may not be the case with the results of RCTs.

There is another potentially valuable use of the same process. A “pre-dissemination of results” survey would establish a baseline measure of public understanding in the field under investigation [with the caveat that the profile of the particular participating “public” would need to be made clear]. For example, 30% of survey participants may have successfully predicted that Outcome 1 would be supported by the RCT findings. After the RCT findings were shared with participants, a follow-up survey of the same participants could then ask something like “Do you accept the validity of the findings?” or something more general like “Have these results been sufficient to change your mind on this issue?” The percentage of participants who made wrong predictions but accepted the study results would then be a reasonable measure of immediate impact. [Fortunately the SSIR survey includes a request for participant email addresses, which are necessary if they are to receive their prize.]
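
To make the arithmetic concrete, here is a minimal sketch, in Python, of how that measure could be computed from survey responses. The data, the Response structure and the immediate_impact function are hypothetical illustrations of the idea described above, not part of the SSIR exercise.

```python
# A minimal sketch of the "immediate impact" measure described above, using
# entirely hypothetical survey responses. Each record pairs a participant's
# pre-dissemination prediction with their post-dissemination acceptance of
# the RCT findings.

from dataclasses import dataclass


@dataclass
class Response:
    predicted_correctly: bool  # did the baseline prediction match the RCT result?
    accepted_findings: bool    # after seeing the results, does the participant accept them?


def immediate_impact(responses: list[Response]) -> float:
    """Share of all participants who predicted wrongly but accepted the findings,
    i.e. whose view the study can plausibly be said to have changed."""
    if not responses:
        raise ValueError("No survey responses supplied")
    changed = sum(1 for r in responses
                  if not r.predicted_correctly and r.accepted_findings)
    return changed / len(responses)


if __name__ == "__main__":
    # Hypothetical data: 30 of 100 participants predicted correctly; of the
    # remaining 70, 55 accept the published results and 15 do not.
    sample = ([Response(True, True)] * 30
              + [Response(False, True)] * 55
              + [Response(False, False)] * 15)
    print(f"Immediate impact: {immediate_impact(sample):.0%}")  # prints "Immediate impact: 55%"
```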

Bearing this in mind, it would be good if the Review could provide its readers with some analysis of the overall distribution of the predictions made by participants, not just information on who the winner was.

PS: The same predict-disclose-compare process can also be used in face-to-face settings such as workshops designed to disseminate the findings of impact assessments, and has undoubtedly been used by others before today [including by myself with Proshika staff in Bangladesh, many years ago].

[Thanks to @carolinefiennes for alerting me to this article]

PS 14 March 2012: See Posting Hypotheses for an Impact Study of Compartamos by Dean Karlan, where one of his objectives is to be able to compare the results found with prior opinions.

Evaluating the Evaluators: Some Lessons from a Recent World Bank Self-Evaluation

February 21, 2012 blog posting by Johannes Linn, at Brookings
Found via @WorldBank_IEG tweet

“The World Bank’s Independent Evaluation Group (IEG) recently published a self-evaluation of its activities. Besides representing current thinking among evaluation experts at the World Bank, it also more broadly reflects some of the strengths and gaps in the approaches that evaluators use to assess and learn from the performance of the international institutions with which they work…. Johannes Linn served as an external peer reviewer of the self-evaluation and provides a bird’s-eye view on the lessons learned.”

Key lessons as seen by Linn

  • An evaluation of evaluations should focus not only on process, but also on the substantive issues that the institution is grappling with.
  • An evaluation of the effectiveness of evaluations should include a professional assessment of the quality of evaluation products.
  • An evaluation of evaluations should assess:
    o How effectively impact evaluations are used;
    o How scaling up of successful interventions is treated;
    o How the experience of other comparable institutions is utilized;
    o Whether and how the internal policies, management practices and incentives of the institution are effectively assessed;
    o Whether and how the governance of the institution is evaluated; and
    o Whether and how internal coordination, cooperation and synergy among units within the organizations are assessed.

Read the complete posting, with arguments behind each of the above points, here

BEHIND THE SCENES: MANAGING AND CONDUCTING LARGE SCALE IMPACT EVALUATIONS IN COLOMBIA

by Bertha Briceño, Water and Sanitation Program, World Bank; Laura Cuesta, University of Wisconsin-Madison; and Orazio Attanasio, University College London
December 2011, 3ie Working Paper 14, available as pdf

“Abstract: As more resources are being allocated to impact evaluation of development programs, the need to map out the utilization and influence of evaluations has been increasingly highlighted. This paper aims at filling this gap by describing and discussing experiences from four large impact evaluations in Colombia on a case-study basis. On the basis of (1) learning from our prior experience in both managing and conducting impact evaluations, (2) desk review of available documentation from the Monitoring & Evaluation system, and (3) structured interviews with government actors, evaluators and program managers, we benchmark each evaluation against eleven standards of quality. From this benchmarking exercise, we derive five key lessons for conducting high quality and influential impact evaluations: (1) investing in the preparation of good terms of reference and identification of evaluation questions; (2) choosing the best methodological approach to address the evaluation questions; (3) adopting mechanisms to ensure evaluation quality; (4) laying out the incentives for involved parties in order to foster evaluation buy-in; and (5) carrying out a plan for quality dissemination.”

Dealing with complexity through Planning, Monitoring & Evaluation

Mid-term results of a collective action research process.
Authors: Jan Van Ongevalle, Anneke Maarse, Cristien Temmink, Eugenia Boutylkova and Huib Huyse. Published January 2012
Praxis Paper 26, available as pdf

(Text from INTRAC website) “Written by staff from PSO and HIVA, this paper shares the first results of an ongoing collaborative action research in which ten development organisations explored different Planning, Monitoring and Evaluation (PME) approaches with the aim of dealing more effectively with complex processes of social change.

This paper may be of interest as:
1) It illustrates a practical example of action research whereby the organisations themselves are becoming the researchers.
2) Unpacking the main characteristics of complexity, the paper uses an analytic framework of four questions to assess the effectiveness of a PME approach in dealing with complex social change.
3) An overview is given of how various organisations implemented different PME approaches (e.g. outcome mapping, most significant change, client satisfaction instruments) in order to deal with complex change.
4) The paper outlines the meaning and the importance of a balanced PME approach, including its agenda, its underlying principles and values, its methods and tools and the way it is implemented in a particular context.”

World Bank – Raising the Bar on Transparency, Accountability and Openness

Blog posting by Hannah George on Thu, 02/16/2012 – 18:01. Found via @TimShorten

“The World Bank has taken landmark steps to make information accessible to the public and globally promote transparency and accountability, according to the first annual report on the World Bank’s Access to Information (AI) Policy.” [20/02/2012 – the link is not working – here is a link to a related doc, World Bank Policy on Access to Information Progress Report: January through March 2011]

“The World Bank’s Access to Information Policy continues to set the standard for other institutions to strive for,” said Chad Dobson, executive director of the Bank Information Center. Publish What You Fund recently rated the Bank “best performer” in terms of aid transparency out of 58 donors for the second year in a row. Furthermore, the Center for Global Development and Brookings ranked the International Development Association (the World Bank’s Fund for the Poorest) as a top donor in transparency and learning in its 2011 Quality of Official Development Assistance Assessment (QuODA).

Making systematic reviews work for international development research

ODI Discussion Paper, January 2012. 4 pages.

Authors: Jessica Hagen-Zanker, Maren Duvendack, Richard Mallett and Rachel Slater with Samuel Carpenter and Mathieu Tromme

This briefing paper reflects upon the use of systematic reviews in international development research. It attempts to identify where a systematic review approach adds value to development research and where it becomes problematic.

The question of ‘what works’ in international development policy and practice is becoming ever more important against a backdrop of accountability and austerity. In order to answer this question, there has been a surge of interest in ‘evidence-informed policy making’.

Systematic reviews are a rigorous and transparent form of literature review, and are increasingly considered a key tool for evidence-informed policy making. As a result, a number of donors – most notably the UK Department for International Development (DFID) and AusAID – are focusing attention and resources on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.

This briefing paper reflects upon the use of systematic reviews in international development research and argues:

  • Using systematic review principles can help researchers improve the rigour and breadth of literature reviews
  • Conducting a full systematic review is a resource intensive process and involves a number of practical challenges
  • Systematic reviews should be viewed as a means to finding a robust and sensible answer to a focused research question

3ie have subsequently provided this Commentary

There has also been a discussion on ODI Blog Posts, 27 January 2012

See also the DFID Nov 2011 background page on “Systematic Reviews in International Development: An Initiative to Strengthen Evidence-Informed Policy Making”.

 
