Theory of Change: A thinking and action approach to navigate the complexity of social change processes

Iñigo Retolaza Eguren, Hivos/DD/UNDP, May 2011. Available as pdf.

“This guide has been jointly published by Hivos and UNDP, and is aimed at the rich constellation of actors linked to processes of social development and change: bilateral donors, community leaders, political and social leaders, NGO representatives, community-based organizations, social movements, public decision makers, and other actors related to social change processes.

The Theory of Change approach applied to social change processes offers a thinking-action alternative to other, more rigid planning approaches and logics. Living in complex and conflict-ridden times, we need more flexible instruments that allow us to plan and monitor our actions in uncertain, emergent, and complex contexts. This thinking-action approach is also applied to institutional coaching processes and to the design of social development and change programs.

In general terms, the Guide synthesizes the core methodological contents and steps developed in a Theory of Change design workshop. The first part of the Guide describes some theoretical elements to consider when designing a Theory of Change applied to social change processes. The second part describes the basic methodological steps involved in any Theory of Change design. To reinforce this practical part, a workshop route is included, illustrating the dynamics of a workshop of this kind.

The approach and contents of the Guide emerge from the author’s synthesis of his learning as a facilitator of Theory of Change design processes involving social change actors from several Latin American countries. His two main bodies of experience and knowledge are: (i) the learning space offered by Hivos, where he facilitated several Theory of Change workshops with Hivos partner organisations in South and Central America, and (ii) his professional relationship with the Democratic Dialogue Regional Project of UNDP, taking an action-research approach to dialogic processes applied to various areas of the socio-political field: national dialogues on the making and adjustment of public policies and legislative proposals, facilitation of national and regional dialogue spaces on several issues, and capacity building on dialogue for social and political leaders from several countries in the region.”


Capturing Change in Women’s Realities: A Critical Overview of Current M&E Frameworks and Approaches

by Srilatha Batliwala and Alexandra Pittman, Association for Women’s Rights in Development (AWID), Dec 2010. Available as pdf. Found courtesy of @guijti

“The two-part document begins with a broad overview of common challenges with monitoring and evaluation (M&E) and identifies feminist practices for engaging in M&E to strengthen organizational learning and more readily capture the complex changes that women’s empowerment and gender equality work seeks. The document concludes with an overview and in-depth analysis of some of the most widely used and recognized M&E frameworks, approaches, and tools.”

[RD Comment: A bit of text that interested me: “Some women’s rights activists and their allies consequently propose that we need to develop a ‘theory of constraints’ to accompany our ‘theory of change’ in any given context, in order to create tools for tracking the way that power structures are responding to the challenges posed by women’s rights interventions.” … [and before then, also on page 12] “… most tools do not allow for tracking negative change, reversals, backlash, unexpected change, and other processes that push back or shift the direction of a positive change trajectory. How do we create tools that can capture this ‘two steps forward, one step back’ phenomenon that many activists and organizations acknowledge as a reality, and in which large amounts of learning lie hidden? In women’s rights work, this is vital because as soon as advances seriously challenge patriarchal or other social power structures, there are often significant reactions and setbacks. These are not, ironically, always indicative of failure or lack of effectiveness, but exactly the opposite: this is evidence that the process was working and was creating resistance from the status quo as a result.”

This useful proposal could apply to other contexts where change is expected to be difficult.]

GTZ/BMZ Evaluation and Systems Conference papers

(via Bob Williams on EvalSys)

Systemic Approaches in Evaluation

Documentation of the Conference on 25-26 January 2011

“Development programs promote complex reforms and change processes. Such processes are often characterized by uncertainty and unpredictability, posing a major challenge to the evaluation of development projects. In order to understand which projects work, why, and under which conditions, evaluations also need to embrace the interaction of various influencing factors and the multi-dimensionality of societal change. However, present evaluation approaches often presume the predictability and linearity of event chains.

In order to fill this gap, systemic approaches to the evaluation of development programs are increasingly being discussed. A key concept is interdependency rather than linear cause-and-effect relations. Systemic approaches in evaluation focus on interrelations and the interaction between various stakeholders with different motivations, interests, perceptions and perspectives.

On January 25 and 26, 2011, the Evaluation and Audit Division of the Federal Ministry of Economic Cooperation and Development (BMZ) and the Evaluation Unit of GIZ offered a forum to discuss systemic approaches to evaluation at an international conference.
More than 200 participants from academia, consulting firms and NGOs discussed, amongst others, the following questions:

  • What are systemic approaches in evaluation?
  • For which kind of evaluations are systemic approaches (not) useful? Can they be used to enhance accountability, for example?
  • Are rigorous impact studies and systemic evaluations antipodes or can we combine elements of both approaches?
  • Which concrete methods and tools can be used in systemic evaluation?

On this website you will find the documentation of all sessions, speeches and discussion rounds. The main conclusions of the conference were summarized in the final panel discussion.”


UK Independent Commission for Aid Impact – Work Plan

Independent Commission for Aid Impact – Work Plan, and the associated Press Release (12 May 2011)

1. This document introduces the Independent Commission for Aid Impact’s first work plan, setting out the reports we envisage initiating over the next three years, from May 2011 to May 2014.

2. Our mandate permits us to examine all UK Government programmes funded by Official Development Assistance expenditure. In 2009, this represented £7.4bn, which was spent through bilateral, joint and multilateral processes by the Department for International Development (DFID) and at least eight other branches of Government. Under the Government’s current plans and guided by its recent reviews of bilateral, multilateral and humanitarian work, this expenditure is due to rise significantly and will change in focus. This range of projects and programmes gives us significant discretion in choosing where to focus the attention of our reports.” ..continues..

See also: FRAMEWORK AGREEMENT BETWEEN THE DEPARTMENT FOR INTERNATIONAL DEVELOPMENT (DFID) AND THE INDEPENDENT COMMISSION FOR AID IMPACT (ICAI). This document sets out the broad framework within which the ICAI will operate as a permanent body (12 May 2011 – 11 May 2015). The Agreement is signed by the Chief Commissioner of the ICAI and by DFID. This document, and any future revisions, will be made public on the ICAI website.

[RD Comments: The workplan has three strands of work:

  • Evaluations: are likely to focus on the sustainable development impact achieved by programmes against initial or updated objectives
  • Value for money reviews: will consider whether objectives have been achieved with the optimal use of resources
  • Investigations: could range from general fact-finding in response to external requests, to assessments of compliance with legal and policy responsibilities and examinations of alleged corruption cases.

Regarding the first strand, the OECD DAC definition of impact is: “Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.” One practical way of defining long term would be as any change observed after the completion of a project (typically 3 years). This would seem to be an appropriate focus for the ICAI because in the past DFID has undertaken very few ex-post evaluations. There is a gap here that needs to be addressed, as there is with quite a few other bilateral agencies.

A further justification lies in the useful connection with value-for-money reviews. Some organisations, like the Global Environment Facility, define impact as “A fundamental and durable change in the condition of people and their environment brought about by the project” – in other words, a sustained change. The longer a change is sustained (all other things being equal), the more value for money would seem to have been realised. Assessing impact in the short term (i.e. during the project implementation period) risks understating impact and the associated value for money.

The question that then arises in my mind is to what extent the ICAI programme of evaluations will focus on projects that have been completed, versus those which are still being implemented. I will be asking the ICAI if they could provide an answer, for example in the form of the percentage of completed versus incomplete projects to be examined in each of the 8 evaluations to be undertaken in year 1.]

PS 24 May 2011: See also Howard White’s related question about the ICAI’s use of ex-post and ex-ante evaluations. There he seems to be arguing against ex-post evaluations: “There is a question as to whether the commission restricts itself to ex-post evaluations, done once the intervention is being implemented or completed. Or can it engage in ex-ante designs before the intervention has started? Designing the evaluation prior to the launch of a programme, and collecting baseline data, generally delivers more robust findings.”

This seems like the method tail wagging the programme development dog. Or looking for a lost wallet under a lamppost. The potential for rigour should not determine what gets evaluated. What gets evaluated should be decided by more strategic considerations, like the fact that we know very little about the long-term effects of most development projects (where long term = after the project intervention ceases).

Writing Terms of Reference for an Evaluation: A How-To Guide

Independent Evaluation Group, World Bank 2011. Available as pdf.

“The terms of reference (ToR) document defines all aspects of how a consultant or a team will conduct an evaluation. It defines the objectives and the scope of the evaluation, outlines the responsibilities of the consultant or team, and provides a clear description of the resources available to conduct the study. Developing an accurate and well-specified ToR is a critical step in managing a high-quality evaluation. The evaluation ToR document serves as the basis for a contractual arrangement with one or more evaluators and sets the parameters against which the success of the assignment can be measured.

The specific content and format for a ToR will vary to some degree based on organizational requirements, local practices, and the type of assignment. However, a few basic principles and guidelines inform the development of any evaluation ToR. This publication provides user-friendly guidance for writing ToRs by covering the following areas:

1. Definition and function. What is a ToR? When is one needed? What are its objectives? This section also highlights how an evaluation ToR is different from other ToRs.
2. Content. What should be included in a ToR? What role(s) will each of the sections of the document serve in supporting and facilitating the completion of a high-quality evaluation?
3. Preparation. What needs to be in place for a practitioner or team to develop the ToR for an evaluation or review?
4. Process. What steps should be taken to develop an effective ToR? Who should be involved for each of these steps?

A quality checklist and some Internet resources are included in this publication to foster good practice in writing ToRs for evaluations and reviews of projects and programs. The publication also provides references and resources for further information.”

[RD Comment: See also: Guidance on Terms of Reference for an Evaluation: A List, listing ToRs guidance documents produced by 9 different organisations]

Randomised controlled trials, mixed methods and policy influence in international development – Symposium

Thinking out of the black box. A 3ie-LIDC Symposium
Date: 17:30 to 19:30 Monday, May 23rd 2011
Venue: John Snow Lecture Theatre, London School of Hygiene and Tropical Medicine (LSHTM) Keppel Street, London, WC1E 7HT

Professor Nancy Cartwright, Professor of Philosophy, London School of Economics
Professor Howard White, Executive Director, 3ie
Chair: Professor Jeff Waage, Director, LIDC

Randomised Controlled Trials (RCTs) have moved to the forefront of the development agenda to assess development results and the impact of development programs. In the words of Esther Duflo – one of the strongest advocates of RCTs – RCTs allow us to know which development efforts help and which cause harm.

But RCTs are not without their critics, with questions raised about their usefulness, both in providing substantive lessons about the program being evaluated and in whether the findings can be generalized to other settings.

This symposium brings perspectives from the philosophy of science, and from a mixed-methods approach to impact analysis, to this debate.

ALL WELCOME
For more information contact: 3ieuk@3ieimpact.org

PS1: Nancy Cartwright wrote “Are RCTs the Gold Standard?” in 2007

PS2: The presentation by Howard White is now available here – http://tinyurl.com/3dwlqwn – but without audio

AusAID’s Information Publication Scheme: Draft Plan & Consultation

The 12th April 2011 Draft plan is now available in pdf and MS Word

Introduction

“AusAID is the Australian Government’s Agency for International Development, an executive agency within the Department of Foreign Affairs and Trade portfolio. Its primary role is the implementation and oversight of the Australian Government aid program. The aim of the program is to assist developing countries to reduce poverty and achieve sustainable development, in line with Australia’s national interest.

Reforms to the Freedom of Information Act 1982 (FOI Act) have established the Information Publication Scheme (IPS). The purpose of the IPS is to give the Australian community access to information held by the Australian Government, and to enhance and promote Australia’s representative democracy by increasing public participation in government processes and increasing scrutiny, discussion, comment and review of government activities and decisions.

AusAID is committed to greater transparency through the implementation of the Information Publication Scheme (IPS) and other initiatives that will be introduced. As Australia’s ODA commitment has increased, public interest in the aid program has correspondingly increased, and this will continue. Implementation of the IPS will provide more information to Australians about AusAID’s activities and help increase public participation in, understanding of, and scrutiny of Australia’s aid program.

This draft plan has been prepared to assist AusAID to implement the IPS, in accordance with section 8(1) of the Freedom of Information Act 1982, and to give the Australian public the opportunity to comment and provide feedback on the plan.

As AusAID’s final plan is implemented it will be progressively updated in light of experience and feedback. The list of documents that is a core part of this plan will, in particular, be amended.”

The consultation: Visit this AusAID website to see how to participate and to read the views of others who have already contributed.


Towards a Plurality of Methods in Project Evaluation: A Contextualised Approach to Understanding Impact Trajectories and Efficacy

Michael Woolcock, January 2009, BWPI Working Paper 73

Abstract
“Understanding the efficacy of development projects requires not only a plausible counterfactual, but also an appropriate match between the shape of the impact trajectory over time and the deployment of a corresponding array of research tools capable of empirically discerning such a trajectory. At present, however, the development community knows very little, other than by implicit assumption, about the expected shape of the impact trajectory from any given sector or project type, and as such is prone to routinely making attribution errors. Randomisation per se does not solve this problem. The sources and manifestations of these problems are considered, along with some constructive suggestions for responding to them.”

Michael Woolcock is Professor of Social Science and Development Policy, and Research Director of the Brooks World Poverty Institute, at the University of Manchester.

[RD Comment: Well worth reading, more than once]

PS: See also the more recent “Guest Post: Michael Woolcock on The Importance of Time and Trajectories in Understanding Project Effectiveness” on the Development Impact blog, 5th May 2011

Impact Evaluation in Practice

Paul J. Gertler, Sebastian Martinez, Patrick Premand, Laura B. Rawlings, Christel M. J. Vermeersch, World Bank, 2011

Impact Evaluation in Practice is available as downloadable pdf, and can be bought online.

“Impact Evaluation in Practice presents a non-technical overview of how to design and use impact evaluation to build more effective programs to alleviate poverty and improve people’s lives. Aimed at policymakers, project managers and development practitioners, the book offers experts and non-experts alike a review of why impact evaluations are important and how they are designed and implemented. The goal is to strengthen the ability of policymakers and practitioners to use impact evaluations to make policy decisions based on evidence of what works most effectively.

The book is accompanied by a set of training materials – including videos and PowerPoint presentations – developed for the “Turning Promises to Evidence” workshop series of the Office of the Chief Economist for Human Development. It is a reference and self-learning tool for policymakers interested in using impact evaluations, and was developed to serve as a manual for introductory courses on impact evaluation as well as a teaching resource for trainers in academic and policy circles.

CONTENTS
PART ONE. INTRODUCTION TO IMPACT EVALUATION
Chapter 1. Why Evaluate?
Chapter 2. Determining Evaluation Questions
PART TWO. HOW TO EVALUATE
Chapter 3. Causal Inference and Counterfactuals
Chapter 4. Randomized Selection Methods
Chapter 5. Regression Discontinuity Design
Chapter 6. Difference-in-Differences
Chapter 7. Matching
Chapter 8. Combining Methods
Chapter 9. Evaluating Multifaceted Programs
PART THREE. HOW TO IMPLEMENT AN IMPACT EVALUATION
Chapter 10. Operationalizing the Impact Evaluation Design
Chapter 11. Choosing the Sample
Chapter 12. Collecting Data
Chapter 13. Producing and Disseminating Findings
Chapter 14. Conclusion”

Evaluation Revisited – Improving the Quality of Evaluative Practice by Embracing Complexity

Utrecht Conference Report. Irene Guijt, Jan Brouwers, Cecile Kusters, Ester Prins and Bayaz Zeynalova. March 2011. Available as pdf

This report summarises the outline and outputs of the conference ‘Evaluation Revisited: Improving the Quality of Evaluative Practice by Embracing Complexity’, which took place on May 20–21, 2010. It also adds insights and observations related to the themes of the conference that emerged in presentations about the conference at specific events.

Contents (109 pages):

1 What is Contested and What is at Stake
1.1 Trends at Loggerheads
1.2 What is at Stake?
1.3 About the May Conference
1.4 About the Report
2 Four Concepts Central to the Conference
2.1 Rigour
2.2 Values
2.3 Standards
2.4 Complexity
3 Three Questions and Three Strategies for Change
3.1 What does ‘evaluative practice that embraces complexity’ mean in practice?
3.2 Trade-offs and their Consequences
3.3 (Re)legitimise Choice for Complexity
4 The Conference Process in a Nutshell
