Impact Evaluation for Development: Principles for Action

IE4D Group, January 2011. Available as pdf

“The authors of this paper come from a variety of perspectives. As scholars, practitioners, and commissioners of evaluation in development, research and philanthropy, our thematic interests, disciplines, geographic locale, and experiences may differ but we share a fundamental belief that evaluative knowledge has the potential to contribute to positive social change.

We know that the full potential of evaluation is not always (or even often) realized in international development and philanthropy. There are many reasons for this – some to do with a lack of capacity, some methodological, some due to power imbalances, and some the result of prevailing incentive structures. Evaluation, like development, needs to be an open and dynamic enterprise. Some of the current trends in evaluation, especially in impact evaluation in international development, limit unnecessarily the range of approaches to assessing the impact of development initiatives.

We believe that impact evaluation needs to draw from a diverse range of approaches if it is to be useful in a wide range of development contexts, rigorous, feasible, credible, and ethical.

Developed with support from the Rockefeller Foundation, this article is a contribution to ongoing global and regional discussions about ways of realizing the potential of impact evaluation to improve development, and to strengthening our commitment to work towards it.”

Patricia Rogers is Professor of Public Sector Evaluation at the Royal Melbourne Institute of Technology, Australia. Her work focuses on credible and useful evaluation methods, approaches and systems for complicated and complex programs and policies.
Sanjeev Khagram is a professor of public affairs and international studies at the University of Washington as well as the Lead Steward of Innovations for Scaling Impact (iScale).
David Bonbright is founder and Chief Executive of Keystone (U.K., U.S. and South Africa), which helps organizations develop new ways of planning, measuring and reporting social change. He has also worked for the Aga Khan Foundation, Ford Foundation and Ashoka.
Sarah Earl is Senior Program Specialist in the Evaluation Unit at the International Development Research Centre (Canada). Her interest is ensuring that evaluation and research realize their full potential to contribute to positive social change.
Fred Carden is Director of Evaluation at the International Development Research Centre (Canada). His particular expertise is in the development and adaptation of evaluation methodology for the evaluation of development research.
Zenda Ofir is an international evaluation specialist, past President of the African Evaluation Association (AfrEA), former board member of the American Evaluation Association and the NONIE Steering Committee, and evaluation advisor to a variety of international organizations.
Nancy MacPherson is the Managing Director for Evaluation at the Rockefeller Foundation based in New York. The Foundation’s Evaluation Office aims to strengthen evaluative practice in philanthropy and development by supporting rigorous, innovative and context appropriate approaches to evaluation and learning.

Commons Select Committee to Scrutinise DFID’s Annual Report & Resource Accounts

13 September 2011

“The International Development Committee is to conduct an inquiry into the Department for International Development’s Annual Report and Accounts 2010-11 and the Department’s Business Plan 2011-15.

Invitation to submit Written Evidence

The Committee will be considering

  • Changes since the election to DFID’s role, policies, priorities and procedures;
  • The implications of changes for management styles, structures, staffing competences and capacity to deliver; and
  • The overall impact on the efficiency, effectiveness and cost-effectiveness of DFID’s activities.

The Committee invites short written submissions from interested organisations and individuals, especially on the following areas: the implementation of the structural reform plan; the bilateral, multilateral and humanitarian reviews; DFID administration costs; expenditure on, and dissemination of, research; and the use of technical assistance and consultants.

The deadline for submitting written evidence is Monday 10 October 2011. A guide for written submissions to Select Committees may be found on the parliamentary website at: http://www.parliament.uk/commons/selcom/witguide.htm

FURTHER INFORMATION:
Committee Membership is as follows: Malcolm Bruce MP, Chair (Lib Dem, Gordon), Hugh Bayley MP (Lab, City of York), Richard Burden MP (Lab, Birmingham, Northfield), Sam Gyimah MP (Con, East Surrey), Richard Harrington MP (Con, Watford), Pauline Latham MP (Con, Mid Derbyshire), Jeremy Lefroy MP (Con, Stafford), Michael McCann MP (Lab, East Kilbride, Strathaven and Lesmahagow), Alison McGovern MP (Lab, Wirral South), Anas Sarwar MP (Lab, Glasgow Central), Chris White MP (Con, Warwick and Leamington).
Specific Committee Information: indcom@parliament.uk / 020 7219 1223/ 020 7219 1221
Media Information: daviesnick@parliament.uk / 020 7219 3297 Committee Website: www.parliament.uk/indcom

The Oslo Governance Forum – Governance Assessments for Social Accountability

Date: 3-5 October 2011
Venue: Oslo, Norway 

ABOUT THE OSLO GOVERNANCE FORUM

The Oslo Governance Forum (OGF) is an initiative of the Oslo Governance Centre and the Democratic Governance Group of UNDP. The Forum will facilitate exchange of innovative experiences, knowledge and policy options among international development practitioners, academic institutions, government representatives and civil society from the global south.

The Oslo Governance Forum will take place from 3 to 5 October 2011. The overarching focus is on governance assessments and their current and potential contribution to improving social accountability within developing countries. For the purposes of the OGF, social accountability has a wide meaning: it relates to the mechanisms and instruments used by communities, groups and ordinary people to make governments and their agents answerable and responsive in terms of the commitments they have made. Governance assessments are an increasingly important tool for monitoring whether governments are failing or succeeding in terms of their commitments in legislation, government policies and international law.

To date, much of the focus of the development community on governance assessments has been on the “supply side”, that is, improving the methodological aspects of an assessment and getting the right indicators. The OGF will focus on the “demand side”: examining, discussing and sharing experiences on how governance assessments are used by stakeholders as a basis for dialogue on governance deficits, as an instrument to monitor performance, and as an input for revising and correcting policies. One of the key elements of democratic governance and accountability is the empowerment of people and the fostering of demand- and people-driven accountability, as opposed to accountability to external actors such as donors.

The world is changing rapidly and never before has democratic governance and accountability been so visibly important on the global stage. The Arab Spring revolutions have shown that governments must take people’s calls for accountability and their rights to be governed democratically more seriously. These events have also added to the growing number of case studies that attest to the potential of social media and related technologies for mobilizing people for change.”

See the Forum home page for more information, as well as the Concept Note for the Forum.

Evaluation of Governance – A Study of the Government of India’s Outcome Budget

by Anand P. Gupta,  Economic Management Institute, New Delhi, India
in Journal of Development Effectiveness, 2:4, 566-573, December 2010.

[Found courtesy of Public Financial Management Blog]

“In 2005, the Government of India launched an apparently excellent initiative – the Outcome Budget – with the objective of changing the culture of measuring performance in terms of the amount of money spent against the budgeted allocations, to one of measuring performance in terms of the delivery of the outcomes that people are concerned with. This paper describes how the Outcome Budget was launched, articulates the theory of change underlying the Outcome Budget, presents a case study of the Outcome Budget of the Government of India’s Accelerated Power Development and Reforms Programme, and discusses the lessons that the Government of India may learn from its experience with the Outcome Budget.

The paper argues that the Outcome Budget has failed. This has happened because the assumptions of the theory of change underlying the Outcome Budget have not been satisfied. The failure of the Outcome Budget has extremely important lessons for the Independent Evaluation Office, which the Government of India has decided to set up. The paper articulates the theory of change underlying the Independent Evaluation Office. This theory assumes that policymakers in India currently demand rigorous impact evaluations of public interventions and will continue to demand such evaluations in future, not because they have to comply with any requirement but because they really want to know the answers to the impact evaluation questions of ‘what works, under what conditions does it work, for whom, what part of a given intervention works, and for how much?’, so that they may draw appropriate lessons from these answers and use these lessons while designing and implementing public interventions in future. However, given Indian public officials’ current culture, the Independent Evaluation Office may not make any visible difference in development effectiveness in India.

The paper, published in Journal of Development Effectiveness, Volume 2, Number 4 (December 2010), is amongst the Journal’s “most read” (downloaded) papers, and is currently on the free download list of most read papers.”

Measuring Impact on the Immeasurable? Methodological Challenges in Evaluating Democracy and Governance Aid

by Jennifer Gauck, University of Kent, Canterbury – Department of Politics, 2011. APSA 2011 Annual Meeting Paper. Available as pdf

Abstract:

“Recent debates over the quality, quantity and purpose of development aid have led to a renewed emphasis on whether, and in what circumstances, aid is effective in achieving development outcomes. A central component of determining aid effectiveness is the conduct of impact evaluations, which assess the changes that can be attributed to a particular project or program. While many impact evaluations use a mixed-methods design, there is a perception that randomized control trials (RCTs) are promoted as the “gold standard” in impact evaluation. This is because the randomization process minimizes selection bias, allowing for the key causal variables leading to the outcome to be more clearly identified. However, many development interventions cannot be evaluated via RCTs because the nature of the intervention does not allow for randomization with a control group or groups.”

“This paper will analyze the methodological challenges posed by aid projects whose impacts cannot be evaluated using randomized control trials, such as certain democracy and governance (D&G) interventions. It will begin with a discussion of the merits and drawbacks of cross-sectoral methods and techniques commonly used to assess impact across a variety of aid interventions, including RCTs, and how these methods typically combine in an evaluation to tell a persuasive causal story. This paper will then survey the methods different aid donors are using to evaluate the impact of projects that cannot be randomized, such as governance-strengthening programs aimed at a centralized public-sector institution. Case studies will be drawn from examples in Peru and Indonesia, among others. This paper will conclude by analyzing how current methodological emphases in political science can be applied to impact evaluation processes generally, and to D&G evaluations specifically.”

RD Comment: See also the 3ie webpage on Useful resources for impact evaluations in governance, which includes a list of relevant books, reports, papers, impact evaluations, systematic reviews, survey modules/tools and websites.

2012 European Evaluation Society Conference in Helsinki

Date: OCTOBER 1-5, 2012
Venue: HELSINKI, Finland

Conference website

EVALUATION IN THE NETWORKED SOCIETY: NEW CONCEPTS, NEW CHALLENGES, NEW SOLUTIONS

The Tenth Biennial Conference of the European Evaluation Society will be the international evaluation event of the year. It will be held in Helsinki, Finland during 3-5 October 2012 (pre-conference workshops 1- 2 October).

Evaluators are living in times of unprecedented challenge and opportunity. The networked information environment is inducing fundamental changes in culture, politics and society. Whereas the industrial society was reliant on centralised, hierarchical, high cost information systems, the networked society is characterised by decentralised, voluntary and cheap information exchange.

The advent of social networking without borders will have fundamental implications for evaluation agendas and methods. First, it will redefine the value and legitimacy of evaluation in global social accountability networks and accelerate the internationalisation of evaluation. Second, evaluation cultures, structures and processes will have to deal with the limitless quantity, speed and accessibility of information generated by new technologies – e.g. drawing useful meaning from huge databases, or assessing the validity of an exploding number of rating systems, league tables and the like – in ways consistent with democratic values of freedom of expression and protection of privacy.

The new information technologies offer new ways of making authority responsible and accountable as well as bringing real time citizen involvement and reliable information to bear on public policy making. What are the implications of an information economy that allows instant connectivity to thousands of program beneficiaries suddenly able to make their voices heard? Will the spread of mobile telephony to the weakest and most vulnerable members of society and the rising power of social networks act as evaluative and recuperative mechanisms or will they merely aggravate social instability? What are the risks of network capture by single or special interest groups and cooptation of evaluation?

The rise of the evaluation discipline is inextricably linked to the values central to any democratic society. How will these values be protected in a context where weak links and increasing inequalities have created new fissures in society? How will evaluation independence be protected against the pressures of vested interests intent on retaining control over the commanding heights of the society?

To help explore these and other issues relevant to the prospects of evaluation in Europe and beyond, the Conference will stimulate evaluators to share ideas, insights and opinions about a wide range of topics that will throw light on the future roles of evaluation in the networked society. The Conference will help draw evaluation lessons learnt in distinct sectors and regions of the world. It will also examine the potential of alternative and mixed evaluation methods in diverse contexts and probe the challenges of assessing public interest in complex adaptive systems and networks.

To these ends the Conference will offer participants a wide choice of vehicles for the transmission of evaluation experience and knowledge: keynote speeches, paper presentations, panel debates, posters, etc. As in past years, the EES Conference will aim at a pluralistic agenda that respects the legitimacy of different standpoints, illuminates diverse perspectives and promotes principled debate. The Conference will also provide an opportunity for evaluation networks to interact and improve the coherence of their activities.

We look forward to welcoming you to Helsinki. It is one of the world’s leaders in modern design and provides Europe with a world-class high-tech platform. It also boasts a 450-year history and lays claim to being the warmest, friendliest, most “laid back” city of Northern Europe. Its nearby archipelago of islands offers an ideal environment for sea cruises, and its neighboring old-growth forests provide an idyllic setting for restful nature walks. We promise you an enjoyable as well as a professionally rewarding time!

Ian Davies, President, European Evaluation Society
Maria Bustelo, Vice President and President Elect, European Evaluation Society

Is Australian Aid Fair Dinkum? A Forum On The Independent Review Of Aid Effectiveness

Venue: Old Parliament House, 18 King George Tce, Parkes ACT 2600, Canberra
Date: Tuesday, 13 September 2011 6:00 PM

Summary

“In a world where we have achieved so much, from quantum leaps in medical research to the development of sophisticated technologies, it seems implausible that there are more hungry people in the world today than the populations of the United States, Canada and the European Union combined.

But the picture isn’t all bleak. A recent report released by the United Nations reveals that we have made some significant progress in our bid to alleviate poverty around the world, and the Independent Review of Aid Effectiveness commissioned by the Australian Government has made some assessments and recommendations that could help guide progress in the future.

However, when it comes to the complex issue of poverty alleviation, there are no simple answers.

What are some of the challenges faced when it comes to ensuring that we are taking the smartest and most efficient approach to tackling poverty? What are the timeframes within which we can realistically expect change to happen? And are we doing enough to address structural and behavioural issues that perpetuate gender inequality and other forms of exploitation that continue the vicious cycle of poverty?

How much of a difference are we actually making?”

Speakers include:

  • James Batley – Deputy Director-General, Asia Pacific and Program Enabling Group, AusAID
  • Stephen Howes – Director, Development Policy Centre, ANU and member of Independent Aid Effectiveness Review panel
  • Dr Julia Newton-Howes – Chief Executive, CARE Australia
  • Nikunj Soni – Board Chair, Pacific Institute of Public Policy, Vanuatu

Registration and other information here

Developing a Monitoring and Evaluation Framework: A list

[Apologies: this page is still at the draft stage; there are some formatting and other problems]

A suggested definition of an M&E Framework:

A document that tells you who is expected to know what, as well as when and how they are expected to know.

The list (under development, suggestions welcomed):

Conference on ICT for Monitoring & Evaluation

Date: 18 October 2011
Venue: New Delhi, India

[found courtesy of Sarah Earl]

“Information and Communication Technology (ICT) is ubiquitous in every aspect of our lives. Gradually, ICT is acquiring a key role in the monitoring and evaluation (M&E) of development projects by reinforcing the efficacy of data management and processing. As ICT is a whole new world for many professionals in the development sector, it becomes important to seek synergies among leading development actors and practitioners for perspective building, dissemination of knowledge and furthering the use of ICT in monitoring, evaluation and information systems. How ICT influences the collective activities and interests of society in general has become an ineluctable question, one that could well be answered through the monitoring and evaluation process.”

“In this context a conference on ICT for monitoring and evaluation is being organized by Sambodhi Research & Communications on October 18, 2011 in New Delhi. The theme of the conference is “ICT for Monitoring, Evaluation, and Information System: Exploring New Frontiers.””

To participate in the conference, click the following links to download:

Conference Flyer
Registration form

For more information on the conference
Contact: Dr. Mary / Ms. Padmavati
Email: contact@sambodhi.co.in
Tel: 011 47593300-99

Evaluating the Complex: Attribution, Contribution and Beyond.

Kim Forss, Mita Marra and Robert Schwartz, editors. Transaction Publishers, New Brunswick. May 2011. Available via Amazon

“Problem-solving by policy initiative has come to stay. Overarching policy initiatives are now standard modus operandi for governmental and non-governmental organisations. But complex policy initiatives are not reserved only for the big challenges of our times; they are also used for matters such as school achievement, regional development, urban planning, and public health and safety. As policy and the ensuing implementation tend to be more complex than simple project and programme management, the task of evaluation has also become more complex.”

“The book begins with a theoretical and conceptual explanation of complexity and how it affects evaluation. The authors distinguish between, on the one hand, the common-sense understanding of complexity as something that is generally messy, involves many actors and has unclear boundaries and overlapping roles; and, on the other hand, complexity as a specific term from the systems sciences, which implies non-linear relationships between phenomena. It is particularly in the latter sense that an understanding of complexity has a bearing on evaluation design, in respect of how evaluators approach the question of impact.”

“The book presents nine case studies that cover a wide variety of policy initiatives: public health (smoking prevention), homelessness, child labour, regional development, international development cooperation, and the HIV/AIDS pandemic. The use of case studies sheds light on the conceptual ideas at work in organisations addressing some of the world’s largest and most varied problems.”

“The evaluation processes described here commonly seek a balance between order and chaos. The interaction of four elements – simplicity, inventiveness, flexibility, and specificity – allows complex patterns to emerge. The case studies illustrate this framework and provide a number of examples of the practical management of complexity in light of contingency theories of the evaluation process itself. These theories in turn match the complexity of the evaluated policies, strategies and programmes. The case studies do not pretend to illustrate perfect evaluation processes; the focus is on learning and on seeking patterns that have proved satisfactory and where the evaluation findings have been robust and trustworthy.”

“The contingency theory approach of the book underscores a point also made in the Foreword by Professor Elliot Stern: “In a world characterised by interdependence, emergent properties, unpredictable change, and indeterminate outcomes, how could evaluation be immune?” The answer lies in the choice of methods as much as in the overall strategy and approach of evaluation.”
