Collective Impact

by John Kania and Mark Kramer, Stanford Social Innovation Review, Winter 2011. Available online and as a PDF.

The same work has also been the subject of a New York Times article, “Coming Together to Give Schools a Boost” by David Bornstein, March 7, 2011. Further material is available on the website of FSG, the consultancy involved in the process.

Excerpts:

“… Large-scale social change requires broad cross-sector coordination, yet the social sector remains focused on the isolated intervention of individual organizations.”

“The social sector is filled with examples of partnerships, networks, and other types of joint efforts. But collective impact initiatives are distinctly different.”

“Shifting from isolated impact to collective impact is not merely a matter of encouraging more collaboration or public-private partnerships. It requires a systemic approach to social impact that focuses on the relationships between organizations and the progress toward shared objectives. And it requires the creation of a new set of nonprofit management organizations that have the skills and resources to assemble and coordinate the specific elements necessary for collective action to succeed.”

“…Our research shows that successful collective impact initiatives typically have five conditions that together produce true alignment and lead to powerful results: a common agenda, shared measurement systems, mutually reinforcing activities, continuous communication, and backbone support organizations.”

SLEvA 2011 International Conference in Colombo

Date: 8-9 June
Venue: Colombo, Sri Lanka
The Sri Lanka Evaluation Association (SLEvA) will hold the SLEvA 2011 International Conference in Colombo, Sri Lanka on 8-9 June. The conference will be preceded by pre-conference professional development workshops conducted by leading professionals in the field, as well as by organisations working to build the evaluation field in South Asia, such as the Community of Evaluators, members of the Consortium of Academic Institutions for Teaching Evaluation in South Asia (TESA), and the International Organisation for Cooperation in Evaluation.

The conference is expected to bring together around 150 evaluation professionals, academics and members of regional and global evaluation associations. It will provide an opportunity to share knowledge and ideas with practitioners in evaluation and to learn of initiatives in South Asia.

The overall theme of the conference will be ‘Evaluation for Policy and Action’, with the following subthemes:

  • Evaluation for influencing policy and policy evaluation
  • Evaluation for supporting development programmes
  • Evaluation in disaster reduction and management
  • Evaluating networks and partnerships
  • Building the evaluation field
  • Evaluation methodologies and approaches
  • Other evaluation issues

The conference website is at www.sleva.lk

SLEvA invites paper abstracts, proposals for panel discussions, exhibits and displays, and networking events, and welcomes ideas for sharing information and learning.

Critical Study Of The Logical Framework Approach In The Basque Country

By ECODE, Bilbao, March 2011

Full text available in Spanish and in English

BACKGROUND AND PRESENTATION

“Since 1999 ECODE has been working on supporting the management of Development Cooperation interventions in the Basque Autonomous Community (BAC) by the multiple agents that are involved in this sector, including public administrations, Development NGOs and organisations in the South. During this time it has had the chance to strengthen the use of the Logical Framework Approach by all these entities, and to observe its effects (positive and on occasions not so positive) on the planning and management of the interventions based on it.

ECODE has also collaborated with the Basque Government’s Head Office of Development Cooperation on the development of its presentation and justification forms for its interventions (Projects, Programmes and Humanitarian Action for Development Cooperation) carried out by Development NGOs. In addition, it has worked with a number of Basque entities providing similar services. As a result, we have been able to see the multiple models there are for planning and formulating interventions, eminently based on the Logical Framework, and the consequences that this has for Development NGOs when applying for and justifying subsidies before various entities for their Development Cooperation interventions.”

Beyond the Millennium Development Goals (MDGs): Changing norms and meeting targets

Sussex Development Lecture, 10 March, by David Hulme

Text summary below is from IDS. A PowerPoint version of the same argument is also available from IDS.

[RD’s Comments can be found at the end of the text summary]

“David Hulme examined the impact of the MDGs, arguing that their replacement could not be driven by results alone. Building political will and public consensus is crucial if we are to effectively tackle world poverty.

The story so far

David set out a short history of the MDGs, articulating how their creation was brought about by a drive to strengthen the credibility of the UN and promote reform. The MDGs represented a blueprint designed to improve planning and financing, and a desire to demonstrate value for money. David explained how he thought that the impact of the MDGs was limited, with too great a focus on results and targets.

Changing norms, ending poverty

David argued that we need to look beyond results and targets. We need to achieve a cultural shift that results in a strong belief amongst politicians and the public that ending extreme global poverty is a moral imperative.

The tipping point

David went on to explain that to change norms we rely on ‘norm entrepreneurs’ such as Jim Grant, Nafis Sadik or Clare Short. They challenged some of the ingrained notions within international development in the late 1990s and were instrumental in changing the agenda to place greater emphasis on human development and gender equality.

Alongside norm entrepreneurs there are message entrepreneurs who play a crucial role in building the necessary consensus to formulate policy. Such message entrepreneurs include James Michel, John Ruggie and Mark Malloch Brown.

The supernorm

David argued that the impact of the MDGs has been limited due to their length and relative complexity. In a world overloaded with information, messages need to be simple and tangible if we want to build support and consensus around them. He suggested that the MDGs should be succeeded by the supernorm that ‘extreme poverty is morally unacceptable’. It represents a tangible concept which the majority can understand and are likely to support.

David concluded that whatever replaces the MDGs needs to move beyond economic growth as the overriding driver, with a greater focus placed upon changing norms.”

[RD’s comment] One way of changing norms would be to move the focus of the monitoring process away from the change itself and towards the agent responsible for its delivery, i.e. principally governments. Years ago I did some work with the ILO on its International Programme on the Elimination of Child Labour, which received substantial funding from the US Department of Labor. I argued, unsuccessfully at the time, that the ILO should not be reporting on the number of children removed from child labour per se, but on the number of governments who had managed to reduce the incidence of child labour on some common yardstick. The ILO is not responsible for the reduction in child labour; country governments are where the responsibility lies. The ILO helps governments, and others, so its metrics should focus on changes in governments’ behaviour, especially the changes in the incidence of child labour that those governments are able to (publicly and accurately) report. [This could be described as a kind of meta-indicator]

More recently DFID has been coming out with similar misplaced targets, possibly in order to meet a perceived public need for simple messages about aid. In the UK Aid Review they claim: “To change lives and deliver results, we will:… Save the lives of 50,000 women in pregnancy and childbirth… Stop 250,000 newborn babies dying needlessly… etc.” This sort of message is enough to tear your hair out. It contradicts decades of investment in development education in the UK, and underestimates the intelligence of the UK public, most of whom know that it is governance problems that are at the heart of many countries’ failures to develop.

Other targets on the DFID list use the classic aid agency hedge of referring to “supporting”, “helping” or “providing” something, e.g. “Support 13 countries to hold freer and fairer elections”. Taken literally, these are input measures of performance, easy to achieve and as such of limited value. A better target would be something more straightforward, along the lines of “13 countries will have freer and fairer elections (as defined by…)”, or even “governments of 13 countries will ensure there are freer and fairer elections…” Yes, I do recognise that other parties as well as governments have responsibilities here, but it is governments which frame those possibilities. Indicators couched in these actor-centred terms would also be useful in other ways: they would be much easier for other agencies to buy into, and collectively work towards.

Further information [on David Hulme’s presentation]

Hulme, D. (2010) Global Poverty: How Global Governance is Failing the Poor (PDF). London: Routledge.

Fukuda-Parr, S. and Hulme, D. (2011) ‘International Norm Dynamics and the “End of Poverty”: Understanding the Millennium Development Goals’ (PDF), Global Governance, 17(1), pp. 17-36.

Hulme, D. and Scott, J. (2010) ‘The Political Economy of the MDGs: Retrospect and Prospect for the World’s Biggest Promise’ (PDF), New Political Economy, 15(2), pp. 293-306.

USAID Evaluation Policy

14 pages. Available as PDF. Bureau for Policy, Planning, and Learning, January 19th, 2011.

Contents: 1. Context; 2. Purposes of Evaluation; 3. Basic Organizational Roles and Responsibilities; 4. Evaluation Practices; 5. Evaluation Requirements; 6. Conclusion. Annex: Criteria to Ensure the Quality of the Evaluation Report

Value for money: A list

Hopefully, the start of a short but useful bibliography, listed in reverse chronological order.

Please suggest additional documents by using the Comment facility below. If you have ideas on how Value for Money can be clearly defined and usefully measured, please also use the Comment facility.

For the Editor’s own suggestion, go to the bottom of this page

2015

2014

2013

2012

2011

  • ICAI’s Approach to Effectiveness and Value for Money, November 2011. See also Rick Davies’ comments on the same.
  • Value for Money and international development: Deconstructing some myths to promote more constructive discussion. OECD Consultation Draft. October 2011
  • What does ‘value for money’ really mean? CAFOD, October 2011
  • Value for Money: Guideline, NZAID, updated July 2011
  • DFID’s Approach to Value for Money (VfM), July 2011
  • DFID Briefing Note: Indicators and VFM in Governance Programming July 2011.  INTRODUCTION: This note provides advice to DFID staff on: i. governance indicator best practice, and ii. measuring the Value for Money of governance programmes. This note is for use primarily by DFID governance advisers, as well as other DFID staff designing programmes with governance elements. The note provides a framework for consideration in Business Case design that relates to governance activity.  On Value for Money (VFM) in particular, this guidance is only intended as ‘interim’ whilst further research is undertaken. During 2011-2012, DFID will work to determine best practice and establish agreed approaches and mechanisms. This guidance will therefore be updated accordingly subject to research findings as they are made available.  This note was drawn up by DFID staff. It builds on 2 research reports by ITAD, submitted in December 2010 and January 2011 respectively, as well as DFID’s internal Business Case guidance. There are 2 main sections: Section 1: Governance Indicators and Section 2: Value for Money in Governance Programming. The note ends with 10 Top Tips on Business Case preparation.
  • DFID is developing “Guidance for DFID country offices on maximising VfM in cash transfer programmes”. July 2011. “Objective: To provide guidance to DFID country offices on measuring value for money in cash transfer programmes through the rigorous analysis of costs and benefits, as far as possible, at the design stage and through programme implementation and completion. This project is driven by DFID’s expansion of support to cash transfer programmes, its strong emphasis on ensuring programmes are delivering value for money, and strong country office demand for specific advice and guidance.” (ToRs)
  • Value for Money: Current Approaches and Evolving Debates. Antinoja Emmi, Eskiocak Ozlem, Kjennerud Maja, Rozenkopf Ilan, Schatz Florian, LSE, London, May 2011. 43 pages. “NGOs have increasingly been asked by donors to demonstrate their Value for Money (VfM). This report analyses this demand across a number of dimensions and intends to lay out the interpretation of different stakeholders. After contextualising the debate internationally and nationally, a conceptual discussion of possible ways of defining and measuring VfM is conducted, followed by a technical analysis of different approaches and measurement techniques adopted by stakeholders. Finally, opportunities and caveats of measuring VfM are discussed. The report draws heavily on information gained through a total of seventeen interviews with representatives of NGOs, consultancies, think tanks and academic institutions.”
  • Independent Commission for Aid Impact – Work Plan, May 2011: “We have not yet agreed our own definition of terms such as “value for money” and “aid effectiveness”. These are complex issues which are currently under much debate. In the case of value for money we believe that this should include long-term impact and effectiveness. We intend to commission our contractor to help us in our consideration of these matters.”
  • The Guardian, Madeleine Bunting, 11 April 2011: “Value for money is not compatible with increasing aid to ‘fragile states’. The two big ideas from the UK’s Department for International Development are destined for collision”
  • NAO report on DFID Financial Management, April 2011. See the concluding section of the Executive Summary, titled Conclusion on value for money:
    • “We recognise that the Department has been improving its core financial management and has also been strengthening its focus on value for money at all levels of the organisation, including through a step change in its approach to the strategic allocation of resources based on expected results. Important building blocks have been put in place, but key gaps in financial management maturity remain. The changes the Department has introduced to-date are positive, and provide a platform to address the challenges that will come with its increased spending.”
    • “At present, however, the Department’s financial management is not mature. The Department’s forecasting remains inaccurate and its risk management is not yet fully embedded. Weaknesses in the measurement of value for money at project level, variability in the quality and coverage of data, and lack of integration in core systems, mean that the Department cannot assess important aspects of value for money of the aid it has delivered, at an aggregated level. The Department now needs to develop a coherent single strategy to address the weaknesses identified and the key risks to meeting its objectives.”
  • DFID’s March 2011 Multilateral Aid Review “was commissioned to assess the value for money for UK aid of funding through multilateral organisations”. “All were assessed against the same set of criteria, interpreted flexibly to fit with their different circumstances, but always grounded in the best available evidence. Together the criteria capture the value for money for UK aid of the whole of each organisation. The methodology was independently validated and quality assured by two of the UK’s leading development experts. The assessment framework included criteria which relate directly to the focus and impact of an organisation on the UK’s development and humanitarian objectives – such as whether or not they are playing a critical role in line with their mandate, what this means in terms of results achieved on the ground, their focus on girls and women, their ability to work in fragile states, their attention to climate change and environmental sustainability, and their focus on poor countries. These criteria were grouped together into an index called ‘Contribution to UK development objectives’. The framework also included criteria which relate to the organisations’ behaviours and values that will drive the very best performance – such as transparency, whether or not cost and value consciousness and ambition for results are driving forces in the organisation, whether there are sound management and accountability systems, whether the organisations work well in partnership with others and whether or not financial resource management systems and instruments help to maximise impact. These were grouped together into an index called ‘Organisational strengths’. Value for money for UK aid was assessed on the basis of performance against both indices. So, for example, organisations with a strong overall performance against both indices were judged to offer very good value for money for UK aid, while those with a weak or unsatisfactory performance against both indices were deemed to offer poor value for money.”
    • [RD comment] In the methodology chapter the authors explain/claim that this approach is based on a 3E view that seeks to give attention to the whole “value for money chain” (née causal chain), from inputs to impacts (which is discussed below). Reading the rest of that chapter, I am not convinced; I think the connection is tenuous, and what exists here is a new interpretation of Value for Money that will not be widely used. That said, I don’t envy the task the authors of this report were faced with.
    • [RD comment] The Bilateral Aid Review makes copious references to Value for Money, but there is no substantive discussion of what it means anywhere in the review. Annex D includes a proposal format with a section for providing Value for Money information in 200 words. This includes the following fields, which are presumably explained elsewhere: qualitative judgement of VfM, VfM metrics (including cost-benefit measures), unit costs, scalability, comparators, and an overall VfM RAG rating: red/amber/green.
  • Aid effectiveness and value for money aid: complementary or divergent agendas as we head towards HLF-4 (March 2011). This ODI, ActionAid and UK Aid Network public event was called “to reflect on approaches dominating the debate in advance of the OECD’s 4th High Level Forum on Aid Effectiveness (HLF-4); explore the degree to which they represent complementary or divergent agendas; and discuss how they might combine to help ensure that HLF-4 is a turning point in the future impact of aid.” The presentations of three of the four speakers are available on this site. Unfortunately DFID’s presentation, by Liz Ditchburn, Director, Value for Money, DFID, is not available.
  • BOND Value for Money event (3 February 2011). “Bond hosted a half day workshop to explore this issue in more depth. This was an opportunity to take stock of the debates on Value for Money in the sector, to hear from organisations that have trialled approaches to Value for Money and to learn more about DFID’s interpretation of Value for Money from both technical and policy perspectives.” Presentations were made by (and are available from): Oxfam, VSO, WaterAid, the HIV/AIDS Alliance, and DFID (Jo Abbot, Deputy Head, Civil Society Department). There was also a prior BOND event on Value for Money in January 2011, and its presentations are also available, including an undated National Audit Office analytical framework for assessing Value for Money.
    • [RD Comment] The DFID presentation on “Value for Money and Civil Society” is notable for the way it seeks to discourage NGOs from over-investing effort in measuring Value for Money, and for its emphasis on the continuity of DFID’s approach to assessing CSO proposals. The explanation of Value for Money is brief, captured in two statements: “optimal use of resources to get desired outcomes” and “maximum benefit for the resources requested”. To me this reads as efficiency and cost-effectiveness.
  • The Independent Commission for Aid Impact (ICAI)’s January 2011 online consultation contrasts Value for Money reviews with Evaluations, Reviews and Investigations, as follows:
    • Value for money reviews: judgements on whether value for money has been secured in the area under examination. Value for money reviews will focus on the use of resources for development interventions.
    • Evaluations: the systematic and objective assessment of an on-going or complete development intervention, its design, implementation and results. Evaluations will focus on the outcome of development interventions.
    • Reviews: assessments of the performance of an intervention, periodically or on an ad hoc basis. Reviews tend to look at operational aspects and focus on the effectiveness of the processes used for development interventions.
    • Investigations: a formal inquiry focusing on issues around fraud and corruption.
      • [RD comment] The ICAI seems to take a narrower view than the National Audit Office, focusing on economy and efficiency and leaving out effectiveness – which within its perspective would be covered by evaluations.

 

2010

  • Measuring the Impact and Value for Money of Governance & Conflict Programmes Final Report December 2010 by Chris Barnett, Julian Barr, Angela Christie,  Belinda Duff, and Shaun Hext. “The specific objective stated for our work on value for money (VFM) in the Terms of Reference was: “To set out how value for money can best be measured in governance and conflict programming, and whether the suggested indicators have a role in this or not”. This objective was taken to involve three core tasks: first, developing a value for money approach that applies to both the full spectrum of governance programmes, and those programmes undertaken in conflict-affected and failed or failing states; second, that the role of a set of suggested indicators should be explored and examined for their utility in this approach, and, further, that existing value for money frameworks (such as the National Audit Office’s use of the 3Es of ‘economy, efficiency and effectiveness’) should be incorporated, as outlined in the Terms of Reference.”
  • Value for Money: How are other donors approaching ‘value for money’ in their aid programming? Question and answer on the Governance and Social Development Resource Centre Help Desk, 17 September 2010.
  • Value for Money (VfM) in International Development NEF Consulting Discussion Paper, September 2010. Some selective quotes: “While the HM Treasury Guidance provides principles for VfM assessments, there is currently limited guidance on how to operationalise these in the international development sector or public sector more generally. This has led to confusion about how VfM assessments should be carried out and seen the proliferation of a number of different approaches.” …”The HM Treasury guidance should inform the VfM framework of any publicly-funded NGO in the development sector. The dark blue arrow in Figure 1 shows the key relationship that needs to be assessed to determine VfM. In short, this defines VfM as: VfM = value of positive + negative outcomes / investment (or cost)”
  • [RD Comment:] Well now, having that formula makes it so much easier (not), all we have to do is find the top values, add them up, then divide by the bottom value :-(
  • What is Value for Money? (July 2010) by the Improvement Network (Audit Commission, Chartered Institute of Public Finance and Accountancy (CIPFA), Improvement and Development Agency (IDeA), Leadership Centre for Local Government, NHS Institute for Innovation and Improvement). “VfM is about achieving the right local balance between economy, efficiency and effectiveness, the 3Es – spending less, spending well and spending wisely.” These three attributes are each related to different stages of aid delivery, from inputs to outcomes, via a diagram on that page.
  • [RD comment]: Reading this useful page raises two interesting questions. First, how does this framework relate to the OECD/DAC evaluation criteria? Is it displacing them, as far as DFID is concerned? It appears so, given its appearance in the Terms of Reference for the contractors who will do the evaluation work for the new Independent Commission for Aid Impact. Ironically, the Improvement Network makes the following comment about the third E (effectiveness), which suggests that the DAC criteria may be re-emerging within this new framework: “Outcomes should be equitable across communities, so effectiveness measures should include aspects of equity, as well as quality. Sustainability is also an increasingly important aspect of effectiveness.” The second interesting question is how Value for Money is measured in aggregate, taking into account all three Es. Part of the challenge is with effectiveness, where it is noted that effectiveness “is a measure of the impact that has been achieved, which can be either quantitative or qualitative.” Then there is the notion that Value for Money is about a “balance” of the three Es: “VfM is high when there is an optimum balance between all three elements – when costs are relatively low, productivity is high and successful outcomes have been achieved.” On the route to that heaven there are multiple possible combinations of states of economy (+,-), efficiency (+,-) and effectiveness (+,-). There is no one desired route or ranking. Because of these difficulties Sod’s Law will probably apply and attention will focus on what is easiest to measure, i.e. economy or, at the most, efficiency. This approach seems to be evident in earlier government statements about DFID: “International Development Minister Gareth Thomas yesterday called for a push on value for money in the UN system with a target of 25% efficiency savings.” … “The UK is holding to its aid commitments of 0.7% of GNI. But for the past five years we have been expected to cut 5% from our administration or staffing costs across Government. 5% – year on year.”

 

2007

 

2003

 

The Editor’s suggestion

1. Don’t seek to create an absolute measure of the Value for Money of a single activity/project/program/intervention.

2. Instead, create a relative measure of the VfM found within a portfolio of activities, by using a rank correlation. [This measure can then be used to compare VfM across different types of portfolios.]

  • 1. Rank the entities (activities/projects…) by the cost of their inputs, and
    • Be transparent about which costs were included/excluded (e.g. partners’ own costs, other donor contributions, etc.)
  • 2. Rank the same set of entities by their perceived effectiveness or impact (depending on the time span of interest)
    • Ideally this ranking would be done through a participatory ranking process (see Refs below), and information would be available on the stakeholders who were involved
    • Where multiple stakeholder groups were consulted, any aggregation of their rankings would be done using transparent weighting values and information would also be available on the Standard Deviation of the rankings given to the different entities. There is likely to be more agreement across stakeholders on some rankings than others.
    • Supplementary information would be available detailing how stakeholders explained their rankings. This is best elicited through pair comparisons of adjacent sets of ranked entities.
      • That explanation is likely to include a mix of:
        • some kinds of impacts being more valued by the stakeholders than others, and
        • for a given type of impact there being evidence of more rather than less of that kind of impact, and
        • where a given impact is on the same scale, there being better evidence of that impact
  • 3. Calculate the rank correlation between the two sets of rankings. The results will range between these two extremities:
    • A high positive correlation (e.g. +0.90): here the highest impact is associated with the highest cost ranking, and the lowest impact with the lowest cost ranking. Results are proportionate to investments. This would be the preferred finding, compared to
    • A high negative correlation (e.g. -0.90): here the highest impact is associated with the lowest cost ranking, and the lowest impact with the highest cost ranking. Here the more you increase your investment, the less you gain. This is the worst possible outcome.
    • In between will be correlations closer to zero, where there is no evident relationship between cost and impact ranking.
  • 4. Opportunities for improvement would be found by doing case studies of “outliers”, found when the two rankings are plotted against each other in a graph. Specifically:
    • Positive cases, whose rank position on cost is conspicuously lower than their rank position on impact.
    • Negative cases, whose rank position on impact is conspicuously lower than their rank position on cost.

PS: It would be important to disclose the number of entities that have been ranked. The more entities ranked, the more precise the rank correlation will be. However, the more entities there are to rank, the harder the task becomes for participants and the more likely they are to use tied ranks. A minimum of seven rankable entities would seem desirable.
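The steps above can be sketched in code. The following is a minimal illustration, not part of the original proposal: the project costs and impact scores are entirely hypothetical, ties are handled with average ranks, the Spearman coefficient is computed as the Pearson correlation of the two rank vectors, and “outliers” are flagged where an entity’s cost rank and impact rank diverge.

```python
# A minimal sketch of the portfolio-level VfM measure proposed above,
# using only the standard library. All costs and impact scores are
# invented purely for illustration.

def average_ranks(values):
    """Rank values (1 = lowest), giving tied values their average rank."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[indexed[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical portfolio of seven entities: input costs and
# participatory impact scores.
costs = [120, 450, 300, 80, 600, 220, 150]
impacts = [3, 9, 6, 8, 10, 5, 4]

rho = spearman(costs, impacts)  # about +0.64 for this invented data

# Outliers for case studies (step 4): entities whose cost rank and
# impact rank diverge most when the two rankings are plotted.
rank_cost = average_ranks(costs)
rank_impact = average_ranks(impacts)
outliers = [i for i in range(len(costs))
            if abs(rank_cost[i] - rank_impact[i]) >= 2]
```

In this invented data, entity 3 is flagged as the one “positive case”: cheapest on cost but fifth on impact. Note the correlation is undefined if all costs (or all impacts) are tied, and with only a handful of entities it will be imprecise.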

For more on participatory ranking methods see:

PS: There is a UNISTAT plugin for Excel that will produce rank correlations, plus much more.

The future of UK aid: Changing lives, delivering results: our plans to help the world’s poorest people

The results of two DFID reviews were made public on 1 March 2011 and are available on the DFID website.

See also:

Participatory Impact Assessment: A guide for practitioners

Andrew Catley, John Burns, Dawit Abebe and Omeno Suji, Feinstein International Center, Tufts University, 2008. Available as PDF.

“Purpose of this guide

The Feinstein International Center has been developing and adapting participatory approaches to measure the impact of livelihoods based interventions since the early nineties. Drawing upon this experience, this guide aims to provide practitioners with a broad framework for carrying out project level Participatory Impact Assessments (PIA) of livelihoods interventions in the humanitarian sector. Other than in some health, nutrition, and water interventions in which indicators of project performance should relate to international standards, for many interventions there are no ‘gold standards’ for measuring project impact. For example, the Sphere handbook has no clear standards for food security or livelihoods interventions. This guide aims to bridge this gap by outlining a tried and tested approach to measuring the impact of livelihoods projects. The guide does not attempt to provide a set of standards or indicators or blueprint for impact assessment, but a broad and flexible framework which can be adapted to different contexts and project interventions.

Consistent with this, the proposed framework does not aim to provide a rigid or detailed step-by-step formula, or set of tools to carry out project impact assessments, but describes an eight stage approach, and presents examples of tools which may be adapted to different contexts. One of the objectives of the guide is to demonstrate how PIA can be used to overcome some of the inherent weaknesses in conventional humanitarian monitoring, evaluation and impact assessment approaches, such as: the emphasis on measuring process as opposed to real impact, the emphasis on external as opposed to community-based indicators of impact, and how to overcome the issue of weak or non-existent baselines. The guide also aims to demonstrate and provide examples of how participatory methods can be used to overcome the challenge of attributing impact or change to actual project activities. The guide will also demonstrate how data collected from the systematic use of participatory tools can be presented numerically, and can give representative results and provide evidence-based data on project impact.

Objectives of the Guide

1. Provide a framework for assessing the impact of livelihoods interventions

2. Clarify the differences between measuring process and real impact

3. Demonstrate how PIA can be used to measure the impact of different projects in different contexts using community identified impact indicators

4. Demonstrate how participatory methods can be used to measure impact where no baseline data exists

5. Demonstrate how participatory methods can be used to attribute impact to a project

6. Demonstrate how qualitative data from participatory tools can be systematically”

Five challenges facing impact evaluation

PS 2018 02 23: The original NONIE Meeting 2011 website is no longer in existence. Use this reference, if needed: White, H. (2011) ‘Five challenges facing impact evaluation’, NONIE (http://nonie2011.org/?q=content/post-2).

“There has been enormous progress in impact evaluation of development interventions in the last five years. The 2006 CGD report When Will We Ever Learn? claimed that there was little rigorous evidence of what works in development. But there has been a huge surge in studies since then. By our count, there are over 800 completed and ongoing impact evaluations of socio-economic development interventions in low- and middle-income countries.

But this increase in numbers is just the start of the process of ‘improving lives through impact evaluation’, which was the sub-title of the CGD report and has become 3ie’s vision statement. Here are five major challenges facing the impact evaluation community:

1. Identify and strengthen processes to ensure that evidence is used in policy: studies are not an end in themselves, but a means to the end of better policy, programs and projects, and so better lives. At 3ie we are starting to document cases in which impact evaluations have, and have not, influenced policy to better understand how to go about this. DFID now requires evidence to be provided to justify providing support to new programs, an example which could be followed by other agencies.

2. Institutionalize impact evaluation: the development community is very prone to faddism. Impact evaluation could go the way of other fads and fall into disfavour. We need to demonstrate the usefulness of impact evaluation to help prevent this happening, hence my first point. But we also need to take steps to institutionalize the use of evidence in governments and development agencies. This step includes ensuring that ‘results’ are measured by impact, not outcome monitoring.

3. Improve evaluation designs to answer policy-relevant questions: quality impact evaluations embed the counterfactual analysis of attribution in a broader analysis of the causal chain, allowing an understanding of why interventions work, or not, and yielding policy relevant messages for better design and implementation. There have been steps in this direction, but researchers need better understanding of the approach and to genuinely embrace mixed methods in a meaningful way.

4. Make progress with small n impact evaluations: we all accept that we should be issues-led, not methods-led, and use the most appropriate method for the evaluation questions at hand. But the fact is that there is far more consensus on the evaluation of large n interventions, in which experimental and quasi-experimental approaches can be used, than there is about the approach to be used for small n interventions. If the call to base development spending on evidence of what works is to be heeded, then the development evaluation community needs to move to consensus on this point.

5. Expand knowledge and use of systematic reviews: single impact studies will also be subject to criticisms of weak external validity. Systematic reviews, which draw together evidence from all quality impact studies of a particular intervention in a rigorous manner, give stronger, more reliable, messages. There has been an escalation in the production of systematic reviews in development in the last year. The challenge is to ensure that these studies are policy relevant and used by policy makers.”

Eight lessons from three years working on transparency

Blog posting by Owen Barder
February 22nd, 2011

“I’ve spent the last three years working on aid transparency. As I’m moving on to a very exciting new role (watch this space for more details) this seems a good time to reflect on what I’ve learned in the last three years.

This is a self-indulgently long essay about the importance of aid transparency, and the priorities for how it should be achieved. Busy readers can just read the 8-point summary below. For a very clear and concise introduction to the importance of aid transparency, this video by my (former) colleagues at aidinfo is very good.

I’m going to talk in a separate post about the exciting progress that has been made towards a new system of aid transparency, which I believe builds on many of these lessons, and on the next steps for the transparency movement more generally.

The 8-point summary

There is apparently a law that every document in development must have an “Executive Summary”. (Not just a “summary”, mind. It has to be for executives.) So here are what I think are the eight most important things I’ve learned in the last three years about transparency in general, and aid transparency in particular:

1. To make a difference, transparency has to be citizen-centred not donor-centred. A citizen-centred transparency mechanism would allow citizens of developing countries to combine and use information from many different donor agencies; and provide aid information compatible with the classifications of their own country budget.

2. Today’s ways of publishing information serve the needs of the powerful, not citizens. Existing mechanisms for publishing aid information were designed by the powerful for the powerful. Until the aidinfo team started 3 years ago, nobody had ever done a systematic study of the information needs of all stakeholders, including citizens, parliamentarians and civil society, let alone thought about how those needs could be met. That’s why current systems meet only the needs of donors, and powerful parts of governments.

3. People in developing countries want transparency of execution not just allocation. There are important differences between the information requirements of people in donor countries and people in developing countries. Current systems for aid transparency focus mainly on transparency of aid allocation, because that is what donor country stakeholders are largely interested in, and not enough on transparency of spending execution, which is of primary interest to people in developing countries.

4. Show, don’t tell. The citizens of donor nations are increasingly sceptical of annual reports and press releases. In aid as in other public services they want to be able to see for themselves the detail of how their money is being used and what difference it is making. They increasingly expect to be actively involved in decisions, and they are less willing to delegate the decisions entirely to experts. Donor agencies – whether government agencies, international organisations or NGOs – will have to adapt rapidly to become platforms for citizen engagement.

5. Transparency of aid execution will drive out waste, bureaucracy and corruption. There is, unfortunately, quite a bit of waste, bureaucracy and corruption in the aid system. There is good evidence that this kind of waste is rapidly reduced when the flow of money is made transparent. Corruption and waste prosper in dark places. Transparency of planned future aid spending will also help to increase spending efficiency and value for money.

6. Social accountability could be Development 3.0. The results agenda in aid agencies is currently too top down and pays too little attention to the power of bottom up information from the intended beneficiaries of aid. Increased accountability to citizens may be the key to unlocking better service delivery, improved governance and faster development.

7. The burden of proof should be on those who advocate secrecy. We have published a compelling business case for greater transparency, with all the uncertainties this kind of analysis entails. So where is the business case for secrecy, which would be far harder to quantify or defend? Why is the (inevitable) uncertainty in this kind of analysis allowed to count against the case for transparency, when the same uncertainty would deal a much greater blow against the case for secrecy?

8. Give citizens of developing countries the benefit of the doubt. Transparency is necessary but not sufficient for more effective aid. But the fact that transparency alone will not solve every problem should not be an excuse for aid agencies to shirk their responsibilities to be transparent. Nor should we be too attentive to vested interests in the aid industry telling us that transparency is not enough. Citizens of developing countries will be more innovative and effective than some people give them credit for when we give them the information they need to hold the powerful to account.

That’s the summary. If any of that whets your appetite and you want the long version, read on.”
