Collective Impact

by John Kania and Mark Kramer, Stanford Social Innovation Review, Winter 2011. Available online and as pdf

The same work has also been the subject of a New York Times article, “Coming Together to Give Schools a Boost” by David Bornstein, March 7, 2011. Further material is available on the website of FSG, a consultancy involved in the process.

Excerpts:

“… Large-scale social change requires broad cross-sector coordination, yet the social sector remains focused on the isolated intervention of individual organizations.”

“The social sector is filled with examples of partnerships, networks, and other types of joint efforts. But collective impact initiatives are distinctly different.”

“Shifting from isolated impact to collective impact is not merely a matter of encouraging more collaboration or public-private partnerships. It requires a systemic approach to social impact that focuses on the relationships between organizations and the progress toward shared objectives. And it requires the creation of a new set of nonprofit management organizations that have the skills and resources to assemble and coordinate the specific elements necessary for collective action to succeed.”

“…Our research shows that successful collective impact initiatives typically have five conditions that together produce true alignment and lead to powerful results: a common agenda, shared measurement systems, mutually reinforcing activities, continuous communication, and backbone support organizations.”

Critical Study Of The Logical Framework Approach In The Basque Country

By ECODE, Bilbao, March 2011

Full text available in Spanish and in English

BACKGROUND AND PRESENTATION

“Since 1999 ECODE has been working on supporting the management of Development Cooperation interventions in the Basque Autonomous Community (BAC) by the multiple agents that are involved in this sector, including public administrations, Development NGOs and organisations in the South. During this time it has had the chance to strengthen the use of the Logical Framework Approach by all these entities, including its effects (positive and on occasions not so positive) on the planning and management  of the interventions based on it.

ECODE has also collaborated with the Basque Government’s Head Office of Development Cooperation for the development of its presentation and justification forms for its interventions, Projects, Programmes and Humanitarian Action for Development Cooperation carried out by Development NGOs. In addition, it has worked with a number of Basque entities supporting similar services. As a result, we have been able to see the multiple models there are for planning and formulating interventions, eminently based on the Logical Framework, and the consequences that this has for Development NGOs when applying for and justifying subsidies before various entities for their Development Cooperation interventions.”

Beyond the Millennium Development Goals (MDGs): Changing norms and meeting targets

Sussex Development Lecture, 10 March, by David Hulme

Text summary below is from IDS. A PowerPoint version of the same argument is also available from IDS.

[RD’s Comments can be found at the end of the text summary]

“David Hulme examined the impact of the MDGs, arguing that their replacement could not be driven by results alone. Building political will and public consensus is crucial if we are to effectively tackle world poverty.

The story so far

David set out a short history of the MDGs, articulating how their creation was brought about by a drive to strengthen the credibility of the UN and promote reform. The MDGs represented a blueprint designed to improve planning and financing, and a desire to demonstrate value for money. David explained how he thought that the impact of the MDGs was limited, with too great a focus on results and targets.

Changing norms, ending poverty

David argued that we need to look beyond results and targets. We need to achieve a cultural shift that results in a strong belief amongst politicians and the public that ending extreme global poverty is a moral imperative.

The tipping point

David went on to explain that to change norms we rely on ‘norm entrepreneurs’ such as Jim Grant (PDF), Nafis Sadik (PDF) or Clare Short. They challenged some of the ingrained notions within international development in the late 1990s and were instrumental in changing the agenda to place greater emphasis on human development and gender equality.

Alongside norm entrepreneurs there are message entrepreneurs who play a crucial role in building the necessary consensus to formulate policy. Such message entrepreneurs include James Michel, John Ruggie and Mark Malloch Brown.

The supernorm

David argued that the impact of the MDGs has been limited due to their length and relative complexity. In a world overloaded with information, messages need to be simple and tangible if we want to build support and consensus around them. He suggested that the MDGs should be succeeded by the supernorm that extreme poverty is morally unacceptable. It represents a tangible concept which the majority can understand and are likely to support.

David concluded that whatever replaces the MDGs needs to move beyond economic growth as the overriding driver, with a greater focus placed upon changing norms.”

[RD’s comment] One way of changing norms would be to move the focus of the monitoring process away from the change itself, and towards the agent responsible for its delivery, i.e. principally governments. Years ago I did some work with the ILO re its International Programme for the Elimination of Child Labour, which received substantial funding from the US Department of Labor. I argued, unsuccessfully at the time, that the ILO should not be reporting on the number of children removed from child labour per se, but on the number of governments who had managed to reduce the incidence of child labour on some common yardstick. ILO is not responsible for the reduction in child labour; that responsibility lies with country governments. ILO helps governments, and others, so its metrics should focus on changes in governments’ behaviour, especially the changes in the incidence of child labour that those governments are able to (publicly and accurately) report. [This could be described as a kind of meta-indicator]

More recently DFID has been coming out with similar misplaced targets, possibly in order to meet a perceived public need for simple messages about aid. In the UK Aid Review they claim “To change lives and deliver results, we will:…Save the lives of 50,000 women in pregnancy and childbirth…Stop 250,000 newborn babies dying needlessly….etc”  This sort of message is enough to tear your hair out. It contradicts decades of investment in development education in the UK, and under-estimates the intelligence of the UK public, most of whom know that it is governance problems that are at the heart of many failures of countries to develop.

Other targets on the DFID list use the classic aid agency hedge of referring to “supporting”, “helping” or “providing” something, e.g. “Support 13 countries to hold freer and fairer elections”. Taken literally, these are input measures of performance, easy to achieve and as such of limited value. A better target would be something more straightforward, along the lines of “13 countries will have freer and fairer elections (as defined by…)”, or even “governments of 13 countries will ensure there are freer and fairer elections…”. Yes, I do recognise that other parties as well as governments have responsibilities here, but it is governments which frame those possibilities. Indicators couched in these actor-centred terms would also be useful in other ways: they would be much easier for other agencies to buy into, and collectively work towards.

Further information [on David Hulme’s presentation]

Hulme, D. (2010) Global Poverty: How Global Governance is Failing the Poor (PDF), London: Routledge.

Fukuda-Parr, S. and Hulme, D. (2011) ‘International Norm Dynamics and the “End of Poverty”: Understanding the Millennium Development Goals’ (PDF), Global Governance, 17(1), pp. 17-36.

Hulme, D. and Scott, J. (2010) ‘The Political Economy of the MDGs: Retrospect and Prospect for the World’s Biggest Promise’ (PDF), New Political Economy, 15(2), pp. 293-306.

USAID Evaluation Policy

14 pages. Available as pdf. Bureau for Policy, Planning, and Learning, January 19th, 2011

Contents: 1. Context; 2. Purposes of Evaluation; 3. Basic Organizational Roles and Responsibilities; 4. Evaluation Practices; 5. Evaluation Requirements; 6. Conclusion. Annex: Criteria to Ensure the Quality of the Evaluation Report

The future of UK aid: Changing lives, delivering results: our plans to help the world’s poorest people

The results of two DFID reviews made public on 1st March 2011, and available on the DFID website

See also:

Participatory Impact Assessment: A guide for practitioners

Andrew Catley, John Burns, Dawit Abebe and Omeno Suji, Feinstein International Center, Tufts University, 2008. Available as pdf

“Purpose of this guide

The Feinstein International Center has been developing and adapting participatory approaches to measure the impact of livelihoods based interventions since the early nineties. Drawing upon this experience, this guide aims to provide practitioners with a broad framework for carrying out project level Participatory Impact Assessments (PIA) of livelihoods interventions in the humanitarian sector. Other than in some health, nutrition, and water interventions in which indicators of project performance should relate to international standards, for many interventions there are no ‘gold standards’ for measuring project impact. For example, the Sphere handbook has no clear standards for food security or livelihoods interventions. This guide aims to bridge this gap by outlining a tried and tested approach to measuring the impact of livelihoods projects. The guide does not attempt to provide a set of standards or indicators or blueprint for impact assessment, but a broad and flexible framework which can be adapted to different contexts and project interventions.

Consistent with this, the proposed framework does not aim to provide a rigid or detailed step by step formula, or set of tools to carry out project impact assessments, but describes an eight stage approach, and presents examples of tools which may be adapted to different contexts. One of the objectives of the guide is to demonstrate how PIA can be used to overcome some of the inherent weaknesses in conventional humanitarian monitoring, evaluation and impact assessment approaches, such as: the emphasis on measuring process as opposed to real impact, the emphasis on external as opposed to community based indicators of impact, and how to overcome the issue of weak or non-existent baselines. The guide also aims to demonstrate and provide examples of how participatory methods can be used to overcome the challenge of attributing impact or change to actual project activities. The guide will also demonstrate how data collected from the systematic use of participatory tools can be presented numerically, and can give representative results and provide evidence based data on project impact.

Objectives of the Guide

1. Provide a framework for assessing the impact of livelihoods interventions

2. Clarify the differences between measuring process and real impact

3. Demonstrate how PIA can be used to measure the impact of different projects in different contexts using community identified impact indicators

4. Demonstrate how participatory methods can be used to measure impact where no baseline data exists

5. Demonstrate how participatory methods can be used to attribute impact to a project

6. Demonstrate how qualitative data from participatory tools can be systematically”

Five challenges facing impact evaluation

PS 2018 02 23: The original NONIE Meeting 2011 website is no longer in existence. Use this reference, if needed: White, H. (2011) ‘Five challenges facing impact evaluation’, NONIE (http://nonie2011.org/?q=content/post-2).

“There has been enormous progress in impact evaluation of development interventions in the last five years. The 2006 CGD report When Will We Ever Learn? claimed that there was little rigorous evidence of what works in development. But there has been a huge surge in studies since then. By our count, there are over 800 completed and on-going impact evaluations of socio-economic development interventions in low and middle-income countries.

But this increase in numbers is just the start of the process of ‘improving lives through impact evaluation’, which was the sub-title of the CGD report and has become 3ie’s vision statement. Here are five major challenges facing the impact evaluation community:

1. Identify and strengthen processes to ensure that evidence is used in policy: studies are not an end in themselves, but a means to the end of better policy, programs and projects, and so better lives. At 3ie we are starting to document cases in which impact evaluations have, and have not, influenced policy to better understand how to go about this. DFID now requires evidence to be provided to justify providing support to new programs, an example which could be followed by other agencies.

2. Institutionalize impact evaluation: the development community is very prone to faddism. Impact evaluation could go the way of other fads and fall into disfavour. We need to demonstrate the usefulness of impact evaluation to help prevent this happening, hence my first point. But we also need to take steps to institutionalize the use of evidence in governments and development agencies. This step includes ensuring that ‘results’ are measured by impact, not outcome monitoring.

3. Improve evaluation designs to answer policy-relevant questions: quality impact evaluations embed the counterfactual analysis of attribution in a broader analysis of the causal chain, allowing an understanding of why interventions work, or not, and yielding policy relevant messages for better design and implementation. There have been steps in this direction, but researchers need better understanding of the approach and to genuinely embrace mixed methods in a meaningful way.

4. Make progress with small n impact evaluations: we all accept that we should be issues-led, not methods-led, and use the most appropriate method for the evaluation questions at hand. But the fact is that there is far more consensus for the evaluation of large n interventions, in which experimental and quasi-experimental approaches can be used, than there is about the approach to be used for small n interventions. If the call to base development spending on evidence of what works is to be heeded, then the development evaluation community needs to move to consensus on this point.

5. Expand knowledge and use of systematic reviews: single impact studies will also be subject to criticisms of weak external validity. Systematic reviews, which draw together evidence from all quality impact studies of a particular intervention in a rigorous manner, give stronger, more reliable, messages. There has been an escalation in the production of systematic reviews in development in the last year. The challenge is to ensure that these studies are policy relevant and used by policy makers.”

Eight lessons from three years working on transparency

Blog posting by Owen Barder
February 22nd, 2011

“I’ve spent the last three years working on aid transparency. As I’m moving on to a very exciting new role (watch this space for more details) this seems a good time to reflect on what I’ve learned in the last three years.

This is a self-indulgently long essay about the importance of aid transparency, and the priorities for how it should be achieved. Busy readers can just read the 8-point summary below. For a very clear and concise introduction to the importance of aid transparency, this video by my (former) colleagues at aidinfo is very good.

I’m going to talk in a separate post about the exciting progress that has been made towards a new system of aid transparency, which I believe builds on many of these lessons, and on the next steps for the transparency movement more generally.

The 8-point summary

There is apparently a law that every document in development must have an “Executive Summary”. (Not just a “summary”, mind. It has to be for executives.) So here are what I think are the eight most important things I’ve learned in the last three years about transparency in general, and aid transparency in particular:

1. To make a difference, transparency has to be citizen-centred not donor-centred. A citizen-centred transparency mechanism would allow citizens of developing countries to combine and use information from many different donor agencies; and provide aid information compatible with the classifications of their own country budget.

2. Today’s ways of publishing information serve the needs of the powerful, not citizens. Existing mechanisms for publishing aid information were designed by the powerful for the powerful. Until the aidinfo team started 3 years ago, nobody had ever done a systematic study of the information needs of all stakeholders, including citizens, parliamentarians and civil society, let alone thought about how those needs could be met. That’s why current systems meet only the needs of donors, and powerful parts of governments.

3. People in developing countries want transparency of execution not just allocation. There are important differences between the information requirements of people in donor countries and people in developing countries. Current systems for aid transparency focus mainly on transparency of aid allocation, because that is what donor country stakeholders are largely interested in, and not enough on transparency of spending execution, which is of primary interest to people in developing countries.

4. Show, don’t tell. The citizens of donor nations are increasingly sceptical of annual reports and press releases. In aid as in other public services they want to be able to see for themselves the detail of how their money is being used and what difference it is making. They increasingly expect to be actively involved in decisions, and they are less willing to delegate the decisions entirely to experts. Donor agencies – whether government agencies, international organisations or NGOs – will have to adapt rapidly to become platforms for citizen engagement.

5. Transparency of aid execution will drive out waste, bureaucracy and corruption. There is, unfortunately, quite a bit of waste, bureaucracy and corruption in the aid system. There is good evidence that this kind of waste is rapidly reduced when the flow of money is made transparent. Corruption and waste prosper in dark places. Transparency of planned future aid spending will also help to increase spending efficiency and value for money.

6. Social accountability could be Development 3.0. The results agenda in aid agencies is currently too top down and pays too little attention to the power of bottom up information from the intended beneficiaries of aid. Increased accountability to citizens may be the key to unlocking better service delivery, improved governance and faster development.

7. The burden of proof should be on those who advocate secrecy. We have published a compelling business case for greater transparency, with all the uncertainties this kind of analysis entails. So where is the business case for secrecy, which would be far harder to quantify or defend? Why is the (inevitable) uncertainty in this kind of analysis allowed to count against the case for transparency, when the same uncertainty would deal a much greater blow against the case for secrecy?

8. Give citizens of developing countries the benefit of the doubt. Transparency is necessary but not sufficient for more effective aid. But the fact that transparency alone will not solve every problem should not be an excuse for aid agencies to shirk their responsibilities to be transparent. Nor should we be too attentive to vested interests in the aid industry telling us that transparency is not enough. Citizens of developing countries will be more innovative and effective than some people give them credit for when we give them the information they need to hold the powerful to account.

That’s the summary. If any of that whets your appetite and you want the long version, read on.”

Learners, practitioners and teachers: Handbook on monitoring, evaluating and managing knowledge for policy influence

Authors: Vanesa Weyrauch, Julia D’Agostino, Clara Richards
Date Published: 11 February 2011 By CIPPEC. Available as pdf

Description: Evidence-based policy influence is a topic of growing interest to researchers, social organizations, experts, government officials, policy research institutes and universities. However, they all admit that the path from the production of a piece or body of research to a public policy is sinuous, fuzzy and forked. In this context, it is not surprising that the practice of monitoring and evaluation (M&E) of policy influence in Latin America is limited, and that knowledge management (KM) of the experiences of advocacy organizations in the region is also underdeveloped. Incorporating the monitoring, evaluation and management of knowledge into the daily practices of policy research institutes is well worth the effort. On the one hand, the use of these tools can be a smart strategy to enhance the impact of their research on public policy. On the other hand, it can help them strengthen their reputation and visibility, attracting more and better support from donors. In turn, the design of an M&E system and the beginnings of a KM culture, if approached with a genuine interest in learning, can become a valuable source of motivation for members of the organization. In short, these practices can help organizations target their activities better, decide where and how to invest resources, and formulate more realistic and accurate strategic plans. With the publication of this handbook CIPPEC aims to support organizations in monitoring and evaluating their interventions and in developing systematic strategies for knowledge management. It includes accounts of previous experiences in these fields in Latin America, reflections on the most common challenges and opportunities, and concrete working tools. These contributions aim to pave the way for public policy research to have influence in the region.

Using stories to increase sales at Pfizer

by Nigel Edwards, Strategic Communications Management, Vol. 15, Issue 2, Feb-March 2011, pages 30-33. Available from the Cognitive Edge website, and found via a tweet by David Snowden

[RD comment] This article is about the collation, analysis and use of a large volume of qualitative data, and as such has relevance to aid organisations as well as companies. It talks about the integrated use of two sets of methods: anecdote circles as used by a consultancy, Narrate, and SenseMaker software as used by Cognitive Edge. While there is no mention of other story based methods, such as Most Significant Change (MSC), there are some connections. There are also connections with issues I have raised in the PAQI page on this website, which is all about the visualisation of qualitative data. I will explain.

The core of the Pfizer process was the collection of stories from a salesforce in 11 cities in six countries, within a two week period, with a further two weeks to analyse and report back the results. Before then, the organisers identified a number of “signifiers” which could be applied to the stories. I would describe these as tags or categories that could be applied to the stories, between one and four words long, to signal what they were all about. These signifiers were developed as sets of choices offered in the form of polarities and triads. For example, one triad was “achieving the best vs respecting people vs making a difference”. A polarity was “worried vs excited”. In previous work by Cognitive Edge and LearningbyDesign in Kenya the choice of which signifiers to apply to a story was in the hands of the story-teller, hence Cognitive Edge’s use of the phrase self-signifiers. What appeared to be new in the Pfizer application was that as each story was told by a member of an anecdote circle it was not only self-signified by the story teller, but also by the other members of the same group. So, for the 200 stories collected from 94 sales representatives they had 1,700 perspectives on those stories (so presumably about 8.5 people per group gave their choice of signifiers to each of the stories from that group).

I should back track at this stage. Self-signifiers are useful for two reasons. Firstly, because they are a way by which the respondent can provide extra information, in effect meta-data, about what they have said in the story. Secondly, when stories can be given signifiers by multiple respondents from a commonly available set, this allows clusters of stories to be self-created (i.e. being those which share the same sets of signifiers) and potentially identified, as in the simple sketch below. This is in contrast to external researchers reading the stories themselves, and doing their own tagging and sorting, using NVIVO or other means. The risk with this second approach is that the researcher prematurely imposes their own views on the data, before the data can “speak for themselves”. The self-signifying approach is a more participatory and bottom up process, notwithstanding the fact that the set of signifiers being used may have been identified by the researchers in the first instance. PS: The more self-signifiers there are to choose from, the more possible it will be that the participants can find a specific combination of signifiers which best fits their view of their story. From my reading there were at least 18 signifiers available to be used, possibly more.
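To make the clustering idea concrete, here is a minimal sketch in Python. The story names and signifiers are invented for illustration, and this is not how SenseMaker itself works internally; it simply shows how stories carrying identical sets of signifiers fall into the same “self-created” cluster without any researcher doing the coding.

```python
# Minimal sketch: group stories by the (identical) set of signifiers applied to them.
# Story names and signifiers are invented for illustration only.
from collections import defaultdict

stories = {
    "story_1": frozenset({"worried", "making a difference"}),
    "story_2": frozenset({"excited", "achieving the best"}),
    "story_3": frozenset({"worried", "making a difference"}),
    "story_4": frozenset({"excited", "respecting people"}),
}

clusters = defaultdict(list)
for story, signifiers in stories.items():
    clusters[signifiers].append(story)

for signifiers, members in clusters.items():
    print(sorted(signifiers), "->", members)
# e.g. ['making a difference', 'worried'] -> ['story_1', 'story_3']
```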

The connection to MSC: MSC is about the participatory collection, discussion and selection of stories of significant change. Not only are people asked to describe what they think has been the most significant change, but they are also asked to explain why they think so. And when groups of MSC stories are pooled and discussed, with a view to participants selecting the most significant change from amongst all these, the participants are asked to explain and separately document why they selected the selected story. This is a process of self-signification. In some applications of MSC participants are also asked to place the stories they have discussed into one or another category (called domains), which have in most cases been pre-identified by the organisers. This is another form of self-signifying. These two methods have advantages and disadvantages compared to the Pfizer approach. One limitation I have noticed with the explanations of story choices is that while such discussions around reasons for choosing one story versus another can be very animated and in-depth, the subsequent documentation of the reasons is often very skimpy. Using a signifier tag or category description would be easier and might deliver more usable meta-data – even if participants themselves did not generate those signifiers. My concern, not substantiated, is that the task of assigning the signifiers might derail or diminish the discussion around story selection, which is so central to the MSC process.

Back to Pfizer. After the stories are collected along with their signifiers, the next step described in the Edwards paper is “looking at the overall patterns that emerged”. The text then goes on to describe the various findings and conclusions that were drawn, and how they were acted upon. This sequence reminds me of the cartoon which has a long complex mathematical formula on a blackboard, with a bit of text in the middle of it all which says “then a miracle happens”. Remember, there were 200 stories with multiple signifiers applied to each story, by about 8 participants. That is 1,700 different perspectives. That is a lot of data to look through and make sense of. Within this set I would expect to find many and varied clusters of stories that shared common sets of two or more signifiers. There are two ways of searching for these clusters. One is by intentional search, i.e. by searching for stories that were given both signifier x and signifier y, because they were of specific interest to Pfizer. This requires some prior theory, hypothesis or hunch to guide it, otherwise it would be random search. A random search could take a very long time to find major clusters of stories, because the possibility space is absolutely huge. It doubles with every additional signifier (2, 4, 8, 16…), and there are multiple combinations of these signifiers because 8 participants are applying the signifiers (256 combinations of any combination of signifiers) to any one story. Intentional search is fine, but we will only find what we are looking for.
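A rough sketch of the arithmetic, in Python, may help show why the space is so large. The figures (at least 18 signifiers, about 8 raters per story) come from the text above; the calculation is only one possible reading of the “256 combinations” point, not a claim about how SenseMaker counts.

```python
# Rough illustration of the size of the possibility space described above.
# Figures are taken from the text; the interpretation is mine.
n_signifiers = 18   # at least 18 signifiers were available
n_raters = 8        # roughly 8 group members signified each story

# Distinct subsets of signifiers a single rater could apply to one story
# (this is the "doubles with every additional signifier" point: 2, 4, 8, 16...)
subsets_per_rater = 2 ** n_signifiers   # 262,144

# Distinct patterns of agreement/disagreement across the 8 raters on any one signifier subset
rater_patterns = 2 ** n_raters          # 256

print(subsets_per_rater, rater_patterns)
```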

The other approach is to use tools which automatically visualise the clusters of stories that exist. One of the tools Cognitive Edge uses for this purpose (and it is also used during data collection) is a triangle featuring a different signifier in each corner (the triads above). Each story will appear as a point within the triangle, representing the particular combination of the three attributes the story teller felt applied to the story. When multiple stories are plotted within the triangle, multiple clusters of stories commonly appear, and they can then be investigated. The limitation of this tool is that it only visualises clusters of three signifiers at a time, when in practice 18 or more were used in the Pfizer case. It is still going to be a slow way to search the space of all possible clusters of stories.
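For readers who want to see what this kind of triad plot looks like, here is a minimal Python/matplotlib sketch. The triad weights are invented for illustration and the real SenseMaker displays are more sophisticated; the point is only that each story’s three-way response becomes a single point inside a triangle.

```python
# Minimal sketch of a triad plot: each story's three-way weighting (summing to 1)
# is mapped to a point inside a triangle. Data is invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical triad weights: (achieving the best, respecting people, making a difference)
stories = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.3, 0.3, 0.4],
    [0.8, 0.1, 0.1],
])

# Corners of an equilateral triangle in 2D
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

# Barycentric weights -> 2D positions inside the triangle
points = stories @ corners

plt.plot(*np.vstack([corners, corners[:1]]).T, color="grey")  # triangle outline
plt.scatter(points[:, 0], points[:, 1])
plt.axis("off")
plt.show()
```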

There is another approach, which I have discussed with David Snowden. This involves viewing stories as being connected to each other in a network, by virtue of sharing two or more signifiers. Data consisting of a list of stories with associated signifiers can be relatively easily imported from Excel into Social Network Analysis software, such as Ucinet/NetDraw, and then visualised as a network. Links can be size coded to show the relative number of signifiers any two connected stories share. More importantly, a filter can then be applied to automatically show only those stories connected by x or more shared signifiers. This is a much less labour-intensive way of searching huge possibility spaces. My assumption is that clusters of stories sharing many signifiers are likely to be more meaningful than those sharing fewer, because they are less likely to occur simply by random chance. And perhaps… that smaller clusters sharing many signifiers may be more meaningful than larger clusters sharing many signifiers (where the signifiers might be fuzzier and less specific in meaning). These assumptions could be tested.
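The same story-network-and-filter idea can be sketched in a few lines of Python with the networkx library (Ucinet/NetDraw, as mentioned above, would do the equivalent through its own interface). The stories, signifiers and threshold below are invented for illustration.

```python
# Minimal sketch: build a network of stories linked by shared signifiers,
# then filter to keep only links with x or more shared signifiers.
# Stories and signifiers are invented for illustration only.
from itertools import combinations
import networkx as nx

stories = {
    "story_1": {"worried", "making a difference", "respecting people"},
    "story_2": {"excited", "making a difference"},
    "story_3": {"worried", "making a difference", "respecting people"},
    "story_4": {"excited", "achieving the best"},
}

G = nx.Graph()
G.add_nodes_from(stories)
for a, b in combinations(stories, 2):
    shared = len(stories[a] & stories[b])
    if shared:
        G.add_edge(a, b, weight=shared)   # link weight = number of shared signifiers

x = 2  # the filter threshold: show only links with x or more shared signifiers
strong = nx.Graph()
strong.add_edges_from((a, b, d) for a, b, d in G.edges(data=True) if d["weight"] >= x)
print(strong.edges(data=True))   # e.g. story_1 -- story_3 share 3 signifiers
```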

To recapitulate: being able to efficiently explore large possibility spaces is important because they arise from giving participants more rather than less choice of signifiers. Giving more choice means we are more likely to hear the participants’ particular views, even though they are voiced through our constructs (the signifiers). And a larger number of signifiers means that any cluster of highly connected stories is more likely to be meaningful rather than random.

Social Network Analysis software has an additional relevance for the analysis of the Pfizer data set. Within the 1,700 different perspectives on the stories there will not only be a network of stories connected by shared signifiers. There will also be a network of participants, connected by their shared similar uses of those signifiers. There will be clusters of participants as well as clusters of stories. This social dimension, opened up by the participatory process used to apply the signifiers, was not touched upon by the Edwards paper, probably because of limitations of time and space. But it could be of great significance for Pfizer when working out how to best respond to the issues raised by the stories. Stories have owners, and different groups of owners will have different interests.
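Again a minimal, hypothetical sketch of that second network: treat participants and signifiers as a two-mode network and project it onto the participants, so that two participants are linked by the number of signifiers they both used. Names and data are invented for illustration.

```python
# Minimal sketch: a two-mode (participant x signifier) network projected onto
# participants, linking people who used the same signifiers. Invented data.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
participants = ["p1", "p2", "p3"]
B.add_nodes_from(participants, bipartite=0)
B.add_edges_from([
    ("p1", "worried"), ("p1", "making a difference"),
    ("p2", "worried"), ("p2", "making a difference"),
    ("p3", "excited"),
])

# Two participants are linked, weighted by the number of signifiers they share
P = bipartite.weighted_projected_graph(B, participants)
print(P.edges(data=True))   # e.g. p1 -- p2 share 2 signifiers
```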
