Learners, practitioners and teachers: Handbook on monitoring, evaluating and managing knowledge for policy influence

Authors: Vanesa Weyrauch, Julia D'Agostino, Clara Richards
Date published: 11 February 2011, by CIPPEC. Available as a PDF.

Description: Evidence-based policy influence is a topic of growing interest to researchers, social organizations, experts, government officials, policy research institutes and universities. However, they all admit that the path from the production of a piece or body of research to a public policy is sinuous, fuzzy and forked. In this context, it is not surprising that the practice of monitoring and evaluation (M&E) of policy influence in Latin America is limited, and that knowledge management (KM) of the experiences of advocacy organizations in the region is also under-developed. Incorporating the monitoring, evaluation and management of knowledge into the daily practices of policy research institutes is well worth it. On the one hand, the use of these tools can be a smart strategy to enhance the impact of their research on public policy. On the other hand, it can help them strengthen their reputation and visibility, attracting more and better support from donors. In turn, the design of an M&E system and the start of a KM culture, if approached with a genuine interest in learning, can become a valuable source of knowledge and motivation for members of the organization. In short, these practices can help organizations better target their activities, decide where and how to invest resources, and formulate more realistic and accurate strategic plans. With this handbook CIPPEC aims to support organizations so that they can monitor and evaluate their interventions and develop systematic strategies for knowledge management. It includes accounts of previous experiences in these fields in Latin America, reflections on the most common challenges and opportunities, and concrete working tools. These contributions aim to pave the way for research to influence public policy in the region.

Using stories to increase sales at Pfizer

by Nigel Edwards, Strategic Communications Management, Vol. 15, Issue 2, Feb–March 2011, pages 30–33. Available from the Cognitive Edge website, and found via a tweet by David Snowden.

[RD comment: This article is about the collation, analysis and use of a large volume of qualitative data, and as such has relevance to aid organisations as well as companies. It talks about the integrated use of two sets of methods: anecdote circles as used by the consultancy Narrate, and SenseMaker software as used by Cognitive Edge. While there is no mention of other story-based methods, such as Most Significant Change (MSC), there are some connections. There are also connections with issues I have raised in the PAQI page on this website, which is all about the visualisation of qualitative data. I will explain.

The core of the Pfizer process was the collection of stories from a salesforce in 11 cities in six countries, within a two-week period, with a further two weeks to analyse and report back the results. Before then, the organisers identified a number of “signifiers” which could be applied to the stories. I would describe these as tags or categories, between one and four words long, that could be applied to the stories to signal what they were all about. These signifiers were developed as sets of choices offered in the form of polarities and triads. For example, one triad was “achieving the best vs respecting people vs making a difference”. A polarity was “worried vs excited”. In previous work by Cognitive Edge and LearningbyDesign in Kenya, the choice of which signifiers to apply to a story was in the hands of the story-teller, hence Cognitive Edge's use of the phrase self-signifiers. What appeared to be new in the Pfizer application was that, as each story was told by a member of an anecdote circle, it was signified not only by the story-teller but also by the other members of the same group. So, for the 200 stories collected from 94 sales representatives, they had 1,700 perspectives on those stories (so presumably about 8.5 people per group gave their choice of signifiers to each of the stories from that group).

I should backtrack at this stage. Self-signifiers are useful for two reasons. Firstly, they are a way by which the respondent can provide extra information, in effect meta-data, about what they have said in the story. Secondly, when stories can be given signifiers by multiple respondents from a commonly available set, this allows clusters of stories (i.e. those which share the same sets of signifiers) to be self-created and potentially identified. This is in contrast to external researchers reading the stories themselves and doing their own tagging and sorting, using NVivo or other means. The risk with this second approach is that the researcher prematurely imposes their own views on the data, before the data can “speak for themselves”. The self-signifying approach is a more participatory and bottom-up process, notwithstanding the fact that the set of signifiers being used may have been identified by the researchers in the first instance. PS: The more self-signifiers there are to choose from, the more likely it is that participants can find a specific combination of signifiers which best fits their view of their story. From my reading there were at least 18 signifiers available to be used, possibly more.

The connection to MSC: MSC is about the participatory collection, discussion and selection of stories of significant change. Not only are people asked to describe what they think has been the most significant change, but they are also asked to explain why they think so. And when groups of MSC stories are pooled and discussed, with a view to participants selecting the most significant change from amongst them all, the participants are asked to explain and separately document why they selected the chosen story. This is a process of self-signification. In some applications of MSC, participants are also asked to place the stories they have discussed into one or another category (called domains), which in most cases have been pre-identified by the organisers. This is another form of self-signifying. These two methods have advantages and disadvantages compared to the Pfizer approach. One limitation I have noticed with the explanations of story choices is that while discussions about the reasons for choosing one story over another can be very animated and in-depth, the subsequent documentation of those reasons is often very skimpy. Using a signifier tag or category description would be easier and might deliver more usable meta-data, even if participants themselves did not generate those signifiers. My concern, not substantiated, is that the task of assigning the signifiers might derail or diminish the discussion around story selection, which is so central to the MSC process.

Back to Pfizer. After the stories are collected along with their signifiers, the next step described in the Edwards paper is “looking at the overall patterns that emerged”. The text then goes on to describe the various findings and conclusions that were drawn, and how they were acted upon. This sequence reminds me of the cartoon that has a long, complex mathematical formula on a blackboard, with a bit of text in the middle of it all which says “then a miracle happens”. Remember, there were 200 stories with multiple signifiers applied to each story, by about eight participants. That is 1,700 different perspectives. That is a lot of data to look through and make sense of. Within this set I would expect to find many and varied clusters of stories that shared common sets of two or more signifiers. There are two ways of searching for these clusters. One is by intentional search, i.e. by searching for stories that were given both signifier x and signifier y, because they were of specific interest to Pfizer. This requires some prior theory, hypothesis or hunch to guide it, otherwise it would be a random search. A random search could take a very long time to find major clusters of stories, because the possibility space is absolutely huge. It doubles with every additional signifier (2, 4, 8, 16…), and there are further combinations because about eight participants are applying the signifiers to any one story (256 possible combinations of participants agreeing on any given signifier). Intentional search is fine, but we will only find what we are looking for.
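To get a feel for the scale of that possibility space, here is a minimal Python sketch of the arithmetic (my own illustration, not part of the Pfizer analysis; the figure of 18 signifiers comes from my reading of the article, as noted above):

```python
from itertools import combinations

def n_signifier_subsets(n_signifiers: int) -> int:
    """Number of non-empty subsets of signifiers.

    Each subset is a potential 'cluster definition' (stories sharing exactly
    those signifiers), so the search space doubles with every signifier added.
    """
    return 2 ** n_signifiers - 1

for n in (4, 8, 12, 18):
    print(n, "signifiers ->", n_signifier_subsets(n), "possible combinations")

# Even restricting an intentional search to pairs and triples of signifiers,
# as in "stories given both signifier x and signifier y", leaves a lot of
# separate queries to run one by one when 18 signifiers are on offer:
n = 18
pairs = len(list(combinations(range(n), 2)))    # 153
triples = len(list(combinations(range(n), 3)))  # 816
print(pairs, "pairs and", triples, "triples to check individually")
```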

The other approach is to use tools which automatically visualise the clusters of stories that exist. One of the tools Cognitive Edge uses for this purpose (and it is also used during data collection) is a triangle featuring three different signifiers, one in each corner (the triads mentioned above). Each story appears as a point within the triangle, representing the particular combination of the three attributes the story-teller felt applied to the story. When multiple stories are plotted within the triangle, clusters of stories commonly appear, and they can then be investigated. The limitation of this tool is that it only visualises clusters of three signifiers at a time, when in practice 18 or more were used in the Pfizer case. It is still going to be a slow way to search the space of all possible clusters of stories.
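For readers unfamiliar with these triad plots, the sketch below shows the general idea: a story's three-way weighting is projected as a single point inside a triangle. This is a generic ternary projection with invented data, not Cognitive Edge's own implementation.

```python
import matplotlib.pyplot as plt

def triad_to_xy(a: float, b: float, c: float) -> tuple[float, float]:
    """Convert a three-way weighting (a, b, c) into 2D coordinates inside a triangle.

    a, b and c are the relative weights given to the three corner signifiers;
    they are normalised so the point always lands inside the triangle.
    """
    total = a + b + c
    a, b, c = a / total, b / total, c / total
    # Corners: A at (0, 0), B at (1, 0), C at (0.5, sqrt(3)/2)
    return b + 0.5 * c, (3 ** 0.5 / 2) * c

# Invented weightings against one Pfizer-style triad, e.g.
# "achieving the best" / "respecting people" / "making a difference".
stories = [(0.7, 0.2, 0.1), (0.1, 0.1, 0.8), (0.15, 0.15, 0.7), (0.3, 0.3, 0.4)]
xs, ys = zip(*(triad_to_xy(*s) for s in stories))

plt.plot([0, 1, 0.5, 0], [0, 0, 3 ** 0.5 / 2, 0])  # triangle outline
plt.scatter(xs, ys)
plt.axis("equal")
plt.axis("off")
plt.title("Stories plotted inside a signifier triad (illustrative data)")
plt.show()
```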

There is another approach, which I have discussed with David Snowden. This involves viewing stories as being connected to each other in a network, by virtue of sharing two or more signifiers. Data consisting of a list of stories with associated signifiers can be relatively easily imported from Excel into Social Network Analysis software, such as Ucinet/NetDraw, and then visualised as a network. Links can be size-coded to show the relative number of signifiers any two connected stories share. More importantly, a filter can then be applied to automatically show only those stories connected by x or more shared signifiers. This is a much less labour-intensive way of searching huge possibility spaces. My assumption is that clusters of stories sharing many signifiers are likely to be more meaningful than those sharing fewer, because they are less likely to occur simply by random chance. And perhaps smaller clusters sharing many signifiers may be more meaningful than larger clusters sharing many signifiers (where the signifiers might be fuzzier and less specific in meaning). These assumptions could be tested.
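The same build-and-filter idea can be sketched outside Ucinet/NetDraw. The following is a minimal Python illustration using the networkx library and invented data, not the actual Pfizer workflow:

```python
import networkx as nx

# Invented data: each story mapped to the set of signifiers applied to it.
stories = {
    "story_01": {"worried", "making a difference", "customer focus"},
    "story_02": {"worried", "making a difference", "teamwork"},
    "story_03": {"excited", "teamwork", "customer focus"},
    "story_04": {"worried", "making a difference", "customer focus", "teamwork"},
}

# Build a story-to-story network: an edge wherever two stories share signifiers,
# weighted by how many signifiers they have in common.
G = nx.Graph()
G.add_nodes_from(stories)
names = list(stories)
for i, s1 in enumerate(names):
    for s2 in names[i + 1:]:
        shared = stories[s1] & stories[s2]
        if shared:
            G.add_edge(s1, s2, weight=len(shared), shared=sorted(shared))

# The filter step: keep only links where stories share at least `threshold`
# signifiers, so the densest (and arguably most meaningful) clusters stand out.
threshold = 3
for u, v, d in G.edges(data=True):
    if d["weight"] >= threshold:
        print(u, "--", v, "share", d["weight"], "signifiers:", d["shared"])
```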

To recapitulate: being able to efficiently explore large possibility spaces is important because such spaces arise from giving participants more rather than less choice of signifiers. Giving more choice means we are more likely to hear the participants' particular views, even though they are voiced through our constructs (the signifiers). And a larger number of signifiers means that any cluster of highly connected stories is more likely to be meaningful rather than random.

Social Network Analysis software has an additional relevance for the analysis of the Pfizer data set. Within the 1,700 different perspectives on the stories there will not only be a network of stories connected by shared signifiers. There will also be a network of participants, connected by their similar uses of those signifiers. There will be clusters of participants as well as clusters of stories. This social dimension, opened up by the participatory process used to apply the signifiers, was not touched upon by the Edwards paper, probably because of limitations of time and space. But it could be of great significance for Pfizer when working out how best to respond to the issues raised by the stories. Stories have owners, and different groups of owners will have different interests.
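One way to surface that second network is to treat the data as a two-mode (participant-by-signifier) structure and project it onto the participants. The sketch below again uses Python's networkx library with invented data, purely to illustrate the idea:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Invented two-mode data: which signifiers each participant applied,
# across all the stories they helped to signify.
usage = {
    "rep_A": {"worried", "customer focus"},
    "rep_B": {"worried", "customer focus", "teamwork"},
    "rep_C": {"excited", "making a difference"},
    "rep_D": {"excited", "making a difference", "teamwork"},
}

# Build the two-mode participant-signifier graph...
B = nx.Graph()
B.add_nodes_from(usage, bipartite="participant")
for rep, signifiers in usage.items():
    for s in signifiers:
        B.add_node(s, bipartite="signifier")
        B.add_edge(rep, s)

# ...then project it onto the participants: two participants are linked when
# they used the same signifiers, with the edge weight counting how many
# signifiers they have in common. Clusters in this network are groups of
# participants who see the stories in similar terms.
P = bipartite.weighted_projected_graph(B, list(usage))
for u, v, d in P.edges(data=True):
    print(u, "--", v, "signifiers in common:", d["weight"])
```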

Nature Editorial: To ensure their results are reproducible, analysts should show their workings.

See ‘Devil in the Details’, Nature, Volume 470, pages 305–306, 17 February 2011.

How many aid agencies could do the same, when their projects manage to deliver good results? Are there lessons to be learned here?

Article text:

As analysis of huge data sets with computers becomes an integral tool of research, how should researchers document and report their use of software? This question was brought to the fore when the release of e-mails stolen from climate scientists at the University of East Anglia in Norwich, UK, generated a media fuss in 2009, and has been widely discussed, including in this journal. The issue lies at the heart of scientific endeavour: how detailed an information trail should researchers leave so that others can reproduce their findings?

The question is perhaps most pressing in the field of genomics and sequence analysis. As biologists process larger and more complex data sets and publish only the results, some argue that the reporting of how those data were analysed is often insufficient.

Social assessment of conservation initiatives: A review of rapid methodologies

Kate Schreckenberg, Izabel Camargo, Katahdin Withnall, Colleen Corrigan, Phil Franks, Dilys Roe, Lea M. Scherl and Vanessa Richardson.
Published: May 2010 – IIED, London, 124 pages

Summary

“Areas of land and sea are increasingly being marked out for protection in response to various demands: to tackle biodiversity loss, to prevent deforestation as a climate change mitigation strategy, and to restore declining fisheries. Amongst those promoting biodiversity conservation, the impacts of protected areas on resident or neighbouring communities have generated much debate, and this debate is raging further as new protection schemes emerge, such as REDD.

Despite widely voiced concerns about some of the negative implications of protected areas, and growing pressures to ensure that they fulfil social as well as ecological objectives, no standard methods exist to assess social impacts. This report aims to provide some.

Some 30 tools and methods for assessing social impacts in protected areas and elsewhere are reviewed in this report, with a view to understanding how different researchers have tackled the various challenges associated with impact assessment. This experience is used to inform a framework for a standardised process that can guide the design of locally appropriate assessment methodologies. Such a standard process would facilitate robust, objective comparisons between sites as well as assisting in the task of addressing genuine concerns and enhancing potential benefits.”

Available as a PDF and as a printed hard copy.

Learning in Development

Olivier Serrat, Asian Development Bank, 2010

“Learning in Development tells the story of independent evaluation in ADB—from its early years to the expansion of activities under a broader mandate—points up the application of knowledge management to sense-making, and brings to light the contribution that knowledge audits can make to organizational learning. It identifies the 10 challenges that ADB must overcome to develop as a learning organization and specifies practicable next steps to conquer each. The messages of Learning in Development will echo outside ADB and appeal to the development community and people having interest in knowledge and learning.”


Joint Humanitarian Impact Evaluation: Report on consultations

Report for the Inter-Agency Working Group on Joint Humanitarian Impact Evaluation. Tony Beck, January 2011.

“Background and purpose

Since the Tsunami Evaluation Coalition there have been ongoing discussions concerning mainstreaming joint impact evaluation within the humanitarian system. With pressure to demonstrate that results are being achieved by humanitarian action, the question has arisen as to whether and how evaluations can take place that will assess joint impact. An Inter-Agency Working Group was established in November 2009 to manage and facilitate consultations on the potential of Joint Humanitarian Impact Evaluation (JHIE). It was agreed to hold a series of consultations between February and November 2010 to define feasible approaches to joint impact evaluation in humanitarian action, which might subsequently be piloted in one to two humanitarian contexts.

Consultations were held with a representative cross section of humanitarian actors: the affected population in 15 communities in Sudan, Bangladesh and Haiti, and local government and local NGOs in the same countries; with national government and international humanitarian actors in Haiti and Bangladesh; and with 67 international humanitarian actors, donors, and evaluators in New York, Rome, Geneva, London and Washington. This is perhaps the most systematic attempt to consult with the affected population during the design phase of a major evaluative exercise. This report details the results from the consultations.”

A guide to monitoring and evaluating policy influence

ODI Background Notes, February 2011. 12 pages
Author: Harry Jones
“This paper provides an overview of approaches to monitoring and evaluating policy influence and is intended as a guide, outlining challenges and approaches and suggested further reading.”

“Summary: Influencing policy is a central part of much international development work. Donor agencies, for example, must engage in policy dialogue if they channel funds through budget support, to try to ensure that their money is well-spent. Civil society organisations are moving from service delivery to advocacy in order to secure more sustainable, widespread change. And there is an increasing recognition that researchers need to engage with policy-makers if their work is to have wider public value.

Monitoring and evaluation (M&E), a central tool to manage interventions, improve practice and ensure accountability, is highly challenging in these contexts. Policy change is a highly complex process shaped by a multitude of interacting forces and actors. ‘Outright success’, in terms of achieving specific, hoped-for changes is rare, and the work that does influence policy is often unique and rarely repeated or replicated, with many incentives working against the sharing of ‘good practice’.

This paper provides an overview of approaches to monitoring and evaluating policy influence, based on an exploratory review of the literature and selected interviews with expert informants, as well as ongoing discussions and advisory projects for policy-makers and practitioners who also face the challenges of monitoring and evaluation. There are a number of lessons that can be learned, and tools that can be used, that provide workable solutions to these challenges. While there is a vast breadth of activities that aim to influence policy, and a great deal of variety in theory and practice according to each different area or type of organisation, there are also some clear similarities and common lessons.

Rather than providing a systematic review of practice, this paper is intended as a guide to the topic, outlining different challenges and approaches, with some suggestions for further reading.”

UK Independent Commission for Aid Impact (ICAI) – online consultation

ICAI website text:

“The Independent Commission for Aid Impact (ICAI) is the independent body responsible for the scrutiny of UK aid, focusing on delivery of value for money for the UK taxpayer, maximising the impact for recipients and ensuring effectiveness of the UK aid budget. ICAI reports to Parliament through the International Development Select Committee.

ICAI is currently running a consultation calling for members of the public to have their say on which areas of UK overseas aid they would like to see looked at. Responses to the consultation will assist ICAI to develop its work plan for the next three years. To respond to the consultation please visit www.independent.gov.uk/icai/consultation.” [where you will find an online survey with supporting background information on DFID]

“The deadline for the consultation is the 7th April 2011.

For enquiries about the ICAI consultation please contact Clare Robathan, Communications and Research Officer on 020 7023 6734, or c-robathan@icai.independent.gov.uk

RD comment: Regarding the online survey used for the consultation, this is by no means the best-designed online survey I have ever seen, but please make use of it. The survey is also available as a downloadable pdf.

The ICAI website has some basic problems. While there is a Contact Us page, there is no comment facility on any of the pages, as far as I can see. Nor is there a disclosure/transparency policy. You can ask for the results of the survey via the enquiries email address, but they could be made immediately available, because the website is using SurveyMonkey.com. Referring to the three newly appointed commissioners, the website says “The three Commissioners, Mark Foster, John Githongo and Diana Good are acknowledged leaders in their fields. Together they contribute a wealth of international experience in the private sector, in combating corruption and in development.” Yet, as far as I can see, none of the commissioners has any significant evaluation experience, even though they are responsible for contracting an organisation (or group of organisations) to do evaluation work on behalf of the ICAI. In doing so they will need to secure value for money, which requires assessing value as well as money spent. I think we should watch the performance of this commission quite carefully.

PS 15th February 2011: Visitors may be interested to read the ICAI Terms of Reference 2010 for the evaluation functions being contracted out by the ICAI, and the supporting documentation, the Independent Commission for Aid Impact-Presentation-for-pre-bid-meeting, made by DFID on 22 November 2010

The Evaluation of Storytelling as a Peace-building Methodology

Experiential Learning Paper No. 5
January 2011

www.irishpeacecentres.org

This paper is the record of an international workshop held in Derry in September 2010 on the evaluation of storytelling as a peace-building methodology. This was an important and timely initiative because there is currently no generally agreed method of evaluating storytelling, despite the significant sums of money invested in it, not least by the EU PEACE Programmes. It was in fact PEACE III funding that enabled this examination of the issue to take place. This support allowed us to match international experts in evaluation with experts in storytelling in a residential setting over two days. This mix proved incredibly rich and produced this report, which we believe is a substantial contribution to the field. It is an example of the reflective practice which is at the heart of IPC's integrated approach to peace-building and INCORE's focus on linking research with peace-building practice. Building on this and other initiatives, one of IPC's specific aims is to create a series of papers that reflect the issues being dealt with by practitioners.

Contents:
Foreword 4
Introduction 5
Presentations, Interviews and Discussions 13
Final Plenary Discussion 52
Conclusions:
a. What we have learned about storytelling 65
b. What we have learned about the evaluation of storytelling 69
c. What next? 73
Appendix 1: Reflection Notes from Small Discussion Groups 75
Appendix 2: How does storytelling work in violently divided societies? Questioning the link between storytelling and peace-building 112
Appendix 3: Workshop Programme 116
Appendix 4: Speaker Biographies 118
Appendix 5: Storytelling & Peace-building References and Resources 122

PS: Ken Bush has passed on this message:

Please find attached an updated copy of the Storytelling and Peacebuilding BIBLIOGRAPHY. Inclusion of web addresses makes it particularly useful.

INTRAC, PSO & PRIA Monitoring and Evaluation Conference

Monitoring and evaluation: new developments and challenges
Date: 14-16 June 2011
Venue: The Netherlands

This international conference will examine key elements and challenges confronting the evaluation of international development, including its funding, practice and future.

The main themes of the conference will include: governance and accountability; impact; M&E in complex contexts of social change; the M&E of advocacy; M&E of capacity building; programme evaluation in an era of results-based management; M&E of humanitarian programmes; the design of M&E systems; evaluating networks, including community-driven networks; and changing theories of change and how this relates to M&E methods and approaches.

Overview of conference

Call for M&E Case Studies

Case study abstracts (max. 500 words) are invited that relate to the conference themes above, with an emphasis on what has been done in practice. We will offer a competition for the best three cases and the authors will be invited early to the UK to work on their presentation for a plenary session. We will also identify a range of contributions for publication in Development in Practice.
Download the full case study guidelines, and submit your abstracts via email to Zoe Wilkinson.

Case studies abstracts deadline: 11 March 2011
