Nature Editorial: To ensure their results are reproducible, analysts should show their workings.

See “Devil in the Details”, Nature, Volume 470, Pages 305–306, 17 February 2011.

How many aid agencies could do the same when their projects manage to deliver good results? Are there lessons to be learned here?

Article text:

As analysis of huge data sets with computers becomes an integral tool of research, how should researchers document and report their use of software? This question was brought to the fore when the release of e-mails stolen from climate scientists at the University of East Anglia in Norwich, UK, generated a media fuss in 2009, and has been widely discussed, including in this journal. The issue lies at the heart of scientific endeavour: how detailed an information trail should researchers leave so that others can reproduce their findings?

The question is perhaps most pressing in the field of genomics and sequence analysis. As biologists process larger and more complex data sets and publish only the results, some argue that the reporting of how those data were analysed is often insufficient.

Social assessment of conservation initiatives: A review of rapid methodologies

Kate Schreckenberg, Izabel Camargo, Katahdin Withnall, Colleen Corrigan, Phil Franks, Dilys Roe, Lea M. Scherl and Vanessa Richardson.
Published: May 2010 – IIED, London, 124 pages

Summary

“Areas of land and sea are increasingly being marked out for protection in response to various demands: to tackle biodiversity loss, to prevent deforestation as a climate change mitigation strategy, and to restore declining fisheries. Amongst those promoting biodiversity conservation, the impacts of protected areas on resident or neighbouring communities have generated much debate, and this debate is raging further as new protection schemes emerge, such as REDD.

Despite widely voiced concerns about some of the negative implications of protected areas, and growing pressures to ensure that they fulfil social as well as ecological objectives, no standard methods exist to assess social impacts. This report aims to provide some.

Some 30 tools and methods for assessing social impacts in protected areas and elsewhere are reviewed in this report, with a view to understanding how different researchers have tackled the various challenges associated with impact assessment. This experience is used to inform a framework for a standardised process that can guide the design of locally appropriate assessment methodologies. Such a standard process would facilitate robust, objective comparisons between sites as well as assisting in the task of addressing genuine concerns and enhancing potential benefits.”

Available as a PDF and as a printed hard copy.

Learning in Development

Olivier Serrat, Asian Development Bank, 2010

“Learning in Development tells the story of independent evaluation in ADB—from its early years to the expansion of activities under a broader mandate—points up the application of knowledge management to sense-making, and brings to light the contribution that knowledge audits can make to organizational learning. It identifies the 10 challenges that ADB must overcome to develop as a learning organization and specifies practicable next steps to conquer each. The messages of Learning in Development will echo outside ADB and appeal to the development community and people having interest in knowledge and learning.”

Contents

Joint Humanitarian Impact Evaluation: Report on consultations

Report for the Inter-Agency Working Group on Joint Humanitarian Impact Evaluation. Tony Beck, January 2011

“Background and purpose

Since the Tsunami Evaluation Coalition there have been ongoing discussions concerning mainstreaming joint impact evaluation within the humanitarian system. With pressure to demonstrate that results are being achieved by humanitarian action, the question has arisen as to whether and how evaluations can take place that will assess joint impact. An Inter-Agency Working Group was established in November 2009 to manage and facilitate consultations on the potential of Joint Humanitarian Impact Evaluation (JHIE). It was agreed to hold a series of consultations between February and November 2010 to define feasible approaches to joint impact evaluation in humanitarian action, which might subsequently be piloted in one to two humanitarian contexts.

Consultations were held with a representative cross section of humanitarian actors: the affected population in 15 communities in Sudan, Bangladesh and Haiti, and local government and local NGOs in the same countries; with national government and international humanitarian actors in Haiti and Bangladesh; and with 67 international humanitarian actors, donors, and evaluators in New York, Rome, Geneva, London and Washington. This is perhaps the most systematic attempt to consult with the affected population during the design phase of a major evaluative exercise. This report details the results from the consultations.”

A guide to monitoring and evaluating policy influence

ODI Background Notes, February 2011. 12 pages
Author: Harry Jones
“This paper provides an overview of approaches to monitoring and evaluating policy influence and is intended as a guide, outlining challenges and approaches and suggested further reading.”

“Summary: Influencing policy is a central part of much international development work. Donor agencies, for example, must engage in policy dialogue if they channel funds through budget support, to try to ensure that their money is well-spent. Civil society organisations are moving from service delivery to advocacy in order to secure more sustainable, widespread change. And there is an increasing recognition that researchers need to engage with policy-makers if their work is to have wider public value.

Monitoring and evaluation (M&E), a central tool to manage interventions, improve practice and ensure accountability, is highly challenging in these contexts. Policy change is a highly complex process shaped by a multitude of interacting forces and actors. ‘Outright success’, in terms of achieving specific, hoped-for changes is rare, and the work that does influence policy is often unique and rarely repeated or replicated, with many incentives working against the sharing of ‘good practice’.

This paper provides an overview of approaches to monitoring and evaluating policy influence, based on an exploratory review of the literature and selected interviews with expert informants, as well as ongoing discussions and advisory projects for policy-makers and practitioners who also face the challenges of monitoring and evaluation. There are a number of lessons that can be learned, and tools that can be used, that provide workable solutions to these challenges. While there is a vast breadth of activities that aim to influence policy, and a great deal of variety in theory and practice according to each different area or type of organisation, there are also some clear similarities and common lessons.

Rather than providing a systematic review of practice, this paper is intended as a guide to the topic, outlining different challenges and approaches, with some suggestions for further reading.”

The Evaluation of Storytelling as a Peace-building Methodology

Experiential Learning Paper No. 5
January 2011

www.irishpeacecentres.org

This paper is the record of an international workshop held in Derry in September 2010 on the evaluation of storytelling as a peace-building methodology. This was an important and timely initiative because there is currently no generally agreed method of evaluating storytelling, despite the significant sums of money invested in it, not least by the EU PEACE Programmes. It was in fact PEACE III funding that enabled this examination of the issue to take place. This support allowed us to match international experts in evaluation with experts in storytelling in a residential setting over two days. This mix proved incredibly rich and produced this report, which we believe is a substantial contribution to the field. It is an example of the reflective practice which is at the heart of IPC’s integrated approach to peace-building and INCORE’s focus on linking research with peace-building practice. Building on this and other initiatives, one of IPC’s specific aims is to create a series of papers that reflect the issues being dealt with by practitioners.

Contents:
Foreword 4
Introduction 5
Presentations, Interviews and Discussions 13
Final Plenary Discussion 52
Conclusions:
a. What we have learned about storytelling 65
b. What we have learned about the evaluation of storytelling 69
c. What next? 73
Appendix 1: Reflection Notes from Small Discussion Groups 75
Appendix 2: How does storytelling work in violently divided societies? Questioning the link between storytelling and peace-building 112
Appendix 3: Workshop Programme 116
Appendix 4: Speaker Biographies 118
Appendix 5: Storytelling & Peace-building References and Resources 122

PS: Ken Bush has passed on this message:

Please find attached an updated copy of the Storytelling and Peacebuilding BIBLIOGRAPHY. The inclusion of web addresses makes it particularly useful.

INTRAC, PSO & PRIA Monitoring and Evaluation Conference

Monitoring and evaluation: new developments and challenges
Date: 14-16 June 2011
Venue: The Netherlands

This international conference will examine key elements and challenges confronting the evaluation of international development, including its funding, practice and future.

The main themes of the conference will include: governance and accountability; impact; M&E in complex contexts of social change; the M&E of advocacy; M&E of capacity building; programme evaluation in an era of results-based management; M&E of humanitarian programmes; the design of M&E systems; evaluating networks, including community-driven networks; and changing theories of change and how these relate to M&E methods and approaches.

Overview of conference

Call for M&E Case Studies

Case study abstracts (max. 500 words) are invited that relate to the conference themes above, with an emphasis on what has been done in practice. We will run a competition for the three best cases, and their authors will be invited to the UK early to work on their presentations for a plenary session. We will also identify a range of contributions for publication in Development in Practice.
Download the full case study guidelines, and submit your abstracts via email to Zoe Wilkinson.

Case studies abstracts deadline: 11 March 2011

Policy Practice Brief 6 – What Makes A Good Governance Indicator?

January 2011, Gareth Williams

http://www.thepolicypractice.com/papersdetails.asp?code=17

The rise to prominence of good governance as a key development concern has been marked by an increasing interest in measurement and the production of a huge range of governance indicators. When used carefully such indicators provide a valuable source of information on governance conditions and trends. However, when used carelessly they can misinform and mislead. The purpose of this brief is to make sense of the different types of governance indicator and how they are used and misused. It warns against the commission of ‘seven deadly sins’ representing the most common pitfalls. The paper puts forward guidelines to ensure a more careful use and interpretation of governance indicators, and highlights the need for providers of indicators to be subject to greater transparency, scrutiny, evaluation and peer review. From the perspective of political economy analysis the challenge is to make the indicators more relevant to understanding the underlying political processes that are the key drivers of better governance.

2010 Annual Report on Results and Impact of IFAD Operations

“The IFAD Office of Evaluation (IOE) has released its eighth Annual Report on Results and Impact of IFAD Operations (ARRI), based on evaluations carried out in 2009. The main objectives of the report are to highlight the results and impact of IFAD-funded operations, and to draw attention to systemic issues and lessons learned with a view to further enhancing the Fund’s development effectiveness. The report synthesizes results from 17 projects evaluated in 2009 and draws upon four country programme evaluations in Argentina, India, Mozambique and Niger, as well as two corporate-level evaluations, namely on innovation and gender.

This year’s ARRI found that the performance of past IFAD-supported operations is, on the whole, moderately satisfactory. However, performance of operations has improved over time in a number of areas (e.g. sustainability, IFAD’s own performance as a partner, and innovation), even though additional improvements can be achieved in the future. The ARRI also found that projects designed more recently tend to perform better than older-generation operations, as overall objectives and design are more realistic and they devote greater attention to results management.

There are areas that remain a challenge, such as efficiency, and natural resources and environment.  The ARRI also notes that IFAD can do more to strengthen government capacity, which is one of the most important factors in achieving results on rural poverty. With regard to efficiency, the ARRI notes that the efficiency of IFAD-funded projects has improved from a low base in 2004/5, even though there is room for further enhancement. Efficiency will be studied by IOE in much deeper detail in 2011, within the framework of the planned corporate level evaluation on the topic.

The report is available on the IFAD website, at the following address: http://www.ifad.org/evaluation/arri/2010/arri.pdf

Hard copies of the report are available from IOE and can be requested via e-mail to evaluation@ifad.org.


For further information, please contact:
Mark Keating
Evaluation Information Officer
Office of Evaluation
IFAD
Tel: +39-06-54592048
e-mail: m.keating@ifad.org

“The Truth Wears Off” & “More Thoughts on the Decline Effect”

These two articles by Jonah Lehrer in the New Yorker (13/12/2010 and 03/01/2011) provide a salutary reminder of the limitations of experimental methods as a means of research when undertaken by mere mortals, with all their human frailties. They do not, in my reading, dismiss or invalidate the use of experimental methods, but they do highlight the need for a longer-term view of the efficacy of experimental methods, one which situates their use within a social context.

Many thanks to Irene Guijt for bringing these articles to my attention, and others, via the Pelican email list.

In “The Truth Wears Off”, Lehrer “wanted to explore the human side of the scientific enterprise. My focus was on a troubling phenomenon often referred to as the ‘decline effect’, which is the tendency of many exciting scientific results to fade over time. This empirical hiccup afflicts fields from pharmacology to evolutionary biology to social psychology. There is no simple explanation for the decline effect, but the article explores several possibilities, from the publication biases of peer-reviewed journals to the ‘selective reporting’ of scientists who sift through data.”

In “More Thoughts on the Decline Effect”, Lehrer responds to the various emails, tweets and comments posted in reply to his article. Amongst other things, he says: “I think the decline effect is an important reminder that we shouldn’t simply reassure ourselves with platitudes about the rigors of replication or the inevitable corrections of peer review. Although we often pretend that experiments settle the truth for us—that we are mere passive observers, dutifully recording the facts—the reality of science is a lot messier. It is an intensely human process, shaped by all of our usual talents, tendencies, and flaws.”

There are also many interesting comments following both of Lehrer’s posts.
