Indices, Benchmarks, and Indicators: Planning and Evaluating Human Rights Dialogues

Anna Würth, Frauke Lisa Seidensticker, German Institute for Human Rights, 2005. Available as pdf

“In September 2001, the Swiss Government accepted a postulate by the Commission for Foreign Policy of the Federal Parliament. The postulate asked the government to develop the instrument of human rights dialogues within its human rights foreign policy. In 2004, the Federal Department of Foreign Affairs (DFA) issued an internal briefing paper for this policy. To develop the instrument further, the Human Rights Policy Section of the DFA asked the German Institute for Human Rights in early 2005 to prepare a study on this comparatively recent instrument of foreign policy, with special attention to the measurement of impact.

A commitment to the universal validity of human rights does not lead to a predetermined, uniform pattern of bilateral human rights policy towards all countries. A different approach is both possible and necessary: depending on the context of the respective country, the implementation of human rights concerns requires a set of instruments that pursues different goals and strategies and sets different thematic priorities. In my opinion, this applies to human rights dialogues as well.

The present study examines the instrument of the institutionalized or formalized human rights dialogue. It focuses on measuring the impact of human rights dialogues, an area that has not yet received sufficient attention. For states conducting human rights dialogues, the study contains valuable recommendations for the planning, design, implementation and evaluation of future dialogues”

Contents
1. Impact of Human Rights Norms
2. Planning Human Rights Dialogues
3. Measuring Impact: Forms and Methods
4. The Practice of Impact Assessment
5. Recommendations

Helpdesk Research Report: Participatory M&E and Beneficiary Feedback

from the Governance and Social Development Resource Centre

Date: 03.09.2010. Available as a pdf

Query: Please identify the existing literature on participatory monitoring and evaluation, with a particular emphasis on gaining wide-ranging beneficiary feedback. Comment on the coverage, scalability, risks, benefits and applicability. Enquirer: Aid Effectiveness Team, DFID

Contents
1. Overview
2. General Literature on PM&E
3. Beneficiary Feedback
4. The Use of New Technologies in PM&E
5. Additional Information

Aid Transparency Assessment

(from Karin Christiansen, Publish What You Fund)

“I am proud to share with you Publish What You Fund’s Aid Transparency Assessment that we have been working on over the last year. This is the first global assessment of the transparency of 30 major donors across seven indicators from eight data sources. The indicators cover donors’ commitment to aid transparency, transparency to recipient governments, and transparency to civil society.

The assessment is available on the new Publish What You Fund website. Explore the data yourself and see how donors perform.

Our first major finding highlights the need for donors to build an international standard. The lack of comparable data meant we could not do the type of bottom-up assessment we wished. However, the indicators developed from the limited data available provide an interesting comparison of current levels of donor transparency. We plan to carry on with this work on an annual basis.

We hope there will be more comprehensive, comparable and timely data to draw on in the future, and we would very much appreciate feedback, suggestions and thoughts on how to take this work forward.

The Assessment will be presented at the OECD DAC workshop on transparent development co-operation today, at the International Anti-Corruption Conference in Bangkok in November and at workshops in Washington in December.”

AusAID-DFID-3ie call for Systematic Reviews

The Australian Agency for International Development (AusAID), the UK’s Department for International Development (DFID) and the International Initiative for Impact Evaluation (3ie) have just launched a joint call for proposals for systematic reviews to strengthen the international community’s capacity for evidence-based policy making. AusAID, DFID and 3ie have identified around 59 priority systematic review questions across several themes: education; health; social protection and social inclusion; governance, fragile states, conflict and disasters; environment; infrastructure and technology; agriculture and rural development; economic development; and aid delivery and effectiveness.

Systematic reviews examine the existing evidence on a particular intervention or program in low and middle income countries, drawing also on evidence from developed countries when pertinent. The studies should be carried out according to recognized international standards and guidelines. All studies will be subject to an external review process and for this purpose teams will be encouraged to register for peer review with a relevant systematic review coordinating body.

Applications have to be submitted using 3ie’s online application system. Deadline for submission of applications is 9am GMT on Monday, November 29, 2010.

For information on how to apply, guidance documents and the call for proposals, go to http://www.3ieimpact.org/systematicreviews/3ie-ausaid-dfid.php

When is the rigorous impact evaluation of development projects a luxury, and when is it a necessity?

by Michael Clemens and Gabriel Demombynes, Center for Global Development, 10/11/2010  Download (PDF, 733 KB)

“The authors study one high-profile case: the Millennium Villages Project (MVP), an experimental and intensive package intervention to spark sustained local economic development in rural Africa. They illustrate the benefits of rigorous impact evaluation in this setting by showing that estimates of the project’s effects depend heavily on the evaluation method.

Comparing trends at the MVP intervention sites in Kenya, Ghana, and Nigeria to trends in the surrounding areas yields much more modest estimates of the project’s effects than the before-versus-after comparisons published thus far by the MVP. Neither approach constitutes a rigorous impact evaluation of the MVP, which is impossible to perform due to weaknesses in the evaluation design of the project’s initial phase. These weaknesses include the subjective choice of intervention sites, the subjective choice of comparison sites, the lack of baseline data on comparison sites, the small sample size, and the short time horizon. They describe how the next wave of the intervention could be designed to allow proper evaluation of the MVP’s impact at little additional cost.”

See responses to this paper here.

The Katine Challenge: How to analyse 540+ stories about a rural development project

The Guardian & Barclays funded and AMREF implemented, Katine Community Partnerships Project in Soroti District, Uganda is exceptional in some respects and all too common in others.

It is exceptional in the degree to which its progress has been very publicly monitored since it began in October 2007. Not only have all project documents been made publicly available via the dedicated Guardian Katine website, but resident and visiting journalists have posted more than 540 stories about the people, the place and the project. These stories provide an invaluable in-depth and dynamic picture of what has been happening in Katine, unparalleled by anything else I have seen in any other development aid project.

On the flip side, the project is all too common in the kinds of design and implementation problems it has experienced, along with its fair share of unpredictable and very influential external events, including dramatic turnarounds in various government policies, plus the usual share of staffing and contracting problems.

The project has now completed its third year of operation and is heading into its fourth and final year, one more year than originally planned.

I have a major concern. It is during this final year that there will be more knowledge about the project available than ever before, but at the same time its donors, and perhaps various staff within AMREF, will be turning their attention to other, newer events appearing over the horizon. For example, the Guardian will cease its intensive journalistic coverage of the project from this month, and attention is now focusing on their new international development website.

So, I would like to pose an important challenge to all the visitors to the Monitoring and Evaluation NEWS website, and the associated MandE NEWS email list:

How can the 540+ stories be put to good use? Is there some form of analysis that could be made of their contents, that would help AMREF, the Guardian, Barclays, the people of Katine, and all of us learn more from the Katine project?

In order to help I have uploaded an Excel file listing all the stories since December 2008, with working hypertext links. I will try to progressively extend this list back to the start of the project in late 2007. This list includes copies of all progress reports, review and planning documents that AMREF has given the Guardian to be uploaded onto their website.

If you have any questions or comments please post them below, as Comments to this posting, in the first instance.

What would be useful in the first instance is ideas about plans or strategies for analysing the data. Then volunteers to actually implement one or more of these plans.
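
By way of illustration only, here is one minimal sketch of a possible starting point, assuming the Excel list has been saved locally. The file name and the column names (“Date”, “Title”, “URL”) are my assumptions rather than the actual layout of the uploaded file: the script loads the list with pandas, counts stories per month, and tallies the most frequent words in the story titles as a crude first pass at spotting recurring themes.

```python
# A rough sketch only: the file name and the "Date", "Title", "URL" column
# names are assumptions and will need adjusting to match the actual Excel list.
import re
from collections import Counter

import pandas as pd

STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "for", "at",
             "is", "with", "from", "how", "katine", "guardian", "project"}

def load_stories(path="katine_stories.xlsx"):
    """Load the list of stories (assumed columns: Date, Title, URL)."""
    df = pd.read_excel(path)
    df["Date"] = pd.to_datetime(df["Date"], errors="coerce")
    return df

def stories_per_month(df):
    """Count how many stories were posted in each month."""
    dated = df.dropna(subset=["Date"])
    return dated.groupby(dated["Date"].dt.to_period("M")).size()

def title_keywords(df, top_n=25):
    """Crude theme detection: the most frequent non-trivial words in story titles."""
    words = []
    for title in df["Title"].dropna():
        words += [w for w in re.findall(r"[a-z']+", title.lower())
                  if w not in STOPWORDS and len(w) > 3]
    return Counter(words).most_common(top_n)

if __name__ == "__main__":
    stories = load_stories()
    print(stories_per_month(stories))
    for word, count in title_keywords(stories):
        print(f"{word}: {count}")
```

This only scratches the surface: coding the full story texts against the project’s own component areas, or against the kinds of design and implementation problems mentioned above, would be a natural next step, but that requires reading the stories themselves rather than just their titles.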

PS: My understanding is that the data is by definition already in the public domain, and therefore anyone could make use of it. However, that use should be fair and not for profit. What we should be searching for here are lessons or truths in some form that could be seen as having wider applicability, which are based on sound argument and good evidence, as much as is possible.

Do we need a Minimum Level of Failure (MLF)?

This is the title of a new posting on the Rick on the Road blog, the editorial arm of Monitoring and Evaluation NEWS. It argues that improving aid effectiveness by identifying and culling the worst performers is a different and possibly more appropriate strategy than identifying and replicating the best performers. This argument ties directly into the debate about RCTs, which some consider the best means of improving aid effectiveness.

PS 22 October: Kirstin Hinds of the DFID Evaluation Department has pointed out (in a reply to my original blog posting) that DFID has published a more recent independent review of project completion reports (covering 2005-2008) which may be of interest.

Other recent postings are available on the Rick on the Road blog; a full list of all editorial posts is available here.

The Clash of the Counter-bureaucracy and Development

“In this essay, Andrew Natsios describes what he sees as the most disruptive obstacles to development work in agencies such as USAID: layers and layers of bureaucracy. He gives a first-hand account of how this “counter-bureaucracy” disfigures USAID’s development practice and even compromises U.S. national security objectives. Most of all, he argues, the counter-bureaucracy’s emphasis on easy measurement is at odds with the fact that transformational programs are often the least measurable and involve elements of risk and uncertainty.

To overcome counter-bureaucracy barriers, Natsios suggests implementing a new measurement system, reducing the layers of oversight and regulation, and aligning programmatic goals with organizational incentives. Unless policymakers address the issue, he says, U.S. aid programs will be unable to implement serious development programs while complying with the demands of Washington.”

Revised 07-13-2010

See also “Beyond Success Stories: Monitoring & Evaluation for Foreign Assistance Results”, posted on 8 June 2009.

The Big Push Back (and push forward)

“On the 22nd September, Rosalind Eyben organised a meeting of some seventy development practitioners and researchers worried about the current trend for funding organisations to support only those programmes designed to deliver easily measurable results, although these may not support transformative processes of positive and sustainable changes in people’s lives.

Following on from a major conference in May in the Netherlands about evaluative practices in relation to social transformation (http://evaluationrevisited.wordpress.com/), the meeting took the first steps in strategizing collectively in support of these practices.” Attached is Rosalind’s brief report of the meeting.

PS: 11 October 2010. See the latest posting by Ros Eyben on this topic here, on the Hauser Center blog.

“Instruments, Randomization and Learning about Development”

Angus Deaton, Research Program in Development Studies, Center for Health and Wellbeing, Princeton University, March 2010. Full text as pdf

ABSTRACT
There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues, or of development agencies to learn from their own experience. In response, there is increasing use in development economics of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without over-reliance on questionable theory or statistical methods. When RCTs are not possible, the proponents of these methods advocate quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity, and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces similar problems as does quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development and elsewhere. As with IV methods, RCT-based evaluation of projects, without guidance from an understanding of underlying mechanisms, is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and towards the evaluation of theoretical mechanisms.
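
To make the heterogeneity point concrete, here is a toy simulation of my own (it is not taken from Deaton’s paper): a treatment helps one subgroup and does nothing for another, so a perfectly executed RCT recovers an average effect that describes neither group and that would not carry over to a population with a different subgroup mix.

```python
# Toy illustration (not from the paper): heterogeneous treatment effects.
# Subgroup A benefits (+2.0), subgroup B does not (0.0); the trial's average
# treatment effect depends entirely on the A/B mix in the trial population.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n, share_a, effect_a=2.0, effect_b=0.0, noise=1.0):
    """Simulate a simple two-arm RCT on a population mixing subgroups A and B."""
    is_a = rng.random(n) < share_a                # subgroup membership
    treated = rng.random(n) < 0.5                 # random assignment
    effect = np.where(is_a, effect_a, effect_b)   # true individual-level effect
    outcome = rng.normal(0.0, noise, n) + treated * effect
    return outcome[treated].mean() - outcome[~treated].mean()

# The same intervention, trialled in populations with different subgroup mixes:
print("Estimated ATE, 80% subgroup A:", round(simulate_trial(20_000, 0.8), 2))  # ~1.6
print("Estimated ATE, 20% subgroup A:", round(simulate_trial(20_000, 0.2), 2))  # ~0.4
```

Both estimates are internally valid, yet neither number transfers to the other population, which is one reason why an average effect from a particular trial, without an understanding of the underlying mechanisms, may be of limited use for policy elsewhere.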

See also Why Works? by Lawrence Haddad, Development Horizons blog

See also Carlos Barahona’s Randomised Control Trials for the Impact Evaluation of Development Initiatives: A Statistician’s Point of View. Introduction: This [ILAC Working Paper] paper contains the technical and practical reflections of a statistician on the use of Randomised Control Trial (RCT) designs for evaluating the impact of development initiatives. It is divided into three parts. The first part discusses RCTs in impact evaluation, their origin, how they have developed and the debate they have generated in evaluation circles. The second part examines difficult issues faced in applying RCT designs to the impact evaluation of development initiatives, the extent to which this type of design can be applied rigorously, the validity of the assumptions underlying RCT designs in this context, and the opportunities and constraints inherent in their adoption. The third part discusses some of the ethical issues raised by RCTs, the need to establish ethical standards for studies about development options and the need for an open mind in the selection of research methods and tools.
