Human Rights and Impact Assessment

Special Issue of Impact Assessment and Project Appraisal, Volume 31, Issue 2, 2013

  • Boele, Richard, and Christine Crispin. 2013. “What Direction for Human Rights Impact Assessments?” Impact Assessment and Project Appraisal 31 (2): 128–134. doi:10.1080/14615517.2013.771005.
  • Collins, Nina, and Alan Woodley. 2013. “Social Water Assessment Protocol: A Step Towards Connecting Mining, Water and Human Rights.” Impact Assessment and Project Appraisal 31 (2): 158–167. doi:10.1080/14615517.2013.774717.
  • Hanna, Philippe, and Frank Vanclay. 2013. “Human Rights, Indigenous Peoples and the Concept of Free, Prior and Informed Consent.” Impact Assessment and Project Appraisal 31 (2): 146–157. doi:10.1080/14615517.2013.780373.
  • Hanna, Philippe, and Frank Vanclay. 2013b. “Human Rights and Impact Assessment.” Impact Assessment and Project Appraisal 31 (2): 85. doi:10.1080/14615517.2013.791507.
  • Sauer, Arn Thorben, and Aranka Podhora. 2013. “Sexual Orientation and Gender Identity in Human Rights Impact Assessment.” Impact Assessment and Project Appraisal 31 (2): 135–145. doi:10.1080/14615517.2013.791416.
  • Watson, Gabrielle, Irit Tamir, and Brianna Kemp. 2013. “Human Rights Impact Assessment in Practice: Oxfam’s Application of a Community-based Approach.” Impact Assessment and Project Appraisal 31 (2): 118–127. doi:10.1080/14615517.2013.771007.

See also Gabrielle Watson’s related blog post: “Trust but verify: Companies assessing their own impacts on human rights? Oxfam’s experience supporting communities to conduct human rights impact assessments”

Documents mentioned in her post:

  • the United Nations Guiding Principles on Business and Human Rights (2011)
  • Oxfam’s community-based Human Rights Impact Assessment (HRIA) tool, Getting it Right. The tool was first tested in the Philippines, Tibet, the Democratic Republic of Congo, Argentina and Peru, and then improved. In 2010 and 2011, Oxfam supported local partner organizations to conduct community-based HRIAs with tobacco farmworkers in North Carolina and with mining-affected communities in Bolivia. In Oxfam’s experience, community-based HRIAs have: (1) built human rights awareness among community members, (2) helped initiate constructive engagement when companies have previously ignored community concerns, and (3) led to concrete actions by companies to address concerns.

Assessing the impact of human rights work: Challenges and Choices

The International Council on Human Rights Policy has produced two documents under the above-named project (see here for details of the project):

  • No Perfect Measure: Rethinking Evaluation and Assessment of Human Rights Work. Report of a Workshop, January 2012. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives – Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources
  • Role and Relevance of Human Rights Principles in Impact Assessment: An Approach Paper. July 2011. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives – Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources

PS 14 February 2012: The ICHRP website appears to be down at present. I have uploaded a copy of the No Perfect Measure paper here.

Good Enough Guide to Impact Measurement – Rapid Onset Natural Disasters

[from the Emergency Capacity Building Project website]

Published on 6 April 2011

The Department for International Development (DFID/UKaid) awarded a grant to the ECB Project to develop a new Good Enough Guide to Impact Measurement. Led by Dr Vivien Walden from Oxfam, a team of ECB specialists from CRS, Save the Children, and World Vision will work together with the University of East Anglia (UEA) in the UK.

This guide and its supporting capacity-building materials will include an impact measurement methodology for rapid onset natural disasters. The methodologies will be field-tested by the editorial team in Pakistan and one other country from September 2011 onwards.

The team welcomes suggestions and input on developing methodologies for impact measurement. Contact us with your ideas at

IMPACT AND AID EFFECTIVENESS: Mapping the Issues and their Consequences

[from the IDS Virtual Bulletin, March 2011]

In this virtual Bulletin we bring together ten articles dating from across three decades. They all address Impact. From the outset, we note that there are a number of common threads and ideas that stretch across all the articles:

  • The implicit emphasis of all the articles on complexity
  • The breadth and depth of impact analysis, from the national level to the individual
  • The importance of knowing the audience for any evaluation or impact assessment
  • The virtuous cycle that can be created by using insights into impact to adjust interventions
  • The dependency of that virtuous cycle on participation and engagement of programme staff and clients.

What we notice, however, is how the articles’ framing of these issues varies according to discipline and research site. We also see how some ongoing preoccupations have been shaped by their proximity to other debates or policy concerns. Our hope is that hindsight will provide some perspective for practice and policy going forward.
View Full Introduction

A Revolution Whose Time Has Come? The Win-Win of Quantitative Participatory Approaches and Methods
IDS Bulletin Volume 41, Issue 6, November 2010
Robert Chambers

Impact of Microfinance on Rural Households in the Philippines
IDS Bulletin Volume 39, Issue 1, March 2008
Toshio Kondo, Aniceto Orbeta, Clarence Dingcong and Christine Infantado

‘You Can Get It If You Really Want’: Impact Evaluation Experience of the Office of Evaluation and Oversight of the Inter-American Development Bank
IDS Bulletin Volume 39, Issue 1, March 2008
Inder Jit Ruprah

The Role of Evaluation in Accountability in Donor-Funded Projects
IDS Bulletin Volume 31, Issue 1, January 2000
Adebiyi Edun

Micro-Credit Programme Evaluation: A Critical Review
IDS Bulletin Volume 29, Issue 4, October 1998
Shahidur R. Khandker

Macroeconomic Evaluation of Programme Aid: A Conceptual Framework
IDS Bulletin Volume 27, Issue 4, October 1996
Howard White

Measurement of Poverty and Poverty of Measurement
IDS Bulletin Volume 25, Issue 2, April 1994
Martin Greeley

Developing Effective Study Programmes for Public Administrators
IDS Bulletin Volume 8, Issue 4, May 1977
Ron Goslin

Improving the Effectiveness of Evaluation in Rural Development Projects
IDS Bulletin Volume 8, Issue 1, July 1976
B. H. Kinsey

Managing Rural Development
IDS Bulletin, Volume 6, Issue 1, September 1974
Robert Chambers

Participatory Impact Assessment: A guide for practitioners

Andrew Catley, John Burns, Dawit Abebe and Omeno Suji, Feinstein International Center, Tufts University, 2008. Available as pdf.

“Purpose of this guide

The Feinstein International Center has been developing and adapting participatory approaches to measure the impact of livelihoods-based interventions since the early 1990s. Drawing upon this experience, this guide aims to provide practitioners with a broad framework for carrying out project-level Participatory Impact Assessments (PIA) of livelihoods interventions in the humanitarian sector. Other than in some health, nutrition, and water interventions, in which indicators of project performance should relate to international standards, for many interventions there are no ‘gold standards’ for measuring project impact. For example, the Sphere handbook has no clear standards for food security or livelihoods interventions. This guide aims to bridge this gap by outlining a tried and tested approach to measuring the impact of livelihoods projects. The guide does not attempt to provide a set of standards, indicators or a blueprint for impact assessment, but a broad and flexible framework which can be adapted to different contexts and project interventions.

Consistent with this, the proposed framework does not aim to provide a rigid or detailed step-by-step formula, or set of tools to carry out project impact assessments, but describes an eight-stage approach and presents examples of tools which may be adapted to different contexts. One of the objectives of the guide is to demonstrate how PIA can be used to overcome some of the inherent weaknesses in conventional humanitarian monitoring, evaluation and impact assessment approaches, such as: the emphasis on measuring process as opposed to real impact, the emphasis on external as opposed to community-based indicators of impact, and the issue of weak or non-existent baselines. The guide also aims to demonstrate and provide examples of how participatory methods can be used to overcome the challenge of attributing impact or change to actual project activities. The guide will also demonstrate how data collected from the systematic use of participatory tools can be presented numerically, give representative results and provide evidence-based data on project impact.

Objectives of the Guide

1. Provide a framework for assessing the impact of livelihoods interventions

2. Clarify the differences between measuring process and real impact

3. Demonstrate how PIA can be used to measure the impact of different projects in different contexts using community identified impact indicators

4. Demonstrate how participatory methods can be used to measure impact where no baseline data exists

5. Demonstrate how participatory methods can be used to attribute impact to a project

6. Demonstrate how qualitative data from participatory tools can be systematically”

Five challenges facing impact evaluation

PS 2018 02 23: The original NONIE Meeting 2011 website is no longer in existence. Use this reference, if needed: White, H. (2011) ‘Five challenges facing impact evaluation’, NONIE.

“There has been enormous progress in impact evaluation of development interventions in the last five years. The 2006 CGD report When Will We Ever Learn? claimed that there was little rigorous evidence of what works in development. But there has been a huge surge in studies since then. By our count, there are over 800 completed and ongoing impact evaluations of socio-economic development interventions in low- and middle-income countries.

But this increase in numbers is just the start of the process of ‘improving lives through impact evaluation’, which was the sub-title of the CGD report and has become 3ie’s vision statement. Here are five major challenges facing the impact evaluation community:

1. Identify and strengthen processes to ensure that evidence is used in policy: studies are not an end in themselves, but a means to the end of better policy, programs and projects, and so better lives. At 3ie we are starting to document cases in which impact evaluations have, and have not, influenced policy to better understand how to go about this. DFID now requires evidence to be provided to justify providing support to new programs, an example which could be followed by other agencies.

2. Institutionalize impact evaluation: the development community is very prone to faddism. Impact evaluation could go the way of other fads and fall into disfavour. We need to demonstrate the usefulness of impact evaluation to help prevent this happening, hence my first point. But we also need to take steps to institutionalize the use of evidence in governments and development agencies. This step includes ensuring that ‘results’ are measured by impact, not outcome monitoring.

3. Improve evaluation designs to answer policy-relevant questions: quality impact evaluations embed the counterfactual analysis of attribution in a broader analysis of the causal chain, allowing an understanding of why interventions work, or not, and yielding policy relevant messages for better design and implementation. There have been steps in this direction, but researchers need better understanding of the approach and to genuinely embrace mixed methods in a meaningful way.

4. Make progress with small n impact evaluations: we all accept that we should be issues-led, not methods-led, and use the most appropriate method for the evaluation questions at hand. But the fact is that there is far more consensus about the evaluation of large n interventions, in which experimental and quasi-experimental approaches can be used, than there is about the approach to be used for small n interventions. If the call to base development spending on evidence of what works is to be heeded, then the development evaluation community needs to move to consensus on this point.

5. Expand knowledge and use of systematic reviews: single impact studies will also be subject to criticisms of weak external validity. Systematic reviews, which draw together evidence from all quality impact studies of a particular intervention in a rigorous manner, give stronger, more reliable, messages. There has been an escalation in the production of systematic reviews in development in the last year. The challenge is to ensure that these studies are policy relevant and used by policy makers.”

Social assessment of conservation initiatives: A review of rapid methodologies

Kate Schreckenberg, Izabel Camargo, Katahdin Withnall, Colleen Corrigan, Phil Franks, Dilys Roe, Lea M. Scherl and Vanessa Richardson.
Published: May 2010 – IIED, London, 124 pages


“Areas of land and sea are increasingly being marked out for protection in response to various demands: to tackle biodiversity loss, to prevent deforestation as a climate change mitigation strategy, and to restore declining fisheries. Amongst those promoting biodiversity conservation, the impacts of protected areas on resident or neighbouring communities have generated much debate, and this debate is raging further as new protection schemes emerge, such as REDD.

Despite widely voiced concerns about some of the negative implications of protected areas, and growing pressures to ensure that they fulfil social as well as ecological objectives, no standard methods exist to assess social impacts. This report aims to provide some.

Some 30 tools and methods for assessing social impacts in protected areas and elsewhere are reviewed in this report, with a view to understanding how different researchers have tackled the various challenges associated with impact assessment. This experience is used to inform a framework for a standardised process that can guide the design of locally appropriate assessment methodologies. Such a standard process would facilitate robust, objective comparisons between sites as well as assisting in the task of addressing genuine concerns and enhancing potential benefits.”

Available as pdf and as printed hard copy

Joint Humanitarian Impact Evaluation: Report on consultations

Report for the Inter-Agency Working Group on Joint Humanitarian Impact Evaluation. Tony Beck, January 2011

“Background and purpose

Since the Tsunami Evaluation Coalition there have been ongoing discussions concerning mainstreaming joint impact evaluation within the humanitarian system. With pressure to demonstrate that results are being achieved by humanitarian action, the question has arisen as to whether and how evaluations can take place that will assess joint impact. An Inter-Agency Working Group was established in November 2009 to manage and facilitate consultations on the potential of Joint Humanitarian Impact Evaluation (JHIE). It was agreed to hold a series of consultations between February and November 2010 to define feasible approaches to joint impact evaluation in humanitarian action, which might subsequently be piloted in one to two humanitarian contexts.

Consultations were held with a representative cross section of humanitarian actors: the affected population in 15 communities in Sudan, Bangladesh and Haiti, and local government and local NGOs in the same countries; with national government and international humanitarian actors in Haiti and Bangladesh; and with 67 international humanitarian actors, donors, and evaluators in New York, Rome, Geneva, London and Washington. This is perhaps the most systematic attempt to consult with the affected population during the design phase of a major evaluative exercise. This report details the results from the consultations.”

INTRAC workshop: Accountability without Impact?

Date: 23 November 2010
Venue: St Anne’s College, Oxford, UK

There are many debates about the ‘So what?’ question, in terms of concerns about the actual impact of international cooperation. What is the development sector actually achieving in terms of improving the lives of the poor? Have we focused on proving accountability without truly pursuing ways to assess impact? What can we do about this?

This workshop will draw on practitioner experiences to assess the state of the current debate, asking where we are now; explore forward-thinking case studies; and facilitate productive discussion and debate about where we want to be, and how to move towards that. The workshop will be attended by senior INGO managers and policy makers from Europe.

For further information click here. Contact:

US Office of Management and Budget: Increased emphasis on Program Evaluations

Via Xceval: Not exactly breaking news (11 months later), but still likely to be of wide interest:

October 7, 2009
FROM: Peter R. Orszag
SUBJECT: Increased Emphasis on Program Evaluations

Rigorous, independent program evaluations can be a key resource in determining whether government programs are achieving their intended outcomes as well as possible and at the lowest possible cost. Evaluations can help policymakers and agency managers strengthen the design and operation of programs. Ultimately, evaluations can help the Administration determine how to spend taxpayer dollars effectively and efficiently — investing more in what works and less in what does not.

Although the Federal government has long invested in evaluations, many important programs have never been formally evaluated — and the evaluations that have been done have not sufficiently shaped Federal budget priorities or agency management practices. Many agencies lack an office of evaluation with the stature and staffing to support an ambitious, strategic, and relevant research agenda. As a consequence, some programs have persisted year after year without adequate evidence that they work. In some cases, evaluation dollars have flowed into studies of insufficient rigor or policy significance. And Federal programs have rarely evaluated multiple approaches to the same problem with the goal of identifying which ones are most effective.

To address these issues and strengthen program evaluation, OMB will launch the following government-wide efforts as part of the Fiscal Year 2011 Budget process: ….(read the full text in this pdf)
