Human Rights and Impact Assessment

Special Issue of Impact Assessment and Project Appraisal, Volume 31, Issue 2, 2013

  • Boele, Richard, and Christine Crispin. 2013. “What Direction for Human Rights Impact Assessments?” Impact Assessment and Project Appraisal 31 (2): 128–134. doi:10.1080/14615517.2013.771005.
  • Collins, Nina, and Alan Woodley. 2013. “Social Water Assessment Protocol: a Step Towards Connecting Mining, Water and Human Rights.” Impact Assessment and Project Appraisal 31 (2): 158–167. doi:10.1080/14615517.2013.774717.
  • Hanna, Philippe, and Frank Vanclay. 2013. “Human Rights, Indigenous Peoples and the Concept of Free, Prior and Informed Consent.” Impact Assessment and Project Appraisal 31 (2): 146–157. doi:10.1080/14615517.2013.780373.
  • ———. 2013b. “Human Rights and Impact Assessment.” Impact Assessment and Project Appraisal 31 (2): 85. doi:10.1080/14615517.2013.791507.
  • Sauer, Arn Thorben, and Aranka Podhora. 2013. “Sexual Orientation and Gender Identity in Human Rights Impact Assessment.” Impact Assessment and Project Appraisal 31 (2): 135–145. doi:10.1080/14615517.2013.791416.
  • Watson, Gabrielle, Irit Tamir, and Brianna Kemp. 2013. “Human Rights Impact Assessment in Practice: Oxfam’s Application of a Community-based Approach.” Impact Assessment and Project Appraisal 31 (2): 118–127. doi:10.1080/14615517.2013.771007.

See also Gabrielle Watson’s related blog posting: “Trust but verify: Companies assessing their own impacts on human rights? Oxfam’s experience supporting communities to conduct human rights impact assessments”

And docs mentioned in her post:

  • the United Nations Guiding Principles on Business and Human Rights (2011)
  • Oxfam’s community-based Human Rights Impact Assessment (HRIA) tool, Getting it Right. “The tool was first tested in the Philippines, Tibet, the Democratic Republic of Congo, Argentina and Peru, and then improved. In 2010 and 2011, Oxfam supported local partner organizations to conduct community-based HRIAs with tobacco farmworkers in North Carolina and with mining-affected communities in Bolivia. In our experience, community-based HRIAs have: (1) built human rights awareness among community members, (2) helped initiate constructive engagement when companies have previously ignored community concerns, and (3) led to concrete actions by companies to address concerns.”

Oxfam study of MONITORING, EVALUATION AND LEARNING IN NGO ADVOCACY

Findings from Comparative Policy Advocacy MEL Review Project

by Jim Coe and Juliette Majot | February 2013. Oxfam and ODI

Executive Summary & Full text available as pdf

“For organizations committed to social change, advocacy often figures as a crucial strategic element. How to assess effectiveness in advocacy is, therefore, important. The usefulness of Monitoring, Evaluation and Learning (MEL) in advocacy is subject to much current debate. Advocacy staff, MEL professionals, senior managers, the funding community, and stakeholders of all kinds are searching for ways to improve practices – and thus their odds of success – in complex and contested advocacy environments. This study considers what a selection of leading advocacy organizations are doing in practice. We set out to identify existing practice and emergent trends in advocacy-related MEL practice, to explore current challenges and innovations. The study presents perceptions of how MEL contributes to advocacy effectiveness, and reviews the resources and structures dedicated to MEL.

This inquiry was initiated, funded and managed by Oxfam America. The Overseas Development Institute (ODI) served an advisory role to the core project team, which included Gabrielle Watson of Oxfam America, and consultants Juliette Majot and Jim Coe. The following organizations participated in the inquiry: ActionAid International | Amnesty International | Bread for the World | CARE USA | Greenpeace International | ONE | Oxfam America | Oxfam Great Britain | Sierra Club”

PROCESS TRACING: Oxfam’s Draft Protocol

Undated, but possibly 2012. Available as pdf

Background: “Oxfam GB has adopted a Global Performance Framework. Among other things, this framework involves the random selection of samples of closing or sufficiently mature projects under six outcome areas each year and rigorously evaluating their performance. These are referred to as Effectiveness Reviews. Effectiveness Reviews carried out under the Citizen Voice and Policy Influencing thematic areas are to be informed by a research protocol based on process tracing, a qualitative research approach used by case study researchers to investigate causal inference.”

Oxfam is seeking feedback on this draft. Please send your comments to PPAT@oxfam.org.uk

See also the related blog posting by Oxfam on the “AEA365 | A Tip-a-Day by and for Evaluators” website:

Rick Davies comment: While the draft protocol already includes six references on process tracing, I would recommend two more which I think are especially useful and recent:

  • Mahoney, James. 2012. “The Logic of Process Tracing Tests in the Social Sciences.” Sociological Methods & Research (published online March 2, 2012): 1–28. doi:10.1177/0049124112437709. http://smr.sagepub.com/content/early/2012/02/29/0049124112437709.full.pdf
    • Abstract: This article discusses process tracing as a methodology for testing hypotheses in the social sciences. With process tracing tests, the analyst combines preexisting generalizations with specific observations from within a single case to make causal inferences about that case. Process tracing tests can be used to help establish that (1) an initial event or process took place, (2) a subsequent outcome also occurred, and (3) the former was a cause of the latter. The article focuses on the logic of different process tracing tests, including hoop tests, smoking gun tests, and straw in the wind tests. New criteria for judging the strength of these tests are developed using ideas concerning the relative importance of necessary and sufficient conditions. Similarities and differences between process tracing and the deductive nomological model of explanation are explored.


Oxfam GB’s new Global Performance Framework + their Effectiveness Review reports

“As some of you will be aware, we have been working to develop and implement Oxfam GB’s new Global Performance Framework – designed to enable us to be accountable to a wide range of stakeholders and get better at understanding and communicating the effectiveness of a global portfolio comprised of over 250 programmes and 1,200 associated projects in 55 countries in a realistic, cost-effective, and credible way.  

The framework considers six core indicator areas for the organisation: humanitarian response, adaptation and risk reduction (ARR), livelihood enhancement, women’s empowerment, citizen voice, and policy influencing. All relevant projects are required to report output data against these areas on an annual basis. This – referred to as Global Output Reporting (GOR) – enables us to better understand and communicate the scale and scope of much of our work.

To be fully accountable, however, we still want to understand and evidence whether all this work is bearing fruit.  We realise that this cannot be done by requesting all programmes to collect data against a global set of outcome indicators.   Such an exercise would be resource intensive and difficult to quality control.  Moreover, while it has the potential of generating interesting statistics, there would be no way of directly linking the observed outcome changes back to our work.  Instead, we drill down and rigorously evaluate random samples of our projects under each of the above thematic areas. We call these intensive evaluation processes Effectiveness Reviews.

The first year of effectiveness review reports are now up on the web, with our own Karl Hughes introducing the effort on the Poverty to Power blog today. Here you will find introductory material, a summary of the results for 2011/12, two-page summaries of each effectiveness review, as well as the full reports. Eventually, all the effectiveness reviews we carry out/commission will be available from this site, unless there are good reasons why they cannot be publicly shared, e.g. security issues.

Have a look, and please do send us your comments – either publicly on the Poverty to Power blog or through this listserv, or bilaterally. We very much value having ‘critical friends’ to help us think through and improve these processes.

Thanks,
Claire

Claire Hutchings
Global Advisor – Monitoring, Evaluation & Learning (Campaigns & Advocacy)
Programme Performance & Accountability Team
Oxfam GB
Work direct: +44 (0) 1865 472204
Skype: claire.hutchings.ogb

Can we obtain the required rigour without randomisation? Oxfam GB’s non-experimental Global Performance Framework

Karl Hughes, Claire Hutchings, August 2011. 3ie Working Paper 13. Available as pdf.

[found courtesy of @3ieNews]

Abstract

“Non-governmental organisations (NGOs) operating in the international development sector need credible, reliable feedback on whether their interventions are making a meaningful difference but they struggle with how they can practically access it. Impact evaluation is research and, like all credible research, it takes time, resources, and expertise to do well, and – despite being under increasing pressure – most NGOs are not set up to rigorously evaluate the bulk of their work. Moreover, many in the sector continue to believe that capturing and tracking data on impact/outcome indicators from only the intervention group is sufficient to understand and demonstrate impact. A number of NGOs have even turned to global outcome indicator tracking as a way of responding to the effectiveness challenge. Unfortunately, this strategy is doomed from the start, given that there are typically a myriad of factors that affect outcome level change. Oxfam GB, however, is pursuing an alternative way of operationalising global indicators. Closing and sufficiently mature projects are being randomly selected each year among six indicator categories and then evaluated, including the extent each has promoted change in relation to a particular global outcome indicator. The approach taken differs depending on the nature of the project. Community-based interventions, for instance, are being evaluated by comparing data collected from both intervention and comparison populations, coupled with the application of statistical methods to control for observable differences between them. A qualitative causal inference method known as process tracing, on the other hand, is being used to assess the effectiveness of the organisation’s advocacy and popular mobilisation interventions. 
However, recognising that such an approach may not be feasible for all organisations, in addition to Oxfam GB’s desire to pursue complementary strategies, this paper also sets out several other realistic options available to NGOs to step up their game in understanding and demonstrating their impact. These include: 1) partnering with research institutions to rigorously evaluate “strategic” interventions; 2) pursuing more evidence informed programming; 3) using what evaluation resources they do have more effectively; and 4) making modest investments in additional impact evaluation capacity.”
