Stakeholder analysis and social network analysis in natural resource management

Christina Prell, Klaus Hubacek, Mark Reed, Department of Sociological Studies, University of Sheffield, and Sustainability Research Institute, School of Earth and Environment, University of Leeds, 2009

Full text here

Introduction

Many conservation initiatives fail because they pay inadequate attention to the interests and characteristics of stakeholders (Grimble and Wellard, 1997). As a consequence, stakeholder analysis has gained increasing attention and is now integral to many participatory natural resource management initiatives (Mushove and Vogel, 2005). However, there are a number of important limitations to current methods for stakeholder analysis. For example, stakeholders are usually identified and categorized through a subjective assessment of their relative power, influence and legitimacy (Mitchell et al., 1997; Frooman, 1999). Although a wide variety of categorization schemes have emerged from the literature (such as primary and secondary (Clarkson, 1995); actors and those acted upon (Mitchell et al., 1997); strategic and moral (Goodpaster, 1991); and generic and specific (Carroll, 1989)), methods have often overlooked the role communication networks can play in categorizing and understanding stakeholder relationships. Social network analysis (SNA) offers one solution to these limitations.

Environmental applications of SNA are just beginning to emerge, and so far have focused on understanding characteristics of social networks that increase the likelihood of collective action and successful natural resource management (Schneider et al., 2003; Tompkins and Adger, 2004; Newman and Dale, 2004; Bodin et al., 2006; Crona and Bodin, 2006). In this paper, we harness and expand upon this knowledge to inform stakeholder analysis for participatory natural resource management. By participatory natural resource management we mean a process that engages stakeholders on multiple levels of decision making and facilitates the formation and strengthening of relationships among stakeholders for mutual learning (Grimble and Wellard, 1997; Dougill et al., 2006; Stringer et al., 2006). To enhance stakeholder analysis, we use SNA to identify the role and influence of different stakeholders and categories of stakeholder according to their positions within the network. We do this using case study material from the Peak District National Park, UK.
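As a rough illustration of how network position can be used to characterise stakeholders, the sketch below (a hypothetical example, not the analysis reported in the paper) computes degree and betweenness centrality for an invented stakeholder communication network using the Python networkx library. Stakeholders with high degree centrality hold many direct communication ties; those with high betweenness centrality act as brokers between otherwise poorly connected parts of the network.

```python
# Minimal sketch: ranking stakeholders by their position in a
# communication network. All names and ties are invented for illustration.
import networkx as nx

# Undirected ties = "communicates regularly with"
ties = [
    ("National Park Authority", "Farmers Group"),
    ("National Park Authority", "Water Company"),
    ("National Park Authority", "Conservation NGO"),
    ("Farmers Group", "Moorland Owners"),
    ("Water Company", "Moorland Owners"),
    ("Conservation NGO", "Recreation Groups"),
]
G = nx.Graph()
G.add_edges_from(ties)

degree = nx.degree_centrality(G)            # share of direct ties held
betweenness = nx.betweenness_centrality(G)  # brokerage between other actors

for name in sorted(G.nodes, key=degree.get, reverse=True):
    print(f"{name:25s} degree={degree[name]:.2f}  betweenness={betweenness[name]:.2f}")
```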

What should be found within an M&E framework / plan?

I was asked this question by a client some time ago. After some thought about something I felt I should already have known, I drafted a one-page guidance note for my client. The contents of the note also benefited from a discussion about appropriate expectations of M&E frameworks with other M&E people on the MandE NEWS email list.

I have attached the one page guidance note here: What should be found in an M&E Framework / Plan?

Please feel free to post your comments on this document below, and to suggest any other documents or websites where this topic is covered.

PS: 28 October 2011: This one-pager contains a summary of the proposed contents of an M&E Framework for a DFID project, prepared this year.

PS: 12 February 2014: Benedictus Dwiagus Stepantoro has sent me this link to the DFAT (formerly AusAID) Monitoring and Evaluation standards that were updated in 2013. He points especially to standard no. 2 on Initiative M&E Systems there, and comments:

“I use it all the time as a reference in checking the quality of M&E systems in programs/projects/initiatives, as I often receive 3-5 M&E System/Plan documents every year to be assessed.

The main key features of an M&E system there are:

– Should have an ‘evaluability assessment’, as a basis for developing the M&E system.

– Have clarity on program outcome, key output, approach/modality and the logic around them

– Have Evaluation Questions, or Performance Key Questions/Indicators

– Methodology/Tools – including baseline

– Should have sufficient resources (people with the right expertise, funds for M&E activities, etc.)

– Scheduling of M&E activities

– Costing/Budget allocation for M&E

– Clear responsibility

…People often show me a logframe or a matrix of indicators and proudly state that their program has an “M&E System”. But for me, a logframe alone is not an M&E system. A matrix of indicators alone is not an M&E system.”

Prediction Matrices

Update December 2014: This page is now a subsidiary section of the page on Predictive Models.

Purpose: Prediction Matrices are for building and testing complex predictions.

Suitability: Where interventions take place in multiple settings in parallel; where there is some variation in the ways those interventions are implemented across the different settings; and where there is some variation in the nature of the local settings where the interventions take place. For example, a maternal health improvement project implemented by District Health Offices in different districts across Indonesia.

The core idea: A useful prediction about large-scale changes can be built up out of many small micro-judgements, using relative rather than absolute judgements.

Caveat Emptor: This method was developed for use in Indonesia a few years ago, but has never been tested in practice.

Introduction

The Prediction Matrix is a relatively simple tool for developing and testing predictions of how different events are expected to lead to a particular outcome. This can be useful at two points in time:

  • When retrospectively trying to assess how much various project activities have already contributed to a known outcome
  • When planning a package of activities that are expected to contribute to a desired outcome (which has a specific target)

The Prediction Matrix does this by developing a model of how a project works. This can then be used to generate a predicted outcome, which can then be compared to a known (or planned) outcome. If the predicted outcome fits the known outcome, the model can be said to be working, because it fits well with reality. If it does not fit well, this signals that we need to revise our thinking about what causes the outcomes. This will have implications for evaluation findings (if used retrospectively) and for the contents of plans for future activities (if used prospectively).

The Prediction Matrix has its limitations, which will be described below. But it does have some advantages over simpler alternatives, such as a one-to-one cross-tabulation between an expected cause (present or absent) and a known outcome (present or absent). The problem with simple cross-tabulations is that they leave out the possibility that the outcome may be the result of multiple causes, including causes the project has no control over.
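For contrast, here is a minimal sketch (with invented data) of the simpler alternative just described: a one-to-one cross-tabulation of a single expected cause against the outcome. By construction, a table like this cannot show several causes acting together, which is the gap the Prediction Matrix tries to fill.

```python
# Minimal sketch of a simple one-to-one cross-tabulation (invented data).
# Each location is coded for whether one cause and the outcome were present.
import pandas as pd

data = pd.DataFrame({
    "cause_present":   [1, 1, 0, 0, 1, 0],
    "outcome_present": [1, 0, 0, 1, 1, 0],
})

# A 2x2 table of cause vs outcome: the influence of any other causes,
# including those outside the project's control, is invisible here.
print(pd.crosstab(data["cause_present"], data["outcome_present"]))
```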

The Prediction Matrix can be produced using Excel, projected onto a screen in a workshop setting. An example matrix format is shown below. It should be referred to, step by step, when reading the instructions below on how to construct a model and its predictions about outcomes.

Process for constructing the model

1. Identify the outcome that is of interest. For example, in the IMHEI project in Indonesia, this was the percentage of deliveries assisted by trained health staff. This was recognised as a proxy measure of improved maternal health.

2. Identify the locations where data is available on this outcome. Many locations are better than few. So if data is available at district as well as province level, break it down to district level. In the IMHEI project this data was available for four provinces.

• List the locations in row 10 (insert more if needed)

• Insert the data on known outcomes, for each of these locations in row 28

NB: If the Prediction Matrix is being used for planning purposes, the outcome data could be the levels of coverage (in % terms) expected by the end of the plan period.

3. Identify the potential causes of differences in these outcomes across the districts, including those causes the project can influence and others it cannot influence.

• List these in the column labelled “Expected causes of the outcome”, in rows 11 to 20 (and add more rows if needed)

• Convert any negatively stated causes into positives, e.g. from “geographic isolation” to “proximity to the regional capital”, so that all causes have the same direction of influence (i.e. helping to improve maternal health)

4. You may recognise that not all causes are equally important. Some might be expected to have a much bigger overall effect than others. You can build this view into the model by allocating 100 “cause” points down the column to the left of the list of causes. Place many points in the row for a cause you think will have a big overall effect, across the project as a whole. Place few points in the row for a cause you think will have a relatively small overall impact, across the project as a whole. Make sure all listed causes have some points, or remove any cause that you don’t want to give points to. Make sure all the points allocated add up to 100 (look at the Check Sum row at the bottom, row 22).

5. Now look at each cause in turn. Look across the locations in the same row and identify where it is expected to have a big effect, a small effect, or no effect at all. Use 100 “cause” points to indicate where the effects are expected to be. Place many points in a location cell where a big effect is expected; place few points where a little effect is expected; place no points where no effect is expected. Place equal points in all cells if the effect is expected to be equal in all locations. But make sure the total number of points in the row = 100 (see the Check Sum column on the far right).

6. When assigning points to each cause in each location, make use of all available and relevant information (statistics, reports, staff observations) that has any merit. It may be useful to make a note of which of these sources were used. Use the Right Click > Insert Comment function in Excel to record these for any cell.

7. Go through the same process again with each of the other expected causes, working your way down each of the rows of causes.

8. As you do this, a cumulative point score will appear in row 24, for each location. The cell values signify the predicted relative impact of all the causes on each location. Each cell value here = the sum of (the value in each “district” cell above (created in step 5), multiplied by the % “cause” points already given to the cause in that row (created in step 4)). You can see the exact formula in this Excel file by placing the cursor on one of the row 24 cells. A worked sketch of this arithmetic is given after these steps, below.

9. Look at the graph that is shown below the matrix. The graph shows the relationship between two sets of figures in the model:

• The predicted impact scores in row 25

• The known outcomes, in row 28

10. Also shown below, in row 31, is a correlation figure, showing how well these two sets of figures correlate with each other: 0.99 is a very high correlation, 0.11 is a very low correlation. I should state here that the example shown is an imagined one. In practice a correlation of 0.94 is probably very unlikely.

11. If the two sets of figures are highly correlated, the model fits well with reality. If there is a weak or non-existent correlation, it has a poor fit.

12. If the model does not fit with reality, then the cell values and the weightings of each cause can be changed to produce a better fit. BUT this should be done carefully. In principle, the choice of all cell values (which are the participants’ judgements) needs to be accountable. That is, it should be possible to explain to non-participants why those values have been chosen, when compared to others in the same row. This is where the inserted evidence comments, mentioned above, will be useful.

Suggestion: When collecting the micro-judgements on the “cause” points to be allocated across the causes (step 4) and across the locations (steps 5 to 7), it would be best to obscure rows 24 and below, to prevent any emerging macro-level trends from influencing the micro-judgements. Rows 24 and below could be revealed when all micro-judgements have been completed.
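The following is the worked sketch of the arithmetic in steps 4 to 11 referred to above. All figures and cause names are invented, and the row and column layout of the actual Excel file is not reproduced; the sketch simply shows how the check sums, the weighted prediction for each location, and the correlation with the known outcomes fit together.

```python
# Minimal worked sketch of the Prediction Matrix arithmetic (steps 4-11).
# All figures and cause names are invented for illustration.
import numpy as np

locations = ["Province A", "Province B", "Province C", "Province D"]

# Step 4: 100 "cause" points allocated across causes (overall importance).
cause_weights = {
    "Training of village midwives":  40,
    "Health promotion campaigns":    25,
    "Proximity to regional capital": 35,
}
assert sum(cause_weights.values()) == 100  # Check Sum for the weights column

# Steps 5-7: for each cause, 100 points allocated across locations,
# indicating where that cause is expected to have its effect.
effect_points = {
    "Training of village midwives":  [40, 30, 20, 10],
    "Health promotion campaigns":    [25, 25, 25, 25],
    "Proximity to regional capital": [50, 30, 10, 10],
}
for row in effect_points.values():
    assert sum(row) == 100  # Check Sum for each cause row

# Step 8: predicted relative impact per location = sum over causes of
# (location points for that cause x cause weight as a fraction of 100).
predicted = [
    sum(effect_points[c][i] * cause_weights[c] / 100 for c in cause_weights)
    for i in range(len(locations))
]

# Steps 9-11: compare the predictions with the known outcomes
# (e.g. % of deliveries assisted by trained health staff).
known_outcomes = [72.0, 61.0, 45.0, 40.0]
r = np.corrcoef(predicted, known_outcomes)[0, 1]

print(dict(zip(locations, [round(p, 1) for p in predicted])))
print(f"Correlation between predicted and known outcomes: {r:.2f}")
# A correlation near 1 suggests a good fit; near 0, a poor fit that calls
# for revisiting the cell values and weightings (step 12).
```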

Commentary on the method

The Prediction Matrix makes use of subjective judgements and interpretations, but at the price of requiring those judgements to be transparent and accountable. So, if cell values are changed to improve the fit of the model with reality, then the reasons for those changes need to be clearly explained.

Behind the design of the Prediction Matrix are some important assumptions:

Two assumptions are related to large scale programs:

1. In large scale programs most outcomes of concern have multiple causes

2. The combination of causes that leads to a specific outcome is often context specific; it varies from location to location

Two assumptions relate to how to build good models:

1. The more detailed a model is, the more vulnerable it is to disproof. Vulnerability to disproof is desirable; over time it should lead to improvement in the model. The models produced by the Prediction Matrix have two dimensions of detail:

• The number of causes (more are better)

• The number of locations where those causes may be present (more are better)

2. The more transparent a model is, the more vulnerable it is to disproof. Two aspects of the Prediction Matrix are transparent:

• The importance weightings given to each cause

• The relative impact weightings given in each location

Two things are not yet transparent in the Excel version of the Prediction Matrix, but could be made so (via inserted Comments):

• The reasons given for different weightings to the causes

• The reasons given for different impact weightings

The limitations of the Prediction Matrix are:

• Explanations given for different cell values may not be based on very clear or substantial arguments and evidence

o This means that choices of cell values should be discussed and debated as much as possible, well documented, and then exposed to external scrutiny. This is why it is best to develop the Prediction Matrix in a workshop setting.

• A good fit between predicted and actual outcomes could be achieved by more than one set of cell values in the matrix. There may be more than one “solution”

o If this is found to be the case in a particular real-life application, then the important question is which of these sets of cell values can be best explained by the available evidence and argument.

When assessing the value of the Prediction Matrix it should be compared to other tools available or usable in the same context for the same purpose, not against an ideal standard that no one can meet.

 

Relationship to other methods

 

1. Realist Evaluation

The structure of a Prediction Matrix can be related to Pawson and Tilley’s concept of Context-Mechanism-Outcome configurations, in their school of Realist Evaluation. The Contexts are the district locations and the values given in their cells for what could be called the mediating variables, listed in rows 16 to 20. The Mechanisms are the interventions (independent variables) listed in rows 11 to 15, and the values given to their cells in each location. The expected Outcome is in row 24.

When I shared a description of the Prediction Matrix with Nick Tilley in 2006 he commented: “I enjoyed this. It looks a useful tool. I like the corrigibility [i.e. the ability to be adjusted and improved]. I can see the fit with what David and I were saying. On a realist front I guess what might emerge are not underlying causal mechanisms but flags for them.”

 

2. Qualitative Comparative Analysis (QCA)

This family of methods was developed by Charles Ragin. It also involves looking at a relatively small number of cases and how differences in their attributes relate to differences in observed outcomes. In contrast to the Prediction Matrix, QCA matrices simply indicate the presence or absence of an attribute (via a 0 or 1), not its relative importance (via a ranking value). And instead of showing all locations as separate entries, locations or incidences which have the same attributes are collapsed into one entry, with an additional attribute describing its frequency of occurrence (a small illustrative sketch follows the notes below). The process of then identifying the relationship between these different configurations and the presence/absence of the observed outcomes also differs. Through a process of comparison, facilitated by software, one or more combinations of attributes are found which can predict the observed outcomes.

PS: In “Using Qualitative Comparative Analysis (QCA) and Fuzzy Sets”, Peer C. Fiss says “QCA is not useful in very small-N situations (e.g. less than 12 cases)”. These are the circumstances where ranking is possible. Wendy Olsen says QCA is best for between 9 and 200 cases.

PS: Fuzzy Set QCA allows cases to have a degree of an attribute, rather than simply having it or not.
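To make the contrast concrete, here is a minimal sketch (with invented data) of the kind of truth table crisp-set QCA works from: each case is coded 0/1 on its attributes and on the outcome, and cases with identical configurations are collapsed into one entry with a frequency count. The sketch stops there; the subsequent minimisation of configurations is normally done with dedicated QCA software.

```python
# Minimal sketch of a crisp-set QCA-style truth table (invented data).
import pandas as pd

cases = pd.DataFrame([
    # attributes and outcome coded as 0/1 (crisp sets)
    dict(training=1, campaigns=1, proximity=1, outcome=1),
    dict(training=1, campaigns=1, proximity=1, outcome=1),
    dict(training=1, campaigns=0, proximity=1, outcome=1),
    dict(training=0, campaigns=1, proximity=0, outcome=0),
    dict(training=0, campaigns=0, proximity=0, outcome=0),
    dict(training=0, campaigns=0, proximity=0, outcome=0),
])

# Collapse identical configurations into one entry, with a frequency column.
truth_table = (
    cases.groupby(["training", "campaigns", "proximity", "outcome"])
    .size()
    .reset_index(name="n_cases")
)
print(truth_table)
```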

Consultation Draft: “Better information: better aid” Accra, August 2008 

Produced by aidinfo. aidinfo is an initiative to contribute to faster poverty reduction by making aid more transparent.

This is a draft for consultation that summarises the evidence we have gathered so far. We welcome suggestions, additions, comments and corrections.

INVITATION: Building the Evidence to Reduce Poverty – launch of the public consultation on DFID’s new Evaluation Policy

Date: Tuesday 9th December, 2.30pm
Venue: Department for International Development, Room 3W11, 1 Palace Street, London, SW1E 5HE

Chair: Sue Owen, Director General of Corporate Performance, DFID. With presentations from David Peretz, Chair of the Independent Advisory Committee on Development Impact; Nick York, Head of Evaluation Department, DFID

RSVP: Kirsty Burns, Evaluation Department, kirsty-burns@dfid.gov.uk, 01355 84 3602, by Friday 5th December 2008

Background notes

Development is about achieving results that make a difference for the poor in their daily lives.
Evaluation is a key instrument both to inform decision makers and to hold DFID to account for its choices and actions.

The Independent Advisory Committee on Development Impact (IACDI) was established in December 2007, with members selected for their international development and evaluation expertise. Its formation was an important step forward towards strengthening evaluation for DFID. It demonstrated that the UK Government is committed to independent, open, and transparent scrutiny of its development assistance.

The new policy comes at the end of the first year of IACDI’s oversight of DFID’s evaluation work.
It is vital that we also draw on the views of our delivery partners across the world, and this is why the draft policy, along with a proposed list of topics to focus evaluation on over the next three years, is being put out for public consultation.

This event marks the launch of the external consultation process, which will be open for 12 weeks. DFID will launch its final policy in March.

You and your organisation are invited to take part in the consultation process, beginning with this event. There you will have an opportunity to put questions to David Peretz, the Chair of the Independent Advisory Committee on Development Impact, as well as Sue Owen, DFID’s Director General for Corporate Performance, and Nick York, DFID’s Head of Evaluation.

Please let us know promptly if you plan to attend or if a colleague will attend in your place. Names need to be provided to DFID security staff to ensure admission.

Further details will then be sent to those joining the event closer to the time.

Participatory Impact Assessment: a Guide for Practitioners

The Feinstein International Center has been developing and adapting participatory approaches to measure the impact of livelihoods-based interventions since the early nineties. Drawing upon this experience, the guide aims to provide practitioners with a broad framework for carrying out project-level Participatory Impact Assessments (PIA) of livelihoods interventions in the humanitarian sector. Apart from some health, nutrition, and water interventions, in which indicators of project performance should relate to international standards, there are no ‘gold standards’ for measuring the impact of many interventions. This guide aims to bridge this gap by outlining a tried and tested approach to measuring the impact of livelihoods projects. The tools in the guide have been field tested over the past two years in a major research effort, funded by the Bill & Melinda Gates Foundation and involving five major humanitarian NGOs working across Africa.

Download a PDF copy of the guide here

Impact assessment: Drivers, dilemmas and deliberations

Prepared for Sightsavers International by Jennifer Chapman & Antonella Mancini
jenny.chapman@tiscali.co.uk antonella.mancini@blueyonder.co.uk 9 pages 10th April 2008

“This paper investigates key debates and issues around impact assessment and performance measurement for UK development NGOs. It was originally written for Sightsavers to stimulate debate and thinking among staff, Board and senior management team. This version has been amended to be relevant for a wider NGO audience. It is based on the authors’ many years’ experience, a reading of key documents, and 11 interviews with informants selected because they are influential in these debates and/or have first-hand experience of trying to implement impact assessment or performance measurement systems within NGOs. The paper has been put together in a relatively short period of time and does not claim to be based on rigorous research.”

Glossary of Key Terms in Evaluation and Results Based Management

(via Xceval email list)

We are pleased to inform you that the Arabic version of the DAC Evaluation Network’s “Glossary of Key Terms in Evaluation and Results Based Management,” has been released. The glossary is now available in thirteen languages! The multilingual glossary serves to promote shared understandings and facilitate joint work in evaluation. The strong demand for new versions of the Glossary is an indication of its relevance for DAC members and other development partners around the world. The Arabic Glossary was produced in collaboration with the Islamic Development Bank and the African Development Bank.

You can find a video link in English and Arabic presenting the new glossary on our website. The interviews were held at the recent launch event at the African Development Bank. The Islamic Development Bank will make an official launch with the Arab co-ordination group later in the month of June.

The Secretariat

Monday Developments issue on NGO accountability

(via Niels Keijzer on the Pelikan email list)

The December 2007 issue of Monday Developments, a monthly magazine published by InterAction (the largest coalition of NGOs in the United States), explores key accountability issues for NGOs. Through various angles, the issue looks into “(…) the conflicts organizations face with scarce resources, demanding missions and the need to evaluate progress and effectiveness.”

The articles include views on the topic from development donors, the Humanitarian Accountability Project, the importance of listening for accountability, implications for evaluation standards and practice, downward accountability, …

You can download the magazine here:
http://www.interaction.org/files.cgi/6117_MDDec2007.pdf
