Symposium: “Getting to Results: Evaluation Capacity Building and Development”

International Development Evaluation Association

Date: March 17-20, 2009

Venue: Johannesburg, South Africa

CALL FOR SUBMISSION OF PAPER PROPOSALS.

Submission Deadline: January 12, 2009

Please note that scholarships for individuals from developing or transition countries are available.

Introduction:

The Board of the International Development Evaluation Association (IDEAS) is pleased to announce its next Global Assembly on March 18-20, 2009 in Johannesburg, South Africa, preceded by professional development training sessions on March 17. The theme of the assembly will be evaluation capacity building and its role in development.

The Assembly will focus on the issues involved in evaluation capacity building, how such efforts can strengthen the evidence available to organizations and countries to inform their own development, and what we know of good practices in this area. Capacity building has been recognized for a decade or more now as crucial to development. The measurement (and management) issues embedded in generating and disseminating evaluative information are now understood to be critical to informing decision making. This conference will explore these topics with the intent of clarifying present knowledge on evaluation capacity building, learning what is working well (or not), and identifying the challenges in taking these promising efforts forward. The intention is to inform the results agenda within the development context.

The theme of this coming global assembly underscores the role that evaluative knowledge can play in development in general, and more particularly, how to build and sustain the capacity to bring evaluative knowledge into the decision making process so as to enhance the achievement of results. Thus, the theme of evaluation capacity building encompasses issues of knowledge creation, knowledge transmission, knowledge synthesis, and sustainability.

MAPPING OF MONITORING AND EVALUATION PRACTICES AMONG DANISH NGOS

Final Report May 2008, Hanne Lund Madsen, HLM Consult

As a first step in the follow-up to the Assessment of the Administration of the Danish NGO Support, the Evaluation Department of the Ministry, in cooperation with the Quality Assurance Department and the NGO Department, wished to map the existing evaluation and monitoring practices among the Danish NGOs, with a view to establishing the basis for a later assessment of how Danida can systematize the use of results, measurements and evaluations within the NGO sector.

The mapping entailed a review of M&E documentation from 35 NGOs, bilateral consultations with 17 NGOs, interviews with other stakeholders within the Ministry, the Danish resource base and Projektrådgivningen, and a mini-seminar with the Thematic Forum.

International Course on ‘Participatory Planning, Monitoring & Evaluation: Managing and Learning for Impact’

Date: 2-20 March 2009
Venue: Wageningen, The Netherlands

This course is organised by Wageningen International, part of Wageningen University and Research Centre. The course focuses on how to design and institutionalise participatory planning and M&E systems in projects, programmes and organisations for continuous learning and enhanced performance. Particular attention is paid to navigating and managing for impact, and to the relationship between management information needs and responsibilities and the planning and M&E functions. For more information, please visit our website: http://www.cdic.wur.nl/UK/newsagenda/agenda Participatory_planning_monitoring_and_evaluation.htm

or contact us: training.wi@wur.nl or cecile.kusters@wur.nl

Participants come from all over the world, from the government, NGO and academic sectors, and are mainly in management positions or M&E functions. You are most welcome to join this select group!

Kind regards / Hartelijke groeten,

Cecile Kusters
Participatory Planning, Monitoring & Evaluation
Multi-Stakeholder Processes and Social Learning
Wageningen UR, Wageningen International
P.O.Box 88, 6700 AB Wageningen, the Netherlands
Visiting address: Lawickse Allee 11, Building 425, 6701 AN Wageningen, The Netherlands
Tel.  +31 (0)317- 481407
Fax. +31 (0)317- 486801
e-mail: cecile.kusters@wur.nl
Website: www.cdic.wur.nl/UK
PPME resource portal: http://portals.wi.wur.nl/ppme
MSP resource portal: http://portals.wi.wur.nl/msp/
www.disclaimer-uk.wur.nl

The Logic Model Guidebook: Better Strategies for Great Results

Authors: Lisa Wyatt Knowlton (Ed.D.) and Cynthia C. Phillips (Ph.D.). Published October 2008. Available on Amazon. Recommended by Jim Rugh.

Excerpt: “We approach logic models as important thinking and inquiry tools, and logic modeling as a process that contributes to clarity about a sequence of interactive relationships. Logic models display relationships of many kinds: between resources and activities, activities and outcomes, outcomes and impact. This display provides an opportunity to review critically the logic of these relationships and their content. Are the underlying assumptions, choices, and relationships sensible, plausible, feasible, measurable? Logic models assist strategy and contribute to performance management through discovery of the most effective means to specific results… The modeling process includes a cycle of display, review, analysis, critique and revision to improve the model. These action steps, tackled with colleagues or stakeholders, can contribute significantly to more informed displays and, ultimately, more successful programs and projects.”

Complexity in Aid Workshop series: Strategy in a complex world

Date: January 14, 2009, all day (9-5)
Venue: CAFOD offices in London (Stockwell).

The world is becoming increasingly inter-related, complex and fast-changing, and yet many organisations continue to use traditional methods for strategy development, organisation change and leadership, even when these methods have questionable success. Why is this? What has to happen for strategists and policy makers to give up behaving as if the world is predictable, measurable and controllable? And what should be done instead?

In this latest workshop of the emerging community on “Complexity in Aid” we will review the paradox of complexity and see what it means for organisational and strategic approaches; we will consider how to get people engaged in these ideas and what complexity thinking implies for practice.

The workshop will be led by Dr Jean Boulton. Jean has a PhD in physics and designed and led the teaching on complexity for several years at Cranfield School of Management. She now teaches complexity on the MSc in Responsible Business Practice at Bath School of Management and works with organisations in the areas of strategy and organisation change. She is currently co-authoring a book, ‘Embracing Complexity’, with Professor Peter Allen, to be published by Oxford University Press in 2009. See www.embracingcomplexity.co.uk

There will be plenty of opportunity during the workshop for discussion and for considering how these ideas challenge current methods of strategic planning and implementation; we will look at the balance between the formal and the informal, the espoused and the actual.
With this in mind, you might like to consider the following questions:
*       How is strategy developed in your organisation? To what extent does it shape practice? How do you know?
*       What ways of working, formal or informal, global or local, really shape what your organisation actually does? What, in practice, has the most influence on the direction the organisation travels?

Places are limited. If you would like to attend, please email
learning@cafod.org.uk

Prediction Matrices

Update December 2014: This page is now a subsidiary section of the page on Predictive Models

Purpose: Prediction Matrices are for building and testing complex predictions

Suitability: Where interventions take place in multiple settings in parallel; where there is some variation in the ways those interventions are implemented across the different settings; and where there is some variation in the nature of local settings where the interventions take place. For example, a maternal health improvement project implemented by District Health Offices in different districts across Indonesia

The core idea: A useful prediction about large scale changes can be built up out of many small micro-judgements, using relative rather than absolute judgements

Caveat Emptor: This method was developed for use in Indonesia a few years ago, but has never been tested in practice

Introduction

The Prediction Matrix is a relatively simple tool for developing and testing predictions of how different events are expected to lead to a particular outcome. This can be useful at two points in time

  • When retrospectively trying to assess how much various project activities have already contributed to a known outcome
  • When planning a package of activities that are expected to contribute to a desired outcome (which has a specific target)

The Prediction Matrix does this by developing a model of how a project works. This can then be used to generate a predicted outcome, which can then be compared to a known (or planned) outcome. If the predicted outcome fits the known outcome, the model can be said to be working, because it fits well with reality. If it does not fit well, then this signals to us that we have to revise our thinking about what causes the outcomes. This will have implications for evaluation findings (if used retrospectively) and the contents of plans for future activities (if used prospectively)

The Prediction Matrix has its limitations, which will be described. But it does have some advantages over simpler alternatives, such as a one-to-one cross-tabulation between an expected cause (present/absent) and a known outcome (present/absent). The problem with simple cross-tabulations is that they leave out the possibility that the outcome may be the result of multiple causes, including causes a project has no control over.

The Prediction Matrix can be produced using Excel, projected onto a screen in a workshop setting. An example matrix format is shown below. It should be referred to, step by step, when reading the instructions below on how to construct a model and its predictions about outcomes.

Process for constructing the model

1. Identify the outcome that is of interest. For example, in the IMHEI project in Indonesia, this was the percentage of deliveries assisted by trained health staff. This was recognised as a proxy measure of improved maternal health.

2. Identify the locations where data is available on this outcome. Many locations are better than few. So if data is available at district as well as province level, break it down to district level. In the IMHEI project this data was available for four provinces.

• List the locations in row 10 (insert more if needed)

• Insert the data on known outcomes, for each of these locations in row 28

NB: If the Prediction Matrix is being used for planning purposes, the outcome data could be the levels of coverage (in % terms) expected by the end of the plan period

3. Identify the potential causes of differences in these outcomes across the districts, including those causes the project can influence and others it cannot influence

• List these in column labelled “Expected causes of the outcome“, in rows 11 to 20 (and add more rows if needed)

• Convert any negatively stated causes into positives, e.g. from “geographic isolation” to “proximity to regional capital”, so that all causes have the same direction of influence (i.e. helping to improve maternal health)

4. You may recognise that not all causes are equally important. Some might be expected to have a much bigger overall effect than others. You can build this view into the model by allocating 100 “cause” points down the column on the left of the list of causes. Place many points in the row for a cause you think will have a big overall effect across the project as a whole. Place few points in the row for a cause you think will have a relatively small overall effect across the project as a whole. Make sure all listed causes have some points, or remove any cause that you don’t want to give points to. Make sure all the points allocated add up to 100 (look at the Check Sum row at the bottom, row 22)

5. Now look at each cause, in turn. Look across the locations in the same row and identify where it is expected to have a big effect, a small effect, or no effect at all. Use 100 “cause” points to indicate where the effects are expected to be. Place many points in the location cell where a big effect is expected. Place few points where a little effect is expected; place no points where no effect is expected. Place equal points in all cells, if the effect is expected to be equal in all locations. But make sure the total number of points in the row = 100 (see Check Sum column on the far right).

6. When assigning points to each cause in each location, make use of all available and relevant information (statistics, reports, staff observations) that has any merit. It may be useful to make a note of which of these sources were used. Use the Right Click > Insert Comment function in Excel to record these for any cell

7. Go through the same process again with each of the other expected causes, working your way down the rows of causes.

8. As you do this, a cumulative point score will appear in row 24, for each location. These cell values signify the predicted relative impact of all the causes on each location. Each cell value here = the sum, across all the causes, of (the value in each “district” cell above (created in step 5) multiplied by the % “cause” points already given to the cause in that row (created in step 4)). You can see the exact formula in this Excel file, by placing the cursor on one of the row 24 cells
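To make the arithmetic concrete, here is a minimal sketch of the step 8 calculation in Python rather than Excel. All of the causes, locations and point allocations below are invented for illustration; none of the figures come from the IMHEI project.

```python
# Hypothetical illustration of the row 24 calculation: all causes, locations
# and point allocations are invented, not taken from the IMHEI project.

causes = ["Trained midwives present", "Proximity to regional capital", "Adequate health budget"]
locations = ["District A", "District B", "District C", "District D"]

# Step 4: 100 "cause" points spread across the causes (overall importance weights)
cause_weights = {
    "Trained midwives present": 50,
    "Proximity to regional capital": 30,
    "Adequate health budget": 20,
}

# Step 5: for each cause, 100 points spread across the locations
# (relative size of its expected effect in each location)
location_points = {
    "Trained midwives present":      {"District A": 40, "District B": 30, "District C": 20, "District D": 10},
    "Proximity to regional capital": {"District A": 25, "District B": 25, "District C": 25, "District D": 25},
    "Adequate health budget":        {"District A": 10, "District B": 20, "District C": 30, "District D": 40},
}

# Check sums (the equivalent of row 22 and the far-right Check Sum column)
assert sum(cause_weights.values()) == 100
for cause in causes:
    assert sum(location_points[cause].values()) == 100

# Step 8: predicted relative impact per location = sum over causes of
# (that cause's location points multiplied by its weight, as a percentage)
predicted_impact = {
    loc: sum(location_points[c][loc] * cause_weights[c] / 100 for c in causes)
    for loc in locations
}
print(predicted_impact)
# {'District A': 29.5, 'District B': 26.5, 'District C': 23.5, 'District D': 20.5}
```

Whether the calculation sits in a dictionary like this or in an Excel row, the point is the same: each location’s predicted score is a weighted sum of many small, separately defensible judgements.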

9. Look at the graph that is shown below the matrix. The graph shows the relationship between two sets of figures in the model

• The predicted impact scores in row 25

• The known outcomes, in row 28

10. Also shown below, in row 31, is a correlation figure, showing how well these two sets of figures correlate with each other: 0.99 is a very high correlation, 0.11 is a very low correlation. I should state here that the example shown is an imagined one. In practice a correlation of 0.94 is probably very unlikely.

11. If the two sets of figures are highly correlated, the model is fitting well with reality. If there is a weak or non-existent correlation, it has a poor fit
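Continuing the hypothetical sketch above, the fit test in steps 10 and 11 amounts to a Pearson correlation between the predicted scores and the known outcomes. The outcome figures here are again invented purely for illustration.

```python
# Continuation of the hypothetical sketch: compare predicted impact scores
# with (invented) known outcomes using a Pearson correlation.
from statistics import correlation  # available in Python 3.10+

predicted_scores = [29.5, 26.5, 23.5, 20.5]  # the cumulative scores, per district
known_outcomes   = [62.0, 55.0, 51.0, 43.0]  # % of assisted deliveries, per district (invented)

r = correlation(predicted_scores, known_outcomes)
print(f"Correlation between predicted and known outcomes: {r:.2f}")
# Roughly 0.99 for these invented figures: a value near 1 suggests the model
# fits well; a value near 0 suggests the causes or weightings need rethinking.
```

In the Excel version, the same figure can be produced with the CORREL function.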

12. If the model does not fit with reality, then the cell values and the weightings of each cause can be changed to produce a better fit. BUT this should be done carefully. In principle, the choice of all cell values (which are the participants’ judgements) needs to be accountable. That is, it should be possible to explain to non-participants why those values have been chosen, when compared to others in the same row. This is where the inserted evidence comments, mentioned above, will be useful.

Suggestion: When collecting the micro-judgements on the “cause” points to be allocated across the causes (step 4) and across the locations (steps 5-7), it would be best to obscure rows 24 and below, to prevent any emerging macro-level trends from influencing the micro-judgements. Rows 24 and below could be revealed when all micro-judgements have been completed.

Commentary on the method

The Prediction Matrix makes use of subjective judgements and interpretations, but at the price of requiring those judgements to be transparent and accountable. So, if cell values are changed to improve the fit of the model with reality, the reasons for those changes need to be clearly explained.

Behind the design of the Prediction Matrix are some important assumptions:

Two assumptions are related to large scale programs:

1. In large scale programs most outcomes of concern have multiple causes

2. The combination of causes that leads to a specific outcome is often context specific. It varies from location to location

Two assumptions relate to how to build good models:

1. The more detailed a model is, the more vulnerable it is to disproof. Vulnerability to disproof is desirable; over time it should lead to improvement in the model. The models produced by the Prediction Matrix have two dimensions of detail:

• The number of causes (more are better)

• The number of locations where those causes may be present (more are better)

2. The more transparent a model is, the more vulnerable it is to disproof. Two aspects of the Prediction Matrix are transparent:

• The importance weightings given to each cause

• The relative impact weightings given in each location

What is not yet transparent in the Excel version of the Prediction Matrix, but which could be (via inserted Comments) are:

• The reasons given for different weightings to the causes

• The reasons given for different impact weightings

The limitations of the Prediction Matrix are:

• Explanations given for different cell values may not be based on very clear or substantial arguments and evidence

o This means that choices of cell values should be discussed and debated as much as possible, well documented, and then exposed to external scrutiny. This is why it is best to develop the Prediction Matrix in a workshop setting.

• A good fit between predicted and actual outcomes could be achieved by more than one set of cell values in the matrix. There may be more than one “solution”

o If this is found to be the case in a particular real-life application, then the important question is which of these sets of cell values can best be explained by the available evidence and argument.

When assessing the value of the Prediction Matrix it should be compared to other tools available or usable in the same context for the same purpose, not against an ideal standard that no one can meet.

 

Relationship to other methods

 

1. Realist Evaluation

The structure of a Prediction Matrix can be related to Pawson and Tilley’s concept of Context-Mechanism-Outcome configurations, in their school of Realist Evaluation. The Contexts are the district locations and the values given in their cells to what could be called the mediating variables listed in rows 16 to 20. The Mechanisms are the interventions (independent variables) listed in rows 11 to 15, and the values given to their cells in each location. The expected Outcome is in row 24.

When I shared a description of the Prediction Matrix with Nick Tilley in 2006 he commented: “I enjoyed this. It looks a useful tool. I like the corrigibility [i.e. ability to be adjusted and improved]. I can see the fit with what David and I were saying. On a realist front I guess what might emerge are not underlying causal mechanisms but flags for them.”

 

2. Qualitative Comparative Analysis (QCA)

This family of methods was developed by Charles Ragin. QCA also involves looking at a relatively small number of cases and how differences in their attributes relate to differences in observed outcomes. In contrast to the Prediction Matrix, QCA matrices simply indicate the presence or absence of an attribute (via a 0 or 1), not its relative importance (via a ranking value). And instead of showing all locations as separate entries, locations or incidences which have the same attributes are collapsed into one entry, with an additional attribute describing its frequency of occurrence. The process of then identifying the relationship between these different configurations and the presence/absence of the observed outcomes also differs. Through a process of comparison, facilitated by software, one or more combinations of attributes are found which can predict the observed outcomes.
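To make the contrast concrete, here is a small, purely hypothetical sketch of the crisp-set QCA-style data structure described above. The attribute names and cases are invented, and real QCA analysis would normally be carried out with dedicated software (such as Ragin’s fsQCA program or the QCA package for R), not a few lines of Python.

```python
# Hypothetical sketch of a crisp-set QCA-style table: attributes are coded 0/1
# and cases with identical configurations are collapsed into one row with a frequency.
from collections import Counter

# (trained_midwives, near_capital, adequate_budget) per case -- invented data
cases = {
    "District A": (1, 1, 0),
    "District B": (1, 1, 0),
    "District C": (1, 0, 1),
    "District D": (0, 0, 1),
    "District E": (0, 0, 1),
}

configurations = Counter(cases.values())
for config, frequency in configurations.items():
    print(config, "-> occurs in", frequency, "case(s)")
# (1, 1, 0) -> occurs in 2 case(s)
# (1, 0, 1) -> occurs in 1 case(s)
# (0, 0, 1) -> occurs in 2 case(s)
```

A Prediction Matrix, by contrast, keeps every location as a separate column and records graded (0-100 point) judgements rather than binary codes.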

PS: In Using Qualitative Comparative Analysis (QCA) and Fuzzy Sets, Peer C. Fiss says “QCA is not useful in very small-N situations (e.g. less than 12 cases)”. These are the circumstances where ranking is possible. Wendy Olsen says QCA is best for between 9 and 200 cases.

PS: Fuzzy Set QCA allows cases to have a degree of an attribute, rather than simply having the attribute or not.

New DFID policy on Evaluation

“DFID takes very seriously the responsibility to ensure high quality, independent evaluation of its programmes, to provide reliable and robust evidence to improve the value of its global work to reduce poverty.

In December 2007 the Independent Advisory Committee on Development Impact was established to help DFID strengthen its evaluation processes. The Committee is there to work with DFID to:

  • Determine which programmes and areas of UK development assistance will be evaluated and when;
  • Identify any gaps in the planned programme of evaluations and make proposals for new areas or other priorities as required;
  • Determine whether relevant standards (e.g. of the OECD Development Assistance Committee) are being applied; and comment on the overall quality of the programme of evaluation work carried out against these.

DFID and IACDI have therefore been working closely together to define a new policy which will set the course for evaluation in the future. We have also produced a ‘topic list’ of potential areas for evaluation over the coming 3 years. So you will see here two documents on which we would like your feedback, the Draft Evaluation Policy and the Evaluation Topic List.

Central to the policy is the emphasis on greater independence of evaluation, along with stronger partnership working, reflecting global commitments to harmonisation, decentralising evaluation to a greater degree, driving up quality, and ensuring that learning from evaluation contributes to future decision making. We would like you to consider those high level issues when offering your comment and feedback during the time the consultation process is open. This document does not focus on the operational issues; they will be considered in a separate DFID strategy document.

During the consultation period, we would also like to hear your views on which topics you consider to be the greatest priority and why. This will help DFID to make decisions on which are to be given the highest priority.

In summary the issues we are particularly keen for you to focus your feedback on are:

1. The definition of ‘independent evaluation’ – what are your thoughts on the policy approach of DFID, working increasingly with partners, to increase independence in evaluation?

2. What are your views on what’s required to drive up quality across the board in evaluation of international development programmes? What role do you think DFID can most valuably play in this?

3. What are the considerations for DFID strengthening its own evaluation processes, whilst ensuring its commitments to harmonisation remain steadfast?

4. DFID is determined to increase the value of learning from evaluation to inform policy – what are your thoughts on the means to bring this about?

5. DFID is committed to consulting stakeholders during our evaluations, including poor women and men affected by our programmes. Getting representative stakeholders, especially for evaluations which go beyond specific projects and programmes, can often be challenging (for example evaluations of country assistance plans or thematic evaluations). Do you have any ideas on how to improve this?

6. DFID is committed to developing evaluation capacity in partner countries and increasing our use of national systems. What are your thoughts on the challenges and ways forward?

Please send your feedback to evaluationfeedback@dfid.gov.uk. The public consultation will officially close on Tuesday 3rd March but we would appreciate comments as early as possible, so that they can be considered as the operational issues are further thought out.”

TrainEval – Training for Evaluation in Development, 3rd course in March-July 2009

Date: 10th March 2009
Venue: Brussels, Belgium

TrainEval is an advanced training programme for evaluation in development, which has been further adapted to the specific requirements of European development cooperation and the EC evaluation approach. It has been implemented successfully since February 2008, when it ran for the first time.

The programme has been developed by experienced trainers and evaluators to respond to the increasing demand for evaluation expertise and its professionalization. It offers a qualification opportunity in development evaluation for consultants, project and evaluation managers of implementing agencies, as well as for representatives of financing agencies.

One World Trust on Accountability of Research Organisations

The Accountability Principles of Research Organisations (APRO) report provides a framework for establishing accountability good practices and principles for policy-oriented research organisations working in developing countries. It discusses how One World Trust’s core principles of accountability – participation, transparency, evaluation and complaints handling – can be applied to research. In addition to providing arguments for both the ethical and instrumental need for accountability to a wide range of stakeholders, it also acknowledges the tensions and challenges that different organisations will face in formulating accountability principles.

By drawing on the experiences of sixteen research organisations, which reflect the diversity of evidence-producers in developing countries, the study identified a series of key processes common to most research organisations. For each process, it illustrates the opportunities that exist for research organisations to apply the principles of accountability to interactions with their stakeholders.

The next stage of the APRO project will be to work with partner research organisations to develop, refine and test the accountability guidelines.
