Participatory Approaches. Methodological Briefs: Impact Evaluation No. 5, by Irene Guijt (found via the Better Evaluation website). Available as a PDF.
“This guide, written by Irene Guijt for UNICEF, looks at the use of participatory approaches in impact evaluation… By asking the question, ‘Who should be involved, why and how?’ for each step of an impact evaluation, an appropriate and context-specific participatory approach can be developed.”
Contents
- Participatory approaches: a brief description
- When is it appropriate to use this method?
- How to make the most of participatory approaches
- Ethical concerns
- Which other methods work well with this one?
- Participation in analysis and feedback of results
- Examples of good practices and challenges
Rick Davies comment: I like the pluralist approach this paper takes towards the use of participatory approaches. It is practically oriented rather than driven by an ideological belief that people’s participation must always be maximised. That said, I did find Table 1, “Types of participation by programme participants in impact evaluation”, out of place, because it is a typology built on a very simple linear scale, with fairly obvious implications not only about what kinds of participation are possible but about which ones are more desirable. On the other hand, I thought Box 3 was really useful, because it spelled out a number of useful questions to ask about possible forms of participation at each stage of the evaluation design, implementation and review process. It is worth noting that, given the 22 questions, and assuming for argument’s sake that each had a binary answer, there are at least 2^22 = 4,194,304 different ways of building participation into an evaluation (see the sketch below). That seems a bit closer to reality to me than the earlier classification of four types in Table 1.
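As a quick sanity check on that arithmetic, here is a minimal Python sketch; it assumes only what is stated above (22 questions from Box 3, each treated as binary):

```python
from itertools import product

# Box 3's 22 questions, each given a yes/no answer for argument's sake
N_QUESTIONS = 22

# Direct calculation: each binary question doubles the number of combinations
print(2 ** N_QUESTIONS)  # 4194304

# Equivalent check by enumerating the answer profiles themselves
print(sum(1 for _ in product((True, False), repeat=N_QUESTIONS)))  # 4194304
```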
The one area where I would like more detail and examples is participatory approaches to the analysis of data: not the collection of data, but its analysis. There is some discussion on page 11 about causality which it would be great to see developed further. I often feel that this is an area of participatory practice where a yellow post-it note might as well be placed, saying “here a miracle occurs”.
Thanks for sharing this resource, Rick.
I agree that Table 1 implies a kind of either/or choice of a certain degree of participation. My intention with that table was to point out that what some people consider participation – simply getting data from intended beneficiaries – represents a limited interpretation, when much more is possible. I wanted to help people think more about the difference between doing evaluation of or on people and doing evaluation with them. As I wrote in the paper, these different interpretations “… highlight the importance of clarifying how terms such as ‘participation’ and ‘involvement’ are defined. It can help to avoid situations when consultation of programme participants’ opinions is assumed to be empowering simply because it is labelled ‘participatory’ or situations when commitments are made without making appropriate resources available.”
I completely agree that participation in analysis needs much more attention from the field of evaluation. It is a critical area where we need to invest, experiment and share much more. Hence the upcoming session in early September at the AES conference in Melbourne by Judy Oakden, Kate McKegg and myself on collective sense-making.
Hear, hear on the analysis gap… we’ve been trying to simplify that ‘miracle’ in our Assessing Rural Transformations Project. We’ve developed a simpler and faster way of dealing with qualitative data in Excel, which enables you to get at the ‘attribution question’ by identifying key drivers of change from the frequency with which they were cited. Using Excel makes it cheap and easy to access for all practitioners. Examples of the tables produced as part of the Qualitative Impact Assessment Protocol (QUIP) analysis can be found here: http://www.bath.ac.uk/cds/documents/quip-briefing-paper-march-2015.pdf
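To make the counting idea concrete: QUIP itself works in Excel, but a hypothetical Python sketch of the same frequency tally might look like the one below. The driver codes and responses here are invented for illustration, not taken from the QUIP briefing paper:

```python
from collections import Counter

# Hypothetical coded responses: each list holds the drivers of change that one
# respondent cited in their narrative (the driver codes here are invented).
coded_responses = [
    ["new_seed_variety", "training", "good_rainfall"],
    ["training", "market_access"],
    ["new_seed_variety", "training"],
    ["good_rainfall"],
]

# Tally how often each driver is cited across all respondents; frequently
# cited drivers become candidate explanations when weighing attribution.
driver_counts = Counter(
    driver for response in coded_responses for driver in response
)

for driver, count in driver_counts.most_common():
    print(f"{driver}: cited by {count} respondent(s)")
```

The same tally is straightforward to reproduce in an Excel pivot table, which is presumably why a spreadsheet keeps the method cheap and accessible for practitioners.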