Beneficiary Feedback in Evaluation

Produced for the DFID Evaluation Department by Lesley Groves, February 2015. Available as a PDF.

The purpose of this paper is to analyse current practice of beneficiary feedback in evaluation and to stimulate further thinking and activity in this area. The Terms of Reference required a review of practice within DFID and externally. This is not a practical guide or How to Note, though it does make some recommendations on how to improve the practice of beneficiary feedback in evaluation. The paper builds on current UK commitments to increasing the voice and influence of beneficiaries in aid programmes. It has been commissioned by the Evaluation Department of the UK Department for International Development (DFID).

Evidence base

The paper builds on:

  • A review of over 130 documents (DFID and other development agencies), including policy and practice reports, evaluations and their Terms of Reference, web pages, blogs, journal
    articles and books;
  • Interviews with 36 key informants representing DFID, INGOs and evaluation consultants/consultancy firms,
    and a focus group with 13 members of the Beneficiary Feedback Learning Partnership;
  • Contributions from 33 practitioners via email and through a blog set up for the purpose of this research (https://beneficiaryfeedbackinevaluationandresearch.wordpress.com/); and
  • Analysis of 32 evaluations containing examples of different types of beneficiary feedback.

The research process revealed that the literature on beneficiary feedback in evaluation is scant. It also revealed, however, a strong appetite for developing a shared understanding and for building on existing, if limited, practice.

Contents
Executive Summary
Introduction
Part A: A Framework for a Beneficiary Feedback Approach to Evaluation
A.1 Drawing a line in the sand: defining beneficiary feedback in the context of evaluation
A.1.1 Current use of the term “beneficiary feedback”
A.1.2 Defining “Beneficiary”
A.1.3 Defining “Feedback”
A.2 Towards a framework for applying a “beneficiary feedback” approach in the context of evaluation
A.3 A working definition of beneficiary feedback in evaluation
Part B: Situating Beneficiary Feedback in Current Evaluation Practice
B.1 Situating beneficiary feedback in evaluation within DFID systems and evaluation standards
B.1.1 Applying a beneficiary feedback approach to evaluation within DFID evaluations
B.1.2 Inclusion of beneficiary feedback in evaluation policies, standards and principles
B.2 Learning from experience: Assessment of current practice
B.2.1 Existing analysis of current performance of beneficiary feedback in the development sector generally
B.2.2 Specific examples of beneficiary feedback in evaluation
Part C: Enhancing Evaluation Practice through a Beneficiary Feedback Approach
C.1 How a beneficiary feedback approach can enhance evaluation practice
C.2 Checklists for evaluation commissioners and practitioners
C.3 What are the obstacles to beneficiary feedback in evaluation and how can they be overcome?

Postscript (May 2015): See also the associated checklists on this blog page: Downloadable Checklist for Commissioners and Evaluators.

Rick Davies Comment: I am keen on the development and use of checklists, for a number of reasons. They encourage systematic attention to a range of relevant issues, and they make any lack of attention to those issues more visible and accountable. But I also like Scriven’s comments on checklists:

“The humble checklist, while no one would deny its utility in evaluation and elsewhere, is usually thought to fall somewhat below the entry level of what we call a methodology, let alone a theory. But many checklists used in evaluation incorporate a quite complex theory, or at least a set of assumptions, which we are well advised to uncover; and the process of validating an evaluative checklist is a task calling for considerable sophistication. Indeed, while the theory underlying a checklist is less ambitious than the kind that we normally call a program theory, it is often all the theory we need for an evaluation.”

Scriven’s comments prompt me to ask, in the case of Lesley Groves’s checklists: if the attributes listed in the checklists are what we should ideally find in an evaluation, and many or all of them are in fact present, what outcome(s) might we then expect to see associated with these features of an evaluation? On page 23 of her report she lists four possible desirable outcomes:

  • Generation of more robust and rigorous evaluations, particularly to ensure that unintended and
    negative consequences are understood;
  • Reduction of participation fatigue and beneficiary burden through processes that respect
    participants and enable them to engage in meaningful ways;
  • Support for development and human rights outcomes;
  • Making programmes more relevant and responsive.

With this list we are on our way to having a testable theory of how beneficiary feedback can improve evaluations.

The same chapter of the report goes even further, identifying the different types of outcomes that could be expected from different combinations of uses of beneficiary feedback, in a four-by-four matrix (see page 27).

Comments?
