funds for another purpose.
Against this background, this paper has been commissioned by DFID to answer two main questions:
1. What different methods and approaches can be used to estimate the value of evaluations before commissioning decisions are taken and what tools and approaches are available to assess the value of an already concluded evaluation?
2. How can these approaches be simplified and merged into a practical framework that can be applied and further developed by evaluation commissioners to make evidence-based decisions about whether and how to evaluate before commissioning and contracting?”
- “…there is surprisingly little empirical evidence available to demonstrate the benefits of evaluation, or to show they can be estimated” … “‘Evidence’ is thus usually seen as axiomatically ‘a good thing’”
- “A National Audit Office (NAO) review (2013) of evaluation in government was critical across its sample of departments – it found that: ‘There is little systematic information from the government on how it has used the evaluation evidence that it has commissioned or produced’.”
- “…there is currently no systematic approach to valuing the benefits of an evaluation, whether at the individual or at the portfolio level”
- “…most ex-ante techniques may be too time-consuming for evaluation commissioners, including DFID, to use routinely”
- “The concept of ‘value’ of evaluations is linked to whether and how the knowledge generated during or from an evaluation will be used and by whom.”
The paper proposes the following:
- “Consider selecting a sample of evaluations for ex-post valuation within any given reporting period”. Earlier it notes that “…a growing body of ex-post valuation of evaluations at the portfolio level, and their synthesis, will build an evidence base to inform evaluation planning and create a feedback loop that informs learning about commissioning more valuable evaluations”
- “Qualitative approaches that include questionnaires and self-evaluation may offer some merits for commissioners in setting up guidance to standardise the way ongoing and ex-post information is collected on evaluations for ex-post assessment of the benefits of evaluations.”
- “Consider using a case study template for valuing DFID evaluations”
- “An ex-ante valuation framework is included in this paper (see section 4) which incorporates information from the examination of the above techniques and recommendations. Commissioners could use this framework to develop a tool, to assess the potential benefit of evaluations to be commissioned”.
While I agree with all of these recommendations, I would add the following:
- There is already a body of empirically-oriented literature on evaluation use dating back to the 1980s that should be given adequate attention. See my (probably incomplete) bibliography here. That literature includes a very recent 2016 study by USAID.
- The use of case studies of the kind used by the Research Excellence Framework (REF), known as ‘Impact Case Studies’, makes sense. As the paper notes, “The impact case studies do not need to be representative of the spread of research activity in the unit rather they should provide the strongest examples of impact”. They are, in other words, a kind of “Most Significant Change” (MSC) story, including the MSC-type requirement that there be “a list of sufficient sources that could, if audited, corroborate key claims made about the impact of the research”. Evaluation use is not the kind of outcome where it makes much sense to invest a lot of effort in establishing “average effects”. Per unit of money invested, it would seem to make more sense to search for the most significant changes (both positive and negative) that people perceive as the effects of an evaluation.
- The ex-ante valuation framework is in effect a “loose” Theory of Change, which needs to be put to use and then tested for its predictive value! Interpreted in crude terms, presumably the more of the criteria listed in the Evaluation Decision Framework (on page 26) that are met by a given evaluation, the higher our expectations should be that the evaluation will be used and have an impact. There are stacks of normative frameworks around telling us how to do things, e.g. on how to have effective partnerships. However, good ideas like these need to be disciplined by some effort to test them against what happens in reality. A crude check of this kind is sketched below.
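To make that testing idea concrete, here is a minimal sketch, using entirely hypothetical data: each commissioned evaluation is scored on how many Evaluation Decision Framework criteria it met ex-ante, and that count is then correlated with an ex-post rating of how much the evaluation was actually used. None of the numbers below come from the paper; they simply illustrate what a first, crude test of predictive value could look like.

```python
# Minimal sketch: does "number of ex-ante criteria met" predict later use?
# All data here are hypothetical placeholders, not taken from the DFID paper.
from statistics import correlation  # available in Python 3.10+

# (criteria met ex-ante, ex-post use rating on a 0-10 scale) per evaluation
evaluations = [
    (8, 7), (3, 2), (6, 6), (9, 8),
    (4, 5), (7, 4), (2, 1), (5, 6),
]

criteria_met = [met for met, _ in evaluations]
use_rating = [use for _, use in evaluations]

# A clear positive correlation would lend some support to the framework's
# predictive value; a weak or negative one would suggest revising it.
r = correlation(criteria_met, use_rating)
print(f"Pearson r between criteria met and reported use: {r:.2f}")
```

Even a small set of ex-post case studies scored along these lines would begin to build the feedback loop between ex-ante and ex-post valuation that the paper itself calls for.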