The Value of Evaluation: Tools for Budgeting and Valuing Evaluations

Barr, J., Rinnert, D., Lloyd, R., Dunne, D., & Hentinnen, A. (2016, August). The Value of Evaluation: Tools for Budgeting and Valuing Evaluations. Research for Development Output, GOV.UK. Itad & DFID.

 

Exec Summary (first part): “DFID has been at the forefront of supporting the generation of evidence to meet the increasing demand for knowledge and evidence about what works in international development. Monitoring and evaluation have become established tools for donor agencies and other actors to demonstrate accountability and to learn. At the same time, the need to demonstrate the impact and value of evaluation activities has also increased. However, there is currently no systematic approach to valuing the benefits of an evaluation, whether at the individual or at the portfolio level.

 

This paper argues that the value proposition of evaluations for DFID is context-specific, but that it is closely linked to the use of the evaluation and the benefits conferred to stakeholders by the use of the evidence that the evaluation provides. Although it may not always be possible to quantify and monetise this value, it should always be possible to identify and articulate it.

 

In the simplest terms, the cost of an evaluation should be proportionate to the value that an evaluation is expected to generate. This means that it is important to be clear about the rationale, purpose and intended use of an evaluation before investing in one. To provide accountability for evaluation activity, decision makers are also interested to know whether an evaluation was ‘worth it’ after it has been completed. Namely, did the investment in the evaluation generate information that is in itself more valuable and useful than using the funds for another purpose.

 

Against this background, this paper has been commissioned by DFID to answer two main questions:

1. What different methods and approaches can be used to estimate the value of evaluations before commissioning decisions are taken and what tools and approaches are available to assess the value of an already concluded evaluation?

 

2. How can these approaches be simplified and merged into a practical framework that can be applied and further developed by evaluation commissioners to make evidence-based decisions about whether and how to evaluate before commissioning and contracting?”
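
A minimal sketch, in Python, of the ‘worth it’ test the summary describes: commission an evaluation only if the risk-adjusted value of the evidence it produces exceeds its cost plus the value of the best alternative use of the funds. The function name, parameters and figures below are hypothetical illustrations for this post, not taken from the paper.

```python
# Hedged sketch: all figures are hypothetical, not DFID data.

def worth_commissioning(expected_benefit: float,
                        probability_of_use: float,
                        evaluation_cost: float,
                        alternative_use_value: float) -> bool:
    """Return True if the risk-adjusted benefit beats cost plus opportunity cost.

    expected_benefit      -- value of decisions improved *if* the evidence is used
    probability_of_use    -- chance the evidence actually informs a decision
    evaluation_cost       -- direct cost of commissioning the evaluation
    alternative_use_value -- net value of spending the same funds elsewhere
    """
    expected_value = expected_benefit * probability_of_use
    return expected_value > evaluation_cost + alternative_use_value

# Example: a £200k evaluation with a 40% chance of informing a £1m
# programme-design decision, versus a £150k-value alternative use of the funds.
print(worth_commissioning(expected_benefit=1_000_000,
                          probability_of_use=0.4,
                          evaluation_cost=200_000,
                          alternative_use_value=150_000))  # True: 400k > 350k
```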

 

Rick Davies comment: The points I noted/highlighted…
  • “…there is surprisingly little empirical evidence available to demonstrate the benefits of evaluation, or to show they can be estimated”… “‘Evidence’ is thus usually seen as axiomatically ‘a good thing’”
  • “A National Audit Office (NAO) review (2013) of evaluation in government was critical across its sample of departments – it found that: ‘There is little systematic information from the government on how it has used the evaluation evidence that it has commissioned or produced’.”
  • “…there is currently no systematic approach to valuing the benefits of an evaluation, whether at the individual or at the portfolio level”
  • “…most ex-ante techniques may be too time-consuming for evaluation commissioners, including DFID, to use routinely”
  • “The concept of ‘value’ of evaluations is linked to whether and how the knowledge generated during or from an evaluation will be used and by whom.”

 

The paper proposes that:

  • “Consider selecting a sample of evaluations for ex-post valuation within any given reporting period”. Earlier it notes that “…a growing body of ex-post valuation of evaluations at the portfolio level, and their synthesis, will build an evidence base to inform evaluation planning and create a feedback loop that informs learning about commissioning more valuable evaluations” (see the sampling sketch after this list).
  • “Qualitative approaches that include questionnaires and self-evaluation may offer some merits for commissioners in setting up guidance to standardise the way ongoing and ex-post information is collected on evaluations for ex-post assessment of the benefits of evaluations.”
  • “Consider using a case study template for valuing DFID evaluations”
  • “An ex-ante valuation framework is included in this paper (see section 4) which incorporates information from the examination of the above techniques and recommendations. Commissioners could use this framework to develop a tool to assess the potential benefit of evaluations to be commissioned”
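
As a minimal sketch of the first proposal above, one could draw a reproducible random sample of completed evaluations for ex-post valuation in each reporting period. The portfolio names, 20% sampling fraction and seed are all hypothetical choices for illustration.

```python
# Hedged sketch: portfolio entries and the sampling fraction are hypothetical.
import random

portfolio = [f"evaluation-{i:03d}" for i in range(1, 51)]  # 50 completed evaluations

def sample_for_ex_post_valuation(evaluations, fraction=0.2, seed=2016):
    """Return a reproducible random sample of evaluations to value ex post."""
    rng = random.Random(seed)  # fixed seed so the sample is auditable
    k = max(1, round(len(evaluations) * fraction))
    return rng.sample(evaluations, k)

print(sample_for_ex_post_valuation(portfolio))  # 10 of the 50 evaluations
```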

 

While I agree with all of these…

  • There is already a body of empirically oriented literature on evaluation use dating back to the 1980s that should be given adequate attention. See my probably incomplete bibliography here. This includes a very recent 2016 study by USAID.
  • The use of case studies of the kind used by the Research Excellence Framework (REF), known as ‘Impact Case Studies’, makes sense. As this paper notes: “The impact case studies do not need to be representative of the spread of research activity in the unit rather they should provide the strongest examples of impact”. They are, in other words, a kind of “Most Significant Change” story, including the MSC-type requirement that there be “a list of sufficient sources that could, if audited, corroborate key claims made about the impact of the research”. Evaluation use is not a kind of outcome where it seems to make much sense investing a lot of effort into establishing “average effects”. Per unit of money invested, it would seem to make more sense to search for the most significant changes (both positive and negative) that people perceive as the effects of an evaluation.
  • The ex-ante valuation framework is in effect a “loose” Theory of Change, which needs to be put to use and then tested for its predictive value! Interpreted in crude terms, presumably the more of the criteria listed in the Evaluation Decision Framework (on page 26) that are met by a given evaluation, the higher our expectations that the evaluation will be used and have an impact. There are stacks of normative frameworks around telling us how to do things, e.g. on how to have effective partnerships. However, good ideas like these need to be disciplined by some effort to test them against what happens in reality (a minimal sketch of such a test follows below).
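
A minimal sketch of that test: score each evaluation ex ante against the Decision Framework criteria, record observed use ex post, and check whether higher scores predict more use. The evaluation names, criteria scores and use ratings below are invented for illustration; a positive rank correlation in this toy data says nothing about the real framework.

```python
# Hedged sketch: evaluation names, criteria scores and use ratings are invented.

# Ex ante: number of Decision Framework criteria met by each evaluation (0-10)
ex_ante_scores = {"eval-A": 9, "eval-B": 3, "eval-C": 7, "eval-D": 2, "eval-E": 6}
# Ex post: observed use of each evaluation, rated on a 0-5 scale
observed_use = {"eval-A": 5, "eval-B": 1, "eval-C": 4, "eval-D": 2, "eval-E": 3}

def rank_correlation(xs, ys):
    """Spearman-style rank correlation (no tie handling; fine for tie-free toy data)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

names = sorted(ex_ante_scores)
rho = rank_correlation([ex_ante_scores[n] for n in names],
                       [observed_use[n] for n in names])
print(f"Rank correlation, criteria met vs observed use: {rho:.2f}")  # 0.90
```

If synthesised across a portfolio, as the paper's ex-post proposal suggests, this kind of check would be the feedback loop that disciplines the framework against what actually happens.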

Comments?
