In this short and readable paper, Michael Scriven addresses “three categories of issues that arise about meta-evaluation: (i) exactly what is it; (ii) how is it justified; (iii) when and how should it be used? In the following, I say something about all three—definition, justification, and application.” He then makes seven main points, each of which he elaborates in some detail:
- Meta-evaluation is the consultant’s version of peer review.
- Meta-evaluation is the proof that evaluators believe what they say.
- In meta-evaluation, as in all evaluation, check the pulse before trimming the nails.
- A partial meta-evaluation is better than none.
- Make the most of meta-evaluation.
- Any systematic approach to evaluation—in other words, almost any kind of professional evaluation—automatically provides a systematic basis for meta-evaluation.
- Fundamentally, meta-evaluation, like evaluation, is simply an extension of common sense—and that’s the first defense to use against the suggestion that it’s some kind of fancy academic embellishment.