RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints

Second Edition, by Michael Bamberger, Jim Rugh, and Linda Mabry. Sage Publications, November 2011.

This book addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing. The book is organized around a seven-step model developed by the authors, which has been tested and refined in workshops and in practice. Vignettes and case studies—representing evaluations from a variety of geographic regions and sectors—demonstrate adaptive possibilities, ranging from small projects with budgets of a few thousand dollars to large-scale, long-term evaluations of complex programs. The text incorporates quantitative, qualitative, and mixed-method designs, and this Second Edition reflects important developments in the field since the publication of the First Edition.

See also the associated website: http://www.realworldevaluation.org/ Bamberger and Rugh have presented many workshops on RealWorld Evaluation in many countries. Various versions and translations of the PowerPoint presentations, along with other materials, are accessible on the following pages of that website.

What’s New in the Second Edition of RealWorld Evaluation?

  • A greater focus on responsible professional practice, codes of conduct, and the importance of ethical standards for all evaluations.
  • Some new perspectives on the debate over the “best” evaluation designs. While experimental designs can address the important issue of selection bias, such statistical designs are potentially vulnerable to a number of important threats to validity. These include weaknesses in process and contextual analysis, difficulties in collecting information on sensitive topics and from difficult-to-reach groups, and limited ability to adapt to changes in the evaluation design or in implementation strategies. Experience also suggests that strong statistical designs can be applied in only a very small proportion of evaluations.
  • There are many instances in which well-designed nonexperimental designs will be the best option for assessing outcomes, particularly when evaluating complex programs and even “simple” programs that involve complex processes of behavioral change.
  • The importance of understanding the setting within which the evaluation is designed, implemented, and used.
  • Program theory as a central building block of most evaluation designs. The expanded discussion incorporates theory of change, contextual and process analysis, multilevel logic models, using competing theories, and trajectory analysis.
  • The range of evaluation design options has been considerably expanded, and case studies are included to illustrate how each of the 19 designs has been applied in the field.
  • Greater emphasis is given to the benefits of mixed-method evaluation designs.
  • A new chapter has been added on the evaluation of complicated and complex development interventions. Conventional pretest-posttest comparison group designs can rarely be applied to the increasing proportion of development assistance channeled through complex interventions, and a range of promising new approaches—still very much “work in progress”—is presented.
  • Two new chapters on organizing and managing evaluations and strengthening evaluation capacity. This includes a discussion of strategies for promoting the institutionalization of evaluation systems at the sector and national levels.
  • The discussion of quality assurance and threats to validity has been expanded, and checklists and worksheets are included on how to assess the validity of quantitative, qualitative, and mixed-method designs.

Comments?
