OPM’s approach to assessing Value for Money

by Julian King, Oxford Policy Management. January 2018. Available as a pdf

Excerpt from Foreword:

In 2016, Oxford Policy Management (OPM) teamed up with Julian King, an evaluation specialist, who worked with staff from across the company to develop the basis of a robust and distinct OPM approach to assessing VfM. The methodology was successfully piloted during the annual reviews of the Department for International Development’s (DFID) Sub-National Governance programme in Pakistan and MUVA, a women’s economic empowerment programme in Mozambique. The approach involves making transparent, evidence-based judgements about how well resources are being used, and whether the value derived is good enough to justify the investment.

To date, we have applied this approach on upwards of a dozen different development projects and programmes, spanning a range of clients, countries, sectors, and budgets. It has been well received by our clients (both funding agencies and partner governments) and project teams alike, who in particular appreciate the use of explicit evaluative reasoning. This involves developing definitions of what acceptable / good / excellent VfM looks like, in the context of each specific project. Critically, these definitions are co-developed and endorsed upfront, in advance of implementation and before the evidence is gathered, which provides an agreed, objective, and transparent basis for making judgements.
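
To give a concrete (and entirely illustrative) feel for what explicit evaluative reasoning can look like in practice, the sketch below encodes rubric-style level definitions and a simple synthesis rule in Python, using DFID’s Four E’s (covered later in the guide) as the criteria. The level labels, abridged definitions, and “weakest criterion caps the overall judgement” rule are assumptions made here for illustration only, not OPM’s actual rubric.

    # Minimal, illustrative sketch of rubric-based VfM judgement.
    # Criteria follow DFID's Four E's; the level definitions and the
    # synthesis rule are invented for illustration, not OPM's rubric.

    LEVELS = ["poor", "acceptable", "good", "excellent"]

    # Agreed upfront, before evidence is gathered (abridged wording).
    rubric = {
        "economy":       "inputs bought at the right quality and price ...",
        "efficiency":    "inputs converted into outputs with little waste ...",
        "effectiveness": "outputs contributing to the intended outcomes ...",
        "equity":        "benefits reaching disadvantaged groups ...",
    }

    def overall_judgement(ratings):
        """One possible synthesis rule: the weakest criterion caps the
        overall rating. A team could agree a different rule upfront."""
        worst = min(LEVELS.index(ratings[criterion]) for criterion in rubric)
        return LEVELS[worst]

    # Ratings a review team might assign after weighing the evidence.
    ratings = {"economy": "good", "efficiency": "excellent",
               "effectiveness": "good", "equity": "acceptable"}
    print(overall_judgement(ratings))  # -> acceptable

The point of the exercise is only that the definitions and the synthesis rule are fixed and visible before the evidence arrives, which is what makes the resulting judgement transparent.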

Table of contents
Foreword 1
Acknowledgements 3
Executive summary 4
1 Background 7
1.1 What is VfM? 7
1.2 Why evaluate VfM? 7
1.3 Context 8
1.4 Overview of this guide 8
2 Conceptual framework for VfM assessment 9
2.1 Explicit evaluative reasoning 9
2.2 VfM criteria 10
2.3 DFID’s VfM criteria: the Four E’s 10
2.4 Limitations of the Four E’s 12
2.5 Defining criteria and standards for the Four E’s 13
2.6 Mixed methods evidence 15
2.7 Economic analysis 15
2.8 Complexity and emergent strategy 17
2.9 Perspective matters in VfM 19
2.10 Integration with M&E frameworks 19
2.11 Timing of VfM assessments 20
2.12 Level of effort required 20
3 Designing and implementing a VfM framework 21
3.1 Step 1: theory of change 21
3.2 Steps 2 and 3: VfM criteria and standards 22
3.3 Step 4: identifying evidence required 26
3.4 Step 5: gathering evidence 26
3.5 Steps 6–7: analysis, synthesis, and judgements 27
3.6 Step 8: reporting 29
Bibliography 30
Contact us

Review by Dr E. Jane Davidson, author of Evaluation Methodology Basics (Sage, 2005) and Director of Real Evaluation LLC, Seattle

Finally, an approach to Value for Money that breaks free of the “here’s the formula” approach and instead emphasises the importance of thoughtful and well-evidenced evaluative reasoning. Combining an equity lens with insights and perspectives from diverse stakeholders helps us understand the value of different constellations of outcomes relative to the efforts and investments required to achieve them. This step-by-step guide helps decision makers figure out how to answer the VfM question in an intelligent way when some of the most valuable outcomes may be the hardest to measure – as they so often are.


Bit by Bit: Social Research in the Digital Age

by Matthew J. Salganik, Princeton University Press, 2017

Very positive reviews by…

Selected quotes:

“Overall, the book relies on a repeated narrative device, imagining how a social scientist and a data scientist might approach the same research opportunity. Salganik suggests that where data scientists are glass-half-full people and see opportunities, social scientists are quicker to highlight problems (the glass-half-empty camp). He is also upfront about how he has chosen to write the book, adopting the more optimistic view of the data scientist, while holding on to the caution expressed by social scientists”

“Salganik argues that data scientists most often work with “readymades”, social scientists with “custommades”, illustrating the point through art: data scientists are more like Marcel Duchamp, using existing objects to make art; meanwhile, social scientists operate in the custom-made style of Michelangelo, which offers a neat fit between research questions and data, but does not scale well. The book is thus a call to arms, to encourage more interdisciplinary research and for both sides to see the potential merits and drawbacks of each approach. It will be particularly welcome to researchers who have already started to think along similar lines, of which I suspect there are many”

  • Illustrates important ideas with examples of outstanding research
  • Combines ideas from social science and data science in an accessible style and without jargon
  • Goes beyond the analysis of “found” data to discuss the collection of “designed” data such as surveys, experiments, and mass collaboration
  • Features an entire chapter on ethics
  • Includes extensive suggestions for further reading and activities for the classroom or self-study

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.


1 Introduction
2 Observing Behavior
3 Asking Questions
4 Running Experiments
5 Creating Mass Collaboration
6 Ethics
7 The Future

More detailed contents page available via Amazon Look Inside

PS: See also this Vimeo video presentation by Salganik, Wiki Surveys – Open and Quantifiable Social Data Collection, plus this PLOS paper on the same topic.


Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.

Virginia Eubanks, (2018), New York, NY: St. Martin’s Press

Unfortunately, a contents list does not seem to be available online. But here is a lengthy excerpt from the book.

And here is a YouTube interview with the author, in which University at Albany political scientist Virginia Eubanks discusses her new book “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” (taped 12/05/2017).


Impact Evaluation of Development Interventions: A Practical Guide

by Howard White and David A. Raitzer. Published by the Asian Development Bank, 2017. Available as a pdf (3.12Mb)

The publisher says “This book offers guidance on the principles, methods, and practice of impact evaluation. It contains material for a range of audiences, from those who may use or manage impact evaluations to applied researchers”

“Impact evaluation is an empirical approach to estimating the causal effects of interventions, in terms of both magnitude and statistical significance. Expanded use of impact evaluation techniques is critical to rigorously derive knowledge from development operations and for development investments and policies to become more evidence-based and effective. To help backstop more use of impact evaluation approaches, this book introduces core concepts, methods, and considerations for planning, designing, managing, and implementing impact evaluation, supplemented by examples. The topics covered range from impact evaluation purposes to basic principles, specific methodologies, and guidance on field implementation. It has materials for a range of audiences, from those who are interested in understanding evidence on “what works” in development, to those who will contribute to expanding the evidence base as applied researchers.”
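
As a toy illustration of that definition – a causal effect expressed in terms of both magnitude and statistical significance – the sketch below simulates a simple two-arm randomised trial and computes a difference-in-means estimate with a p-value. The outcome, effect size, and sample size are invented; the book itself covers the real design questions (randomisation, sampling, power, nonexperimental alternatives) in depth.

    # Toy illustration only: estimating an intervention's causal effect as
    # a difference in mean outcomes between randomly assigned treatment and
    # control groups, with a significance test. All numbers are invented.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 500                                             # households per arm (assumed)
    control = rng.normal(loc=100, scale=15, size=n)     # e.g. an income index
    treatment = rng.normal(loc=105, scale=15, size=n)   # true effect of +5 built in

    effect = treatment.mean() - control.mean()          # magnitude
    t_stat, p_value = stats.ttest_ind(treatment, control)

    print(f"Estimated impact: {effect:.2f}")
    print(f"p-value: {p_value:.4f}")                    # statistical significance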


  • Introduction: Impact Evaluation for Evidence-Based Development
  • Using Theories of Change to Identify Impact Evaluation Questions
  • The Core Concepts of Impact Evaluation
  • Randomized Controlled Trials
  • Nonexperimental Designs
  • What and How to Measure: Data Collection for Impact Evaluation
  • Sample Size Determination for Data Collection
  • Managing the Impact Evaluation Process
  • Appendixes

Rick Davies’ comments: I have only scanned, not read, this book. But some of the sections that I found of interest included:

  • 3.4 Time Dimension of Impacts…not always covered, but very important when planning the timing of evaluations of any kind
  • Page 2: “Impact evaluations are empirical studies that quantify the causal effects of interventions on outcomes of interest” I am surprised that the word “explain” is not also included in this definition. Or perhaps it is an intentionally minimalist definition, and omission does not mean it has to be ignored
  • Page 23 on the Funnel of Attribution, which I would like to see presented in the form of overlapping sets
  • There could be better acknowledgment of other sources by referencing them, e.g. Outcome Mapping (p25, re behavioral change) and Realist Evaluation (p41)
  • Good explanations of the technical terms used, on pages 42 and 44 for example
  • Overcoming resistance to RCTs (p59) and 10 things that can go wrong with RCTs (p61)
  • The whole of chapter 6 on data collection
  • and lots more…

The Tyranny of Metrics

The Tyranny of Metrics, by Jerry Z Muller, Princeton University Press, RRP £19.95/$24.95, 240 pages

See Tim Harford’s review of this book in the Financial Times, 24 January 2018

Some quotes: Muller shows that metrics are often used as a substitute for relevant experience, by managers with generic rather than specific expertise. Muller does not claim that metrics are always useless, but that we expect too much from them as a tool of management. ….

The Tyranny of Metrics does us a service in briskly pulling together parallel arguments from economics, management science, philosophy and psychology along with examples from education, policing, medicine, business and the military.

In an excellent final chapter, Muller summarises his argument thus: “measurement is not an alternative to judgement: measurement demands judgement: judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available”.

The book does not engage seriously enough with the possibility that the advantages of metric-driven accountability might outweigh the undoubted downsides. Tellingly, Muller complains of a university ratings metric that rewards high graduation rates, access for disadvantaged students, and low costs. He says these requirements are “mutually exclusive”, but they are not. They are in tension with each other…

Nor does this book reckon with evidence that mechanical statistical predictions often beat the subjective judgment of experts.

…and perhaps most curiously, there is no discussion of computers, cheap sensors, or big data. In this respect, at least, the book could have been written in the 1980s.

Table of Contents

Introduction 1
1 The Argument in a Nutshell 17
2 Recurring Flaws 23
3 The Origins of Measuring and Paying for Performance 29
4 Why Metrics Became So Popular 39
5 Principals, Agents, and Motivation 49
6 Philosophical Critiques 59
7 Colleges and Universities 67
8 Schools 89
9 Medicine 103
10 Policing 125
11 The Military 131
12 Business and Finance 137
13 Philanthropy and Foreign Aid 153
14 When Transparency Is the Enemy of Performance: Politics, Diplomacy, Intelligence, and Marriage 159
15 Unintended but Predictable Negative Consequences 169
16 When and How to Use Metrics: A Checklist 175
Acknowledgments 185
Notes 189
Index 213

Search inside this book using a Google Books view

Analyzing Social Networks

Second edition, to be published by Sage in January 2018
Stephen P Borgatti – University of Kentucky, USA
Martin G Everett – Manchester University, UK
Jeffrey C Johnson – University of Florida, USA

Publishers blurb: “Designed to walk beginners through core aspects of collecting, visualizing, analyzing, and interpreting social network data, this book will get you up-to-speed on the theory and skills you need to conduct social network analysis. Using simple language and equations, the authors provide expert, clear insight into every step of the research process—including basic maths principles—without making assumptions about what you know. With a particular focus on NetDraw and UCINET, the book introduces relevant software tools step-by-step in an easy to follow way.

In addition to the fundamentals of network analysis and the research process, this new Second Edition focuses on:

  • Digital data and social networks like Twitter
  • Statistical models to use in SNA, like QAP and ERGM
  • The structure and centrality of networks
  • Methods for cohesive subgroups/community detection

Supported by new chapter exercises, a glossary, and a fully updated companion website, this text is the perfect student-friendly introduction to social network analysis.”

Detailed contents list here
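
For a quick feel for two of the topics listed above (centrality and cohesive subgroups / community detection), here is a minimal sketch using the Python library networkx rather than the UCINET and NetDraw tools the book itself teaches; the friendship network is invented.

    # Minimal sketch of centrality and community detection with networkx
    # (a substitute for the UCINET/NetDraw workflow the book uses).
    import networkx as nx

    edges = [("Ann", "Bob"), ("Ann", "Cat"), ("Bob", "Cat"),
             ("Cat", "Dan"), ("Dan", "Eve"), ("Dan", "Fay"), ("Eve", "Fay")]
    g = nx.Graph(edges)

    # Betweenness centrality: who sits on the shortest paths between others?
    print(nx.betweenness_centrality(g))   # Cat and Dan bridge the two clusters

    # Cohesive subgroups via a simple modularity-based community detection.
    communities = nx.algorithms.community.greedy_modularity_communities(g)
    print([sorted(c) for c in communities])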


The Ethics of Influence: Government in the Age of Behavioral Science

by Cass R. Sunstein, Cambridge University Press, 2016


1. The age of behavioral science;
2. Choice and its architecture;
3. ‘As judged by themselves’;
4. Values;
5. Fifty shades of manipulation;
6. Do people like nudges? Empirical findings;
7. Green by default? Ethical challenges for environmental protection;
8. Mandates – a very brief recapitulation;
Appendix A. American attitudes toward thirty-four nudges;
Appendix B. Survey questions;
Appendix C. Executive Order 13707: using behavioral science insights to better serve the American people;

Amazon blurb: “In recent years, ‘nudge units’ or ‘behavioral insights teams’ have been created in the United States, the United Kingdom, Germany, and other nations. All over the world, public officials are using the behavioral sciences to protect the environment, promote employment and economic growth, reduce poverty, and increase national security. In this book, Cass R. Sunstein, the eminent legal scholar and best-selling co-author of Nudge (2008), breaks new ground with a deep yet highly readable investigation into the ethical issues surrounding nudges, choice architecture, and mandates, addressing such issues as welfare, autonomy, self-government, dignity, manipulation, and the constraints and responsibilities of an ethical state. Complementing the ethical discussion, The Ethics of Influence: Government in the Age of Behavioral Science contains a wealth of new data on people’s attitudes towards a broad range of nudges, choice architecture, and mandates.”

Book Review by Roger Frantz (pdf)

Norms in the Wild: How to Diagnose, Measure, and Change Social Norms

Cristina Bicchieri, Oxford University Press, 2016. View Table of Contents

Publisher summary:

  1. Presents evidence-based assessment tools for assessing and intervening on various social behaviors
  2. Illustrates the role of mass media and autonomous “first movers” at the forefront of wide-scale behavioral change
  3. Provides dichotomous models for assessing normative behaviors
  4. Explains why well-tested interventions sometimes fail to change behavior


Amazon blurb: “The philosopher Cristina Bicchieri here develops her theory of social norms, most recently explained in her 2006 volume The Grammar of Society. Bicchieri challenges many of the fundamental assumptions of the social sciences. She argues that when it comes to human behavior, social scientists place too much stress on rational deliberation. In fact, many choices occur without much deliberation at all. Bicchieri’s theory accounts for these automatic components of behavior, where individuals react automatically to cues–those cues often pointing to the social norms that govern our choices in a social world

Bicchieri’s work has broad implications not only for understanding human behavior, but for changing it for better outcomes. People have a strong conditional preference for following social norms, but that also means manipulating those norms (and the underlying social expectations) can produce beneficial behavioral changes. Bicchieri’s recent work with UNICEF has explored the applicability of her views to issues of human rights and well-being. Is it possible to change social expectations around forced marriage, genital mutilations, and public health practices like vaccinations and sanitation? If so, how? What tools might we use? This short book explores how social norms work, and how changing them–changing preferences, beliefs, and especially social expectations–can potentially improve lives all around the world.”
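
As I read the publisher summary, the “dichotomous models” refer to Bicchieri’s diagnostic distinctions between customs, descriptive norms and social norms, which turn on whether a behaviour is conditional on empirical expectations (what others actually do) and normative expectations (what others think one ought to do). The sketch below is a rough rendering of that decision logic, not code or terminology taken from the book.

    # Rough sketch (my rendering, not the book's) of a norm-diagnosis step:
    # classify a collective behaviour by whether people's preference to
    # follow it depends on others' behaviour and on others' approval.
    def classify_behaviour(socially_conditional: bool,
                           empirical_expectations_matter: bool,
                           normative_expectations_matter: bool) -> str:
        if not socially_conditional:
            # People would do it regardless of others: a custom or an
            # independently held (e.g. moral) rule, not a social norm.
            return "custom / independent norm"
        if empirical_expectations_matter and normative_expectations_matter:
            return "social norm"
        if empirical_expectations_matter:
            return "descriptive norm"
        return "unclear - further diagnosis needed"

    print(classify_behaviour(True, True, True))    # -> social norm
    print(classify_behaviour(True, True, False))   # -> descriptive norm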



How to Measure Anything: Finding the Value of Intangibles in Business [and elsewhere]

3rd Edition by Douglas W. Hubbard (Author)

pdf copy of 2nd edition available here

Building up from simple concepts to illustrate the hands-on yet intuitively easy application of advanced statistical techniques, How to Measure Anything reveals the power of measurement in our understanding of business and the world at large. This insightful and engaging book shows you how to measure those things in your business that until now you may have considered “immeasurable,” including technology ROI, organizational flexibility, customer satisfaction, and technology risk.

Offering examples that will get you to attempt measurements – even when it seems impossible – this book provides you with the substantive steps for measuring anything, especially uncertainty and risk. Don’t wait – listen to this book and find out:

  • The three reasons why things may seem immeasurable but are not
  • Inspirational examples of where seemingly impossible measurements were resolved with surprisingly simple methods
  • How computing the value of information will show that you probably have been measuring all the wrong things
  • How not to measure risk
  • Methods for measuring “soft” things like happiness, satisfaction, quality, and more

Amazon.com Review: Now updated with new research and even more intuitive explanations, this is a demystifying explanation of how managers can inform themselves to make less risky, more profitable business decisions. This insightful and eloquent book will show you how to measure those things in your own business that, until now, you may have considered “immeasurable,” including customer satisfaction, organizational flexibility, technology risk, and technology ROI.

  • Adds even more intuitive explanations of powerful measurement methods and shows how they can be applied to areas such as risk management and customer satisfaction
  • Continues to boldly assert that any perception of “immeasurability” is based on certain popular misconceptions about measurement and measurement methods
  • Shows the common reasoning for calling something immeasurable, and sets out to correct those ideas
  • Offers practical methods for measuring a variety of “intangibles”
  • Adds recent research, especially in regard to methods that seem like measurement but are in fact a kind of “placebo effect” for management – and explains how to tell effective methods from management mythology
  • Written by recognized expert Douglas Hubbard, creator of Applied Information Economics

How to Measure Anything, Second Edition illustrates how the author has used his approach across various industries and how any problem, no matter how difficult, ill defined, or uncertain can lend itself to measurement using proven methods.
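
One of the bullet points above mentions computing the value of information; that is the part of Hubbard’s approach easiest to show in a few lines: express the uncertainty as a probability distribution, then compare the expected value of the best decision made now with the expected value if the uncertainty could be removed before deciding. The decision, payoffs, and distribution below are invented and only approximate the idea; they are not taken from the book.

    # Toy illustration of the "expected value of perfect information" idea:
    # how much could it be worth to measure before deciding? All numbers
    # and the decision itself are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Uncertain quantity: extra annual revenue if a new service is launched.
    # A calibrated 90% interval of roughly 20k..180k is assumed, modelled
    # here (crudely) as a normal distribution.
    revenue = rng.normal(loc=100_000, scale=48_000, size=n)
    cost = 90_000                               # known cost of launching
    profit_if_launch = revenue - cost

    # Best decision under current uncertainty: launch only if expected profit > 0.
    ev_now = max(profit_if_launch.mean(), 0.0)

    # With perfect information we would launch only when profit is actually positive.
    ev_perfect = np.maximum(profit_if_launch, 0.0).mean()

    evpi = ev_perfect - ev_now                  # expected value of perfect information
    print(f"Acting now:               {ev_now:,.0f}")
    print(f"With perfect information: {ev_perfect:,.0f}")
    print(f"EVPI (upper bound on what measurement is worth): {evpi:,.0f}")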

See also Julia Galef’s podcast interview with the author.



Monitoring and Evaluation in Health and Social Development: Interpretive and Ethnographic Perspectives

Edited by Stephen Bell and Peter Aggleton. Routledge 2016. View on Google Books

interpretive researchers thus attempt to understand phenomena through accessing the meanings participants assign to them

“...interpretive and ethnographic approaches are side-lined in much contemporary evaluation work and current monitoring and evaluation practice remains heavily influenced by more positivist approaches”

attribution is not the only purpose of impact evaluation

Lack of familiarity with qualitative approaches by programme staff and donor agencies also influences the preference for quantitative methods in monitoring and evaluation work


1. Interpretive and Ethnographic Perspectives – Alternative Approaches to Monitoring and Evaluation Practice

2. The Political Economy of Evidence: Personal Reflections on the Value of the Interpretive Tradition and its Methods

3. Measurement, Modification and Transferability: Evidential Challenges in the Evaluation of Complex Interventions

4. What Really Works? Understanding the Role of ‘Local Knowledges’ in the Monitoring and Evaluation of a Maternal, Newborn and Child Health Project in Kenya

Part 2: Programme Design

5. Permissions, Vacations and Periods of Self-regulation: Using Consumer Insight to Improve HIV Treatment Adherence in Four Central American Countries

6. Generating Local Knowledge: A Role for Ethnography in Evidence-based Programme Design for Social Development

7. Interpretation, Context and Time: An Ethnographically Inspired Approach to Strategy Development for Tuberculosis Control in Odisha, India

8. Designing Health and Leadership Programmes for Young Vulnerable Women Using Participatory Ethnographic Research in Freetown, Sierra Leone

Part 3: Monitoring Processes

9. Using Social Mapping Techniques to Guide Programme Redesign in the Tingim Laip HIV Prevention and Care Project in Papua New Guinea

10. Pathways to Impact: New Approaches to Monitoring and Improving Volunteering for Sustainable Environmental Management

11. Ethnographic Process Evaluation: A Case Study of an HIV Prevention Programme with Injecting Drug Users in the USA

12. Using the Reality Check Approach to Shape Quantitative Findings: Experience from Mixed Method Evaluations in Ghana and Nepal

Part 4: Understanding Impact and Change

13. Innovation in Evaluation: Using SenseMaker to Assess the Inclusion of Smallholder Farmers in Modern Markets

14. The Use of the Rapid PEER Approach for the Evaluation of Sexual and Reproductive Health Programmes

15. Using Interpretive Research to Make Quantitative Evaluation More Effective: Oxfam’s Experience in Pakistan and Zimbabwe

16. Can Qualitative Research Rigorously Evaluate Programme Impact? Evidence from a Randomised Controlled Trial of an Adolescent Sexual Health Programme in Tanzania

Rick Davies’ comment: [Though this may reflect my reading biases…] It seems like this strand of thinking has not been at the forefront of M&E attention for a long time (i.e. maybe since the 1990s – early 2000s), so it is good to see this new collection of papers, by a large group of both old and new faces (33 in all).