Navigation by Judgment: Why and When Top Down Management of Foreign Aid Doesn’t Work

Errors arising from too much or too little control can be seen or unseen. When control is too little, errors are more likely to be seen: people do things they should not have done. When control is too much, errors are likely to be unseen: people don’t do things they should have done. Given this asymmetry, and other things being equal, there is a bias towards too much control.

Honig, Dan. 2018. Navigation by Judgment: Why and When Top Down Management of Foreign Aid Doesn’t Work. Oxford, New York: Oxford University Press.

Contents

Preface
Acknowledgments
Part I: The What, Why, and When of Navigation by Judgment
Chapter 1. Introduction – The Management of Foreign Aid
Chapter 2. When to Let Go: The Costs and Benefits of Navigation by Judgment
Chapter 3. Agents – Who Does the Judging?
Chapter 4. Authorizing Environments & the Perils of Legitimacy Seeking
Part II: How Does Navigation by Judgment Fare in Practice?
Chapter 5. How to Know What Works Better, When: Data, Methods, and Empirical Operationalization
Chapter 6. Journey Without Maps – Environmental Unpredictability and Navigation Strategy
Chapter 7. Tailoring Management to Suit the Task – Project Verifiability and Navigation Strategy
Part III: Implications
Chapter 8. Delegation and Control Revisited
Chapter 9. Conclusion – Implications for the Aid Industry & Beyond
Appendices
Appendix I: Data Collection
Appendix II: Additional Econometric Analysis
Bibliography

YouTube presentation by the author (the source of the opening quotation above): https://www.youtube.com/watch?reload=9&v=bdjeoBFY9Ss

Book review: by Duncan Green on his From Poverty to Power blog (2018).

Publisher’s blurb:

Foreign aid organizations collectively spend hundreds of billions of dollars annually, with mixed results. Part of the problem in these endeavors lies in their execution. When should foreign aid organizations empower actors on the front lines of delivery to guide aid interventions, and when should distant headquarters lead?

In Navigation by Judgment, Dan Honig argues that high-quality implementation of foreign aid programs often requires contextual information that cannot be seen by those in distant headquarters. Tight controls and a focus on reaching pre-set measurable targets often prevent front-line workers from using skill, local knowledge, and creativity to solve problems in ways that maximize the impact of foreign aid. Drawing on a novel database of over 14,000 discrete development projects across nine aid agencies and eight paired case studies of development projects, Honig concludes that aid agencies will often benefit from giving field agents the authority to use their own judgments to guide aid delivery. This “navigation by judgment” is particularly valuable when environments are unpredictable and when accomplishing an aid program’s goals is hard to accurately measure.

Highlighting a crucial obstacle for effective global aid, Navigation by Judgment shows that the management of aid projects matters for aid effectiveness.

The Model Thinker: What You Need to Know to Make Data Work for You

by Scott E. Page. Published by Basic Books, 2018

Book review by Carol Wells: “Page proposes a ‘many-model paradigm’, where we apply several mathematical models to a single problem. The idea is to replicate ‘the wisdom of the crowd’ which, in groups like juries, has shown us that input from many sources tends to be more accurate, complete, and nuanced than input from a single source.”

Contents:

Chapter 1 – The Many-Model Thinker
Chapter 2 – Why Model?
Chapter 3 – The Science of Many Models
Chapter 4 – Modeling Human Actors
Chapter 5 – Normal Distributions: The Bell Curve
Chapter 6 – Power-Law Distributions: Long Tails
Chapter 7 – Linear Models
Chapter 8 – Concavity and Convexity
Chapter 9 – Models of Value and Power
Chapter 10 – Network Models
Chapter 11 – Broadcast, Diffusion, and Contagion
Chapter 12 – Entropy: Modeling Uncertainty
Chapter 13 – Random Walks
Chapter 14 – Path Dependence
Chapter 15 – Local Interaction Models
Chapter 16 – Lyapunov Functions and Equilibria
Chapter 17 – Markov Models
Chapter 18 – Systems Dynamics Models
Chapter 19 – Threshold Models with Feedbacks
Chapter 20 – Spatial and Hedonic Choice
Chapter 21 – Game Theory Models Times Three
Chapter 22 – Models of Cooperation
Chapter 23 – Collective Action Problems
Chapter 24 – Mechanism Design
Chapter 25 – Signaling Models
Chapter 26 – Models of Learning
Chapter 27 – Multi-Armed Bandit Problems
Chapter 28 – Rugged-Landscape Models
Chapter 29 – Opioids, Inequality, and Humility

From his Coursera course, which the book builds on: “We live in a complex world with diverse people, firms, and governments whose behaviors aggregate to produce novel, unexpected phenomena. We see political uprisings, market crashes, and a never-ending array of social trends. How do we make sense of it? Models. Evidence shows that people who think with models consistently outperform those who don’t. And, moreover, people who think with lots of models outperform people who use only one. Why do models make us better thinkers? Models help us to better organize information – to make sense of that fire hose or hairball of data (choose your metaphor) available on the Internet. Models improve our abilities to make accurate forecasts. They help us make better decisions and adopt more effective strategies. They can even improve our ability to design institutions and procedures.

“In this class, I present a starter kit of models: I start with models of tipping points. I move on to cover models that explain the wisdom of crowds, models that show why some countries are rich and some are poor, and models that help unpack the strategic decisions of firms and politicians. The models covered in this class provide a foundation for future social science classes, whether they be in economics, political science, business, or sociology. Mastering this material will give you a huge leg up in advanced courses. They also help you in life.

“Here’s how the course will work. For each model, I present a short, easily digestible overview lecture. Then, I’ll dig deeper. I’ll go into the technical details of the model. Those technical lectures won’t require calculus, but be prepared for some algebra. For all the lectures, I’ll offer some questions and we’ll have quizzes and even a final exam. If you decide to do the deep dive, and take all the quizzes and the exam, you’ll receive a Course Certificate. If you just decide to follow along for the introductory lectures to gain some exposure, that’s fine too. It’s all free. And it’s all here to help make you a better thinker!”
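The “wisdom of crowds” claim above has a precise form in Page’s work: the diversity prediction theorem, which says that a crowd’s squared error equals the average individual squared error minus the diversity of the predictions. Below is a minimal numeric check of that identity in Python; the five “model” predictions are invented for illustration.

    import numpy as np

    # Diversity prediction theorem (Page): the crowd's squared error equals
    # the average individual squared error minus the diversity of predictions.
    # The five "model" predictions below are invented numbers for illustration.
    truth = 42.0
    predictions = np.array([38.0, 45.0, 50.0, 40.0, 36.0])

    crowd = predictions.mean()                           # crowd prediction: 41.8
    crowd_sq_error = (crowd - truth) ** 2                # 0.04
    avg_sq_error = np.mean((predictions - truth) ** 2)   # 25.8
    diversity = np.mean((predictions - crowd) ** 2)      # 25.76

    print(crowd_sq_error)              # ~0.04
    print(avg_sq_error - diversity)    # the same value, by the identity

Because diversity is subtracted, a set of diverse models is guaranteed to predict at least as well as its average member, which is the formal reason many-model thinking pays off.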

Some of his online videos on Coursera

Other videos

Reflecting the Past, Shaping the Future: Making AI Work for International Development

USAID, September 2018. 98 pages. Available as PDF

Rick Davies’ comment: A very good overview – balanced, informative, with examples. Worth reading from beginning to end.

Contents

Introduction
Roadmap: How to use this document
Machine learning: Where we are and where we might be going
• ML and AI: What are they?
• How ML works: The basics
• Applications in development
• Case study: Data-driven agronomy and machine learning at the International Center for Tropical Agriculture
• Case study: Harambee Youth Employment Accelerator
Machine learning: What can go wrong?
• Invisible minorities
• Predicting the wrong thing
• Bundling assistance and surveillance
• Malicious use
• Uneven failures and why they matter
How people influence the design and use of ML tools
• Reviewing data: How it can make all the difference
• Model-building: Why the details matter
• Integrating into practice: It’s not just “Plug and Play”
Action suggestions: What development practitioners can do today
• Advocate for your problem
• Bring context to the fore
• Invest in relationships
• Critically assess ML tools
Looking forward: How to cultivate fair & inclusive ML for the future
Quick reference: Guiding questions
Appendix: Peering under the hood [gives more details on specific machine learning algorithms]

See also the associated USAID blog posting, and perhaps also: How can machine learning and artificial intelligence be used in development interventions and impact evaluations?

The Book of Why: The New Science of Cause and Effect

by Judea Pearl, Allen Lane, May 2018

Publisher’s blurb: “‘Correlation does not imply causation.’ This mantra was invoked by scientists for decades in order to avoid taking positions as to whether one thing caused another, such as smoking and cancer, or carbon dioxide and global warming. But today, that taboo is dead. The causal revolution, sparked by world-renowned computer scientist Judea Pearl and his colleagues, has cut through a century of confusion and placed cause and effect on a firm scientific basis. Now, Pearl and science journalist Dana Mackenzie explain causal thinking to general readers for the first time, showing how it allows us to explore the world that is and the worlds that could have been. It is the essence of human and artificial intelligence. And just as Pearl’s discoveries have enabled machines to think better, The Book of Why explains how we can think better.”
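To make the correlation-versus-causation point concrete, here is a minimal simulation in Python (my own illustration, not an example from the book): a hidden common cause makes two variables correlate even though neither causes the other, and adjusting for that common cause makes the association disappear.

    import numpy as np

    # Correlation without causation: a hidden confounder Z drives both X and Y.
    # There is no causal arrow between X and Y, yet they are strongly correlated.
    rng = np.random.default_rng(1)
    z = rng.normal(size=100_000)       # unobserved common cause
    x = z + rng.normal(size=z.size)    # X is caused by Z (plus noise)
    y = z + rng.normal(size=z.size)    # Y is caused by Z (plus noise)

    print(round(np.corrcoef(x, y)[0, 1], 2))           # ~0.5: X and Y look related

    # Adjust for Z by removing its contribution from each variable:
    print(round(np.corrcoef(x - z, y - z)[0, 1], 2))   # ~0.0: the association vanishes

Pearl’s point is that knowing which adjustment is the right one cannot be read off the data alone; it requires a causal model of how the variables relate.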

Introduction: Mind over data (pdf copy)

Chapter 1: The Ladder of Causation (pdf copy)

Reviews: None found yet, but they will be listed here when found

The Politics of Evidence: From evidence-based policy to the good governance of evidence

by Justin Parkhurst. Routledge, 2017.
Available as a pdf (readable online), and in hardback or paperback.

“There has been an enormous increase in interest in the use of evidence for public policymaking, but the vast majority of work on the subject has failed to engage with the political nature of decision making and how this influences the ways in which evidence will be used (or misused) within political arenas. This book provides new insights into the nature of political bias with regards to evidence and critically considers what an ‘improved’ use of evidence would look like from a policymaking perspective.”

“Part I describes the great potential for evidence to help achieve social goals, as well as the challenges raised by the political nature of policymaking. It explores the concern of evidence advocates that political interests drive the misuse or manipulation of evidence, as well as counter-concerns of critical policy scholars about how appeals to ‘evidence-based policy’ can depoliticise political debates. Both concerns reflect forms of bias – the first representing technical bias, whereby evidence use violates principles of scientific best practice, and the second representing issue bias in how appeals to evidence can shift political debates to particular questions or marginalise policy-relevant social concerns.”

“Part II then draws on the fields of policy studies and cognitive psychology to understand the origins and mechanisms of both forms of bias in relation to political interests and values. It illustrates how such biases are not only common, but can be much more predictable once we recognise their origins and manifestations in policy arenas.”

“Finally, Part III discusses ways to move forward for those seeking to improve the use of evidence in public policymaking. It explores what constitutes ‘good evidence for policy’, as well as the ‘good use of evidence’ within policy processes, and considers how to build evidence-advisory institutions that embed key principles of both scientific good practice and democratic representation. Taken as a whole, the approach promoted is termed the ‘good governance of evidence’ – a concept that represents the use of rigorous, systematic and technically valid pieces of evidence within decision-making processes that are representative of, and accountable to, populations served.”

Contents
Part I: Evidence-based policymaking – opportunities and challenges
Chapter 1. Introduction
Chapter 2. Evidence-based policymaking – an important first step, and the need to take the next
Part II: The politics of evidence
Chapter 3. Bias and the politics of evidence
Chapter 4. The overt politics of evidence – bias and the pursuit of political interests
Chapter 5. The subtle politics of evidence – the cognitive-political origins of bias
Part III: Towards the good governance of evidence
Chapter 6. What is ‘good evidence for policy’? From hierarchies to appropriate evidence
Chapter 7. What is the ‘good use of evidence’ for policy?
Chapter 8. From evidence-based policy to the good governance of evidence

OPM’s approach to assessing Value for Money

by Julian King, Oxford Policy Management. January 2018. Available as pdf

Excerpt from Foreword:

In 2016, Oxford Policy Management (OPM) teamed up with Julian King, an evaluation specialist, who worked with staff from across the company to develop the basis of a robust and distinct OPM approach to assessing VfM. The methodology was successfully piloted during the annual reviews of the Department for International Development’s (DFID) Sub-National Governance programme in Pakistan and MUVA, a women’s economic empowerment programme in Mozambique. The approach involves making transparent, evidence-based judgements about how well resources are being used, and whether the value derived is good enough to justify the investment.

To date, we have applied this approach on upwards of a dozen different development projects and programmes, spanning a range of clients, countries, sectors, and budgets. It has been well received by our clients (both funding agencies and partner governments) and project teams alike, who in particular appreciate the use of explicit evaluative reasoning. This involves developing definitions of what acceptable / good / excellent VfM looks like, in the context of each specific project. Critically, these definitions are co-developed and endorsed upfront, in advance of implementation and before the evidence is gathered, which provides an agreed, objective, and transparent basis for making judgements.
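As a rough sketch of what such explicit evaluative reasoning can look like, the Python fragment below (my own illustration, not OPM’s actual tool) encodes pre-agreed standards for acceptable / good / excellent performance and maps evidence onto a rating for each of DFID’s Four E’s; all thresholds and scores are hypothetical.

    def rate(evidence_score, standards):
        """Map an evidence score onto a pre-agreed VfM standard."""
        for label, threshold in standards:   # ordered from best to worst
            if evidence_score >= threshold:
                return label
        return "poor"

    # Hypothetical standards, co-developed and endorsed before evidence is gathered.
    standards = [("excellent", 0.85), ("good", 0.70), ("acceptable", 0.50)]

    # Hypothetical evidence scores for DFID's Four E's.
    scores = {"economy": 0.90, "efficiency": 0.65,
              "effectiveness": 0.75, "equity": 0.45}

    for criterion, score in scores.items():
        print(criterion, "->", rate(score, standards))
    # economy -> excellent, efficiency -> acceptable,
    # effectiveness -> good, equity -> poor

Fixing the standards up front, as the foreword emphasises, is what makes the resulting judgements transparent and contestable rather than post hoc.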

Table of contents
Foreword
Acknowledgements
Executive summary
1 Background
1.1 What is VfM?
1.2 Why evaluate VfM?
1.3 Context
1.4 Overview of this guide
2 Conceptual framework for VfM assessment
2.1 Explicit evaluative reasoning
2.2 VfM criteria
2.3 DFID’s VfM criteria: the Four E’s
2.4 Limitations of the Four E’s
2.5 Defining criteria and standards for the Four E’s
2.6 Mixed methods evidence
2.7 Economic analysis
2.8 Complexity and emergent strategy
2.9 Perspective matters in VfM
2.10 Integration with M&E frameworks
2.11 Timing of VfM assessments
2.12 Level of effort required
3 Designing and implementing a VfM framework
3.1 Step 1: theory of change
3.2 Steps 2 and 3: VfM criteria and standards
3.3 Step 4: identifying evidence required
3.4 Step 5: gathering evidence
3.5 Steps 6–7: analysis, synthesis, and judgements
3.6 Step 8: reporting
Bibliography
Contact us

Review by Dr E. Jane Davidson, author of Evaluation Methodology Basics (Sage, 2005) and Director of Real Evaluation LLC, Seattle

Finally, an approach to Value for Money that breaks free of the “here’s the formula” approach and instead emphasises the importance of thoughtful and well-evidenced evaluative reasoning. Combining an equity lens with insights and perspectives from diverse stakeholders helps us understand the value of different constellations of outcomes relative to the efforts and investments required to achieve them. This step-by-step guide helps decision makers figure out how to answer the VfM question in an intelligent way when some of the most valuable outcomes may be the hardest to measure – as they so often are.

Bit by Bit: Social Research in the Digital Age

by Matthew J. Salganik, Princeton University Press, 2017

Very positive reviews by…

Selected quotes:

“Overall, the book relies on a repeated narrative device, imagining how a social scientist and a data scientist might approach the same research opportunity. Salganik suggests that where data scientists are glass-half-full people and see opportunities, social scientists are quicker to highlight problems (the glass-half-empty camp). He is also upfront about how he has chosen to write the book, adopting the more optimistic view of the data scientist, while holding on to the caution expressed by social scientists”

“Salganik argues that data scientists most often work with “readymades”, social scientists with “custommades”, illustrating the point through art: data scientists are more like Marcel Duchamp, using existing objects to make art; meanwhile, social scientists operate in the custom-made style of Michelangelo, which offers a neat fit between research questions and data, but does not scale well. The book is thus a call to arms, to encourage more interdisciplinary research and for both sides to see the potential merits and drawbacks of each approach. It will be particularly welcome to researchers who have already started to think along similar lines, of which I suspect there are many”

• Illustrates important ideas with examples of outstanding research
• Combines ideas from social science and data science in an accessible style and without jargon
• Goes beyond the analysis of “found” data to discuss the collection of “designed” data such as surveys, experiments, and mass collaboration
• Features an entire chapter on ethics
• Includes extensive suggestions for further reading and activities for the classroom or self-study

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.

Contents

Preface
1 Introduction
2 Observing Behavior
3 Asking Questions
4 Running Experiments
5 Creating Mass Collaboration
6 Ethics
7 The Future
Acknowledgments
References
Index

More detailed contents page available via Amazon Look Inside

PS: See also this Vimeo video presentation by Salganik: Wiki Surveys – Open and Quantifiable Social Data Collection plus this PLOS paper on the same topic.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor

by Virginia Eubanks, St. Martin’s Press, New York, NY, 2018

Unfortunately, a contents list does not seem to be available online. But here is a lengthy excerpt from the book.

And here is a YouTube interview (taped 12/05/2017) in which the author, University at Albany political scientist Virginia Eubanks, discusses the book.

 

Impact Evaluation of Development Interventions: A Practical Guide

by Howard White and David A. Raitzer. Published by Asian Development Bank, 2017. Available as a pdf (3.12Mb)

The publisher says: “This book offers guidance on the principles, methods, and practice of impact evaluation. It contains material for a range of audiences, from those who may use or manage impact evaluations to applied researchers.”

“Impact evaluation is an empirical approach to estimating the causal effects of interventions, in terms of both magnitude and statistical significance. Expanded use of impact evaluation techniques is critical to rigorously derive knowledge from development operations and for development investments and policies to become more evidence-based and effective. To help backstop more use of impact evaluation approaches, this book introduces core concepts, methods, and considerations for planning, designing, managing, and implementing impact evaluation, supplemented by examples. The topics covered range from impact evaluation purposes to basic principles, specific methodologies, and guidance on field implementation. It has materials for a range of audiences, from those who are interested in understanding evidence on “what works” in development, to those who will contribute to expanding the evidence base as applied researchers.”
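The core estimation idea can be shown in a few lines of Python. This is my own simulated illustration, not an example from the book: the causal effect of a randomly assigned intervention is estimated as the difference in mean outcomes between treatment and control groups, with a t-test for statistical significance.

    import numpy as np
    from scipy import stats

    # Simulated data: outcomes for a control group and for a treated group
    # whose true programme effect is 6 points. All numbers are invented.
    rng = np.random.default_rng(42)
    control = rng.normal(loc=100, scale=15, size=500)
    treated = rng.normal(loc=106, scale=15, size=500)

    effect = treated.mean() - control.mean()             # magnitude of impact
    t_stat, p_value = stats.ttest_ind(treated, control)  # statistical significance

    print(f"estimated impact: {effect:.1f}")   # close to the true effect of 6
    print(f"p-value: {p_value:.4f}")           # small, so unlikely under "no effect"

Randomization is what licenses the causal reading of this difference: with random assignment, the control group’s mean outcome is an unbiased estimate of what the treated group’s outcome would have been without the intervention.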

Contents 

  • Introduction: Impact Evaluation for Evidence-Based Development
  • Using Theories of Change to Identify Impact Evaluation Questions
  • The Core Concepts of Impact Evaluation
  • Randomized Controlled Trials
  • Nonexperimental Designs
  • What and How to Measure: Data Collection for Impact Evaluation
  • Sample Size Determination for Data Collection
  • Managing the Impact Evaluation Process
  • Appendixes

Rick Davies’ comments: I have only scanned, not read, this book. But some of the sections that I found of interest included:

  • 3.4 Time Dimension of Impacts…not always covered, but very important when planning the timing of evaluations of any kind
  • Page 2: “Impact evaluations are empirical studies that quantify the causal effects of interventions on outcomes of interest” I am surprised that the word “explain” is not also included in this definition. Or perhaps it is an intentionally minimalist definition, and omission does not mean it has to be ignored
  • Page 23 on the Funnel of Attribution, which I would like to see presented in the form of overlapping sets
  • There could be better acknowledgment of other sources by referencing them, e.g. Outcome Mapping (p25, re behavioral change) and Realist Evaluation (p41)
  • Good explanations of the technical terms used, on pages 42 and 44 for example
  • Overcoming resistance to RCTs (p59) and 10 things that can go wrong with RCTs (p61)
  • The whole of chapter 6 on data collection
  • and lots more…

The Tyranny of Metrics

The Tyranny of Metrics, by Jerry Z. Muller. Princeton University Press. RRP £19.95/$24.95, 240 pages

See Tim Harford’s review of this book in the Financial Times, 24 January 2018.

Some quotes: Muller shows that metrics are often used as a substitute for relevant experience, by managers with generic rather than specific expertise. Muller does not claim that metrics are always useless, but that we expect too much from them as a tool of management…

The Tyranny of Metrics does us a service in briskly pulling together parallel arguments from economics, management science, philosophy and psychology along with examples from education, policing, medicine, business and the military.

In an excellent final chapter, Muller summarises his argument thus: “measurement is not an alternative to judgement: measurement demands judgement: judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available”.

The book does not engage seriously enough with the possibility that the advantages of metric-driven accountability might outweigh the undoubted downsides. Tellingly, Muller complains of a university ratings metric that rewards high graduation rates, access for disadvantaged students, and low costs. He says these requirements are “mutually exclusive”, but they are not. They are in tension with each other.

Nor does this book reckon with evidence that mechanical statistical predictions often beat the subjective judgment of experts.

…and perhaps most curiously, there is no discussion of computers, cheap sensors, or big data. In this respect, at least, the book could have been written in the 1980s.

Table of Contents

Introduction
I THE ARGUMENT
1 The Argument in a Nutshell
2 Recurring Flaws
II THE BACKGROUND
3 The Origins of Measuring and Paying for Performance
4 Why Metrics Became So Popular
5 Principals, Agents, and Motivation
6 Philosophical Critiques
III THE MISMEASURE OF ALL THINGS? Case Studies
7 Colleges and Universities
8 Schools
9 Medicine
10 Policing
11 The Military
12 Business and Finance
13 Philanthropy and Foreign Aid
EXCURSUS
14 When Transparency Is the Enemy of Performance: Politics, Diplomacy, Intelligence, and Marriage
IV CONCLUSIONS
15 Unintended but Predictable Negative Consequences
16 When and How to Use Metrics: A Checklist
Acknowledgments
Notes
Index

Search inside this book using a Google Books view