On prediction, Nate Silver’s “The Signal and the Noise”

Title: The Signal and the Noise: The Art and Science of Prediction
Author: Nate Silver
Publisher: Penguin UK, 2012
ISBN: 1846147530, 9781846147531
Length: 544 pages

Available on Amazon. Use Google Books to read the first chapter.

RD Comment: Highly recommended reading. Reading this book reminded me of M&E data I had to examine on a large maternal and child health project in Indonesia. Rates on key indicators were presented for each of the focus districts for the year prior to the project's start, then for each year of the four-year project period. I remember thinking how variable these numbers were; there was nothing like a trend over time in any of the districts. Of course, what I was looking at was probably largely noise: variations arising from changes in who collected and reported the underlying data, and how.

This sort of situation is by no means uncommon. Most projects, if they have a baseline at all, have baseline data from just one year prior to the project's start. Subsequent measures of change are then, ideally, compared to that baseline. This arrangement assumes minimal noise, which is a tad optimistic. The alternative, which should not be so difficult in large bilateral projects dealing with health and education systems, for example, would be to have a baseline data series covering the preceding x years, where x is at least as long as the expected duration of the proposed project.
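The point about single-year baselines can be made concrete with a small simulation. This is a sketch only, with made-up numbers: a hypothetical district indicator whose true value does not change at all, observed each year through reporting noise. A single-year baseline is one noisy draw, so comparing an endline against it can manufacture an apparent "change"; averaging a multi-year baseline series damps the noise.

```python
import random
import statistics

random.seed(42)

# Illustrative assumptions, not real project data: a flat true rate of 60
# (e.g. % of births attended by skilled staff) with reporting noise of
# standard deviation 8 points from year to year.
TRUE_RATE = 60.0
NOISE_SD = 8.0

def observed_rate():
    """One year's reported rate: the unchanging true rate plus noise."""
    return TRUE_RATE + random.gauss(0, NOISE_SD)

# A single-year baseline is just one noisy observation.
single_year_baseline = observed_rate()

# A four-year baseline series averages out much of the noise.
series = [observed_rate() for _ in range(4)]
multi_year_baseline = sum(series) / len(series)

# An end-of-project measurement, even though nothing has really changed.
endline = observed_rate()

print(f"single-year baseline:           {single_year_baseline:5.1f}")
print(f"four-year baseline mean:        {multi_year_baseline:5.1f}")
print(f"endline (no true change):       {endline:5.1f}")
print(f"apparent change vs single year: {endline - single_year_baseline:+.1f}")
print(f"apparent change vs series mean: {endline - multi_year_baseline:+.1f}")

# Over many repetitions, the spread of four-year means is roughly half
# the spread of single-year baselines (sd / sqrt(4)).
single_draws = [observed_rate() for _ in range(2000)]
four_year_means = [sum(observed_rate() for _ in range(4)) / 4
                   for _ in range(2000)]
print(f"sd of single-year baselines: {statistics.pstdev(single_draws):.1f}")
print(f"sd of four-year means:       {statistics.pstdev(four_year_means):.1f}")
```

The numbers here are arbitrary; the qualitative lesson is not. The longer the baseline series, the smaller the chance of mistaking a noisy starting point for a real pre/post difference.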

See also Malkiel's review in the Wall Street Journal (Telling Lies From Statistics). Malkiel is the author of "A Random Walk Down Wall Street." While his review is positive overall, he charges Silver with ignoring false positives when claiming that some recent financial crises were predictable. Reviews are also available in The Guardian and the LA Times. Nate Silver also writes a well-known blog for the New York Times.

Addressing attribution of cause and effect in small n impact evaluations: towards an integrated framework

Howard White and Daniel Phillips, International Initiative for Impact Evaluation, Working Paper 15, May 2012. Available as an MS Word doc.


With the results agenda in the ascendancy in the development community, there is an increasing need to demonstrate that development spending makes a difference, that it has an impact. This requirement to demonstrate results has fuelled an increase in the demand for, and production of, impact evaluations. There exists considerable consensus among impact evaluators conducting large n impact evaluations involving tests of statistical difference in outcomes between the treatment group and a properly constructed comparison group. However, no such consensus exists when it comes to assessing attribution in small n cases, i.e. when there are too few units of assignment to permit tests of statistical difference in outcomes between the treatment group and a properly constructed comparison group.

We examine various evaluation approaches that could potentially be suitable for small n analysis and find that a number of them share a methodological core which could provide a basis for consensus. This common core involves the specification of a theory of change together with a number of further alternative causal hypotheses. Causation is established beyond reasonable doubt by collecting evidence to validate, invalidate, or revise the hypothesised explanations, with the goal of rigorously evidencing the links in the actual causal chain.

We argue that, properly applied, approaches which undertake these steps can be used to address attribution of cause and effect. However, we also find that more needs to be done to ensure that small n evaluations minimise the biases which are likely to arise from the collection, analysis and reporting of qualitative data. Drawing on insights from the field of cognitive psychology, we argue that there is scope for considerable bias, both in the way in which respondents report causal relationships, and in the way in which evaluators gather and present data; this points to the need to incorporate explicit and systematic approaches to qualitative data collection and analysis as part of any small n evaluation.


Social Psychology and Evaluation

by Melvin M. Mark PhD (Editor), Stewart I. Donaldson PhD (Editor), Bernadette Campbell PhD (Editor). Guilford Press, May 2011. Available on Google Books.
Book blurb: "This compelling work brings together leading social psychologists and evaluators to explore the intersection of these two fields and how their theory, practices, and research findings can enhance each other. An ideal professional reference or student text, the book examines how social psychological knowledge can serve as the basis for theory-driven evaluation; facilitate more effective partnerships with stakeholders and policymakers; and help evaluators ask more effective questions about behavior. Also identified are ways in which real-world evaluation findings can identify gaps in social psychological theory and test and improve the validity of social psychological findings; for example, in the areas of cooperation, competition, and intergroup relations. The volume includes a useful glossary of both fields' terms and offers practical suggestions for fostering cross-fertilization in research, graduate training, and employment opportunities. Each chapter features introductory and concluding comments from the editors."