THE MODEL THINKER: What You Need to Know to Make Data Work for You

by Scott E. Page. Published by Basic Books, 2018

Book review by Carol Wells: “Page proposes a ‘many-model paradigm,’ where we apply several mathematical models to a single problem. The idea is to replicate ‘the wisdom of the crowd,’ which, in groups like juries, has shown us that input from many sources tends to be more accurate, complete, and nuanced than input from a single source.”
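
Page’s “diversity prediction theorem” gives the arithmetic behind this claim: the squared error of the crowd’s (or model ensemble’s) average prediction equals the average squared error of the individual predictions minus their diversity. A minimal sketch in Python; the numbers are invented for illustration, not taken from the book:

```python
# Page's diversity prediction theorem:
#   crowd squared error = average individual squared error - diversity
# All numbers below are invented for illustration.
import numpy as np

truth = 100.0
predictions = np.array([90.0, 105.0, 120.0])  # three models' estimates

crowd = predictions.mean()
crowd_error = (crowd - truth) ** 2               # 25.0
avg_error = ((predictions - truth) ** 2).mean()  # 175.0
diversity = ((predictions - crowd) ** 2).mean()  # 150.0

print(crowd_error == avg_error - diversity)      # True
```

Because diversity is never negative, the averaged prediction can never do worse than the average individual prediction, which is why adding diverse models tends to help.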

Contents:

Chapter 1 – The Many-Model Thinker
Chapter 2 – Why Model?
Chapter 3 – The Science of Many Models
Chapter 4 – Modeling Human Actors
Chapter 5 – Normal Distributions: The Bell Curve
Chapter 6 – Power-Law Distributions: Long Tails
Chapter 7 – Linear Models
Chapter 8 – Concavity and Convexity
Chapter 9 – Models of Value and Power
Chapter 10 – Network Models
Chapter 11 – Broadcast, Diffusion, and Contagion
Chapter 12 – Entropy: Modeling Uncertainty
Chapter 13 – Random Walks
Chapter 14 – Path Dependence
Chapter 15 – Local Interaction Models
Chapter 16 – Lyapunov Functions and Equilibria
Chapter 17 – Markov Models
Chapter 18 – Systems Dynamics Models
Chapter 19 – Threshold Models with Feedbacks
Chapter 20 – Spatial and Hedonic Choice
Chapter 21 – Game Theory Models Times Three
Chapter 22 – Models of Cooperation
Chapter 23 – Collective Action Problems
Chapter 24 – Mechanism Design
Chapter 25 – Signaling Models
Chapter 26 – Models of Learning
Chapter 27 – Multi-Armed Bandit Problems
Chapter 28 – Rugged-Landscape Models
Chapter 29 – Opioids, Inequality, and Humility

From his Coursera course, which the book builds on: “We live in a complex world with diverse people, firms, and governments whose behaviors aggregate to produce novel, unexpected phenomena. We see political uprisings, market crashes, and a never-ending array of social trends. How do we make sense of it? Models. Evidence shows that people who think with models consistently outperform those who don’t. And, moreover, people who think with lots of models outperform people who use only one. Why do models make us better thinkers? Models help us to better organize information – to make sense of that fire hose or hairball of data (choose your metaphor) available on the Internet. Models improve our abilities to make accurate forecasts. They help us make better decisions and adopt more effective strategies. They can even improve our ability to design institutions and procedures. In this class, I present a starter kit of models: I start with models of tipping points. I move on to cover models that explain the wisdom of crowds, models that show why some countries are rich and some are poor, and models that help unpack the strategic decisions of firms and politicians. The models covered in this class provide a foundation for future social science classes, whether they be in economics, political science, business, or sociology. Mastering this material will give you a huge leg up in advanced courses. They also help you in life. Here’s how the course will work. For each model, I present a short, easily digestible overview lecture. Then, I’ll dig deeper. I’ll go into the technical details of the model. Those technical lectures won’t require calculus but be prepared for some algebra. For all the lectures, I’ll offer some questions and we’ll have quizzes and even a final exam. If you decide to do the deep dive, and take all the quizzes and the exam, you’ll receive a Course Certificate. If you just decide to follow along for the introductory lectures to gain some exposure, that’s fine too. It’s all free. And it’s all here to help make you a better thinker!”

Some of his online videos on Coursera

Other videos

Participatory modelling and mental models

These are the topics covered by two papers I have come across today, courtesy of Peter Barbrook-Johnson, of Surrey University. Both papers provide good overviews of their respective fields.

Moon, K., Adams, V. M., Dickinson, H., Guerrero, A. M., Biggs, D., Craven, L., … Ross, H. (2019). Mental models for conservation research and practice. Conservation Letters, 1–11.

Abstract: Conservation practice requires an understanding of complex social-ecological processes of a system and the different meanings and values that people attach to them. Mental models research offers a suite of methods that can be used to reveal these understandings and how they might affect conservation outcomes. Mental models are representations in people’s minds of how parts of the world work. We seek to demonstrate their value to conservation and assist practitioners and researchers in navigating the choices of methods available to elicit them. We begin by explaining some of the dominant applications of mental models in conservation: revealing individual assumptions about a system, developing a stakeholder-based model of the system, and creating a shared pathway to conservation. We then provide a framework to “walk through” the stepwise decisions in mental models research, with a focus on diagram-based methods. Finally, we discuss some of the limitations of mental models research and application that are important to consider. This work extends the use of mental models research in improving our ability to understand social-ecological systems, creating a powerful set of tools to inform and shape conservation initiatives.

PDF copy here

Voinov, A. (2018). Tools and methods in participatory modeling: Selecting the right tool for the job. Environmental Modelling and Software, 109, 232–255.

Abstract: Various tools and methods are used in participatory modelling, at different stages of the process and for different purposes. The diversity of tools and methods can create challenges for stakeholders and modelers when selecting the ones most appropriate for their projects. We offer a systematic overview, assessment, and categorization of methods to assist modelers and stakeholders with their choices and decisions. Most available literature provides little justification or information on the reasons for the use of particular methods or tools in a given study. In most of the cases, it seems that the prior experience and skills of the modelers had a dominant effect on the selection of the methods used. While we have not found any real evidence of this approach being wrong, we do think that putting more thought into the method selection process and choosing the most appropriate method for the project can produce better results. Based on expert opinion and a survey of modelers engaged in participatory processes, we offer practical guidelines to improve decisions about method selection at different stages of the participatory modeling process.

PDF copy here

Subjective measures in humanitarian analysis

A note for ACAPS, by Aldo Benini (2018). PDF available at https://www.acaps.org/sites/acaps/files/resources/files/20180115_acaps_technical_note_subjective_measures_full_report.pdf

Purpose and motivation

This note seeks to sensitize analysts to the growing momentum of subjective methods and measures around, and eventually inside, the humanitarian field. It clarifies the nature of subjective measures and their place in humanitarian needs assessments. It weighs their strengths and challenges. It discusses, in considerable depth, a small number of instruments and methods that are ready, or have good potential, for humanitarian analysis.

Post-World War II culture and society have seen an acceleration of subjectivity in all institutional realms, although at variable paces. The sciences responded with considerable lag. They have created new methodologies – “mixed methods” (quantitative and qualitative), “subjective measures”, self-assessments of all kinds – that claim a level playing field with distant, mechanical objectivity. For the period 2000-2012, using the search term “subjective measure”, Google Scholar returns around 600 references per year; for the period 2013 – fall 2017, the figure quintuples to 3,000. Since 2012, the United Nations has been publishing the annual World Happiness Report; its first edition discusses the validity and reliability of subjective measures at length.

Closer to the humanitarian domain, poverty measurement has increasingly appreciated subjective data. Humanitarian analysis is at the initial stages of feeling the change. Adding “AND humanitarian” to the above search term produces 8 references per year for the first period, and 40 for the second – a trickle, but undeniably an increase. Other searches confirm the intuition that something is happening below the surface; for instance, “mixed method AND humanitarian” returns 110 per year in the first, and 640 in the second period – a growth similar to that of “subjective measures”.

Still, in some quarters, subjectivity remains suspect. Language matters. Some collaborations on subjective measures have preferred billing them as “experience-based measures”. Who doubts experience? It is good salesmanship, but we stay with “subjective” unless the official name of the measure contains “experience”.

What follows 

We proceed as follows: In the foundational part, we discuss the nature of, motivation for, and reservations against, subjective measures. We provide illustrations from poverty measurement and from food insecurity studies. In the second part, we present three tools – scales, vignettes and hypothetical questions – with generic pointers as well as with specific case studies. We conclude with recommendations and by noting instruments that we have not covered, but which are likely to grow more important in years to come.

Rick Davies comment: Highly recommended!

Reflecting the Past, Shaping the Future: Making AI Work for International Development

USAID, September 2018. 98 pages. Available as PDF

Rick Davies comment: A very good overview, balanced, informative, with examples. Worth reading from beginning to end.

Contents

Introduction
Roadmap: How to use this document
Machine learning: Where we are and where we might be going
• ML and AI: What are they?
• How ML works: The basics
• Applications in development
• Case study: Data-driven agronomy and machine learning
at the International Center for Tropical Agriculture
• Case study: Harambee Youth Employment Accelerator
Machine learning: What can go wrong?
• Invisible minorities
• Predicting the wrong thing
• Bundling assistance and surveillance
• Malicious use
• Uneven failures and why they matter
How people influence the design and use of ML tools
• Reviewing data: How it can make all the difference
• Model-building: Why the details matter
• Integrating into practice: It’s not just “Plug and Play”
Action suggestions: What development practitioners can do today
• Advocate for your problem
• Bring context to the fore
• Invest in relationships
• Critically assess ML tools
Looking forward: How to cultivate fair & inclusive ML for the future
Quick reference: Guiding questions
Appendix: Peering under the hood [ gives more details on specific machine learning algorithms]

See also the associated USAID blog posting and maybe also How can machine learning and artificial intelligence be used in development interventions and impact evaluations?

Bayesian belief networks – Their use in humanitarian scenarios: An invitation to explorers

By Aldo Benini. July 2018. Available here as a pdf

Summary

This is an invitation for humanitarian data analysts and others – assessment, policy and advocacy specialists, response planners and grant writers – to enhance the reach and quality of scenarios by means of so-called Bayesian belief networks. Belief networks are a powerful technique for structuring scenarios in a qualitative as well as quantitative approach. Modern software, with elegant graphical user interfaces, makes for rapid learning, convenient drafting, effortless calculation and compelling presentation in workshops, reports and Web pages.

In recent years, scenario development in humanitarian analysis has grown. Until now, however, the community has hardly ever tried out belief networks, in contrast to the natural disaster and ecological communities. This note offers a small demonstration. We build a simple belief network using information currently (mid-July 2018) available on a recent violent crisis in Nigeria. We produce and discuss several possible scenarios for the next three months, computing probabilities of two humanitarian outcomes.
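
To make this concrete, here is a minimal sketch of such a belief network in Python using the pgmpy library. The variables, structure and probabilities are hypothetical illustrations, not figures from the note’s Nigeria example:

```python
# A toy three-node belief network: violence drives displacement,
# which drives food insecurity. All numbers are invented.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Violence", "Displacement"),
                         ("Displacement", "FoodInsecurity")])

# States are 0 = low, 1 = high.
cpd_v = TabularCPD("Violence", 2, [[0.7], [0.3]])
cpd_d = TabularCPD("Displacement", 2,
                   [[0.9, 0.4],   # P(low  | Violence = low/high)
                    [0.1, 0.6]],  # P(high | Violence = low/high)
                   evidence=["Violence"], evidence_card=[2])
cpd_f = TabularCPD("FoodInsecurity", 2,
                   [[0.8, 0.3],
                    [0.2, 0.7]],
                   evidence=["Displacement"], evidence_card=[2])

model.add_cpds(cpd_v, cpd_d, cpd_f)
assert model.check_model()

# Scenario query: how likely is high food insecurity if violence is high?
infer = VariableElimination(model)
print(infer.query(["FoodInsecurity"], evidence={"Violence": 1}))
```

Re-running the query under different evidence is the kind of rapid what-if exploration the note describes for scenario building.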

Figure 1: Belief network with probability bar charts (segment)

We conclude with reflections on the contributions of belief networks to humanitarian scenario building and elsewhere. While much speaks for this technique, the growth of competence, the uses in workshops and the interpretation of graphs and statistics need to be fostered cautiously, with consideration for the real-world complexity and for the doubts that stakeholders may harbor about quantitative approaches. This note is in its first draft. It needs to be revised, possibly by several authors, in order to connect to progress in humanitarian scenario methodologies, expert judgment and workshop didactics.

RD Comment: See also the comment and links provided below by Simon Henderson on his experience (with IOD/PARC) of trialling the use of Bayesian belief networks.

Representing Theories of Change: Technical Challenges with Evaluation Consequences

A CEDIL Inception Paper, by Rick Davies. August 2018. A pdf copy is available here.

Abstract: This paper looks at the technical issues associated with the representation of Theories of Change and the implications of design choices for the evaluability of those theories. The focus is on the description of connections between events rather than the events themselves, because this is seen as a widespread design weakness. Using examples and evidence from Internet sources, six structural problems are described, along with their consequences for evaluation.

The paper then outlines a range of different ways of addressing these problems, which could be used by programme designers, implementers and evaluators. The paper concludes with some caution, speculating on why the design problems are so endemic, but also pointing a way forward. Four strands of work are identified that CEDIL and DFID could invest in to develop solutions identified in the paper.
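
One low-tech response to the problems the paper describes is to hold a Theory of Change as an explicit directed graph in which the links, not just the events, carry attributes. A minimal sketch with networkx; the events and attributes are invented for illustration and are not from the paper:

```python
# Representing a Theory of Change so that connections between events
# are first-class objects that can be queried and audited.
import networkx as nx

toc = nx.DiGraph()
toc.add_edge("Training delivered", "Skills improved",
             mechanism="practice + feedback", evidence="pre/post test")
toc.add_edge("Skills improved", "Household income rises",
             mechanism="better employment", evidence=None)

# A simple evaluability check: flag links with no evidence source.
for src, dst, data in toc.edges(data=True):
    if not data.get("evidence"):
        print(f"Untested link: {src} -> {dst}")
```

Once links are explicit objects, “which causal claims have no stated evidence?” becomes a query rather than a visual inspection of a diagram.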

Table of Contents

What is a theory of change?
What is the problem?
A summary of the problems….
And a word in defence….
Six possible ways forward
Why so little progress?
Implications for CEDIL and DFID
References

Postscript: Michael Bamberger’s comments on this paper, 13 July 2018

I think this is an extremely useful and well-documented paper.  Framing the discussion around the 6 problems, and the possible ways forward is a good way to organize the presentation.  The documentation and links that you present will be greatly appreciated, as well as the graphical illustrations of the different approaches.
Without getting into too much detail, the following are a few general thoughts on this very useful paper:
  1. A criticism of many TOCs is that they only describe how a program will achieve its intended objectives and do not address the challenges of identifying and monitoring potential unintended, and often undesired, outcomes (UOs). While some UOs could not have been anticipated, many others could, and these should perhaps be built into the model. For example, there is an extensive literature documenting negative consequences for women of political and economic empowerment, often including increased domestic violence. So these could be built into the TOC, but in many cases they are not.
  2. Many, but certainly not all, TOCs do not adequately address the challenges of emergence: the fact that the environment in which the program operates, the political and organizational arrangements, and the characteristics of the target population and how they respond to the program are all likely to change significantly during the life of the project. Many TOCs implicitly assume that the project and its environment remain relatively stable throughout the project lifetime. Of course, many of the models you describe do not assume a stable environment, but it might be useful to flag the challenges of emergence. Many agencies are starting to become interested in agile project management to address the emergence challenge.
  3. Given the increasing recognition that most evaluation approaches do not adequately address complexity, and the interest in complexity-responsive evaluation approaches, you might like to focus more directly on how TOCs can address complexity. Complexity is, of course, implicit in much of your discussion, but it might be useful to highlight the term.
  4. Do you think it would be useful to include a section on how big data and data analytics can strengthen the ability to develop more sophisticated TOCs? Many agencies may feel that many of the techniques you mention would not be feasible with the kinds of data they collect and their current analytical tools.
  5. Related to the previous point, it might be useful to include a brief discussion of how accessible the quite sophisticated methods that you discuss would be to many evaluation offices. What kinds of expertise would be required? Where would the data come from? How much would it cost? You don’t need to go into too much detail, but many readers would like guidance on which approaches are likely to be accessible to which kinds of agency.
  6. Your discussion of “Why so little progress?” is critical.  It is my impression that among the agencies with whom I have worked,  while many evaluations pay lip-service to TOC, the full potential of the approach is very often not utilized.  Often the TOC is constructed at the start of a project with major inputs from an external consultant.  The framework is then rarely consulted again until the final evaluation report is being written, and there are even fewer instances where it is regularly tested, updated and revised.  There are of course many exceptions, and I am sure experience may be different with other kinds of agencies.  However, I think that many implementing agencies (and many donors) have very limited expectations concerning what they hope TOC will contribute.  There is probably very little appetite among many implementing agencies (as opposed to a few funding agencies such as DFID) for more refined models.
  7. Among agencies where this is the case, it will be necessary to demonstrate the value-added of investing time and resources in more refined TOCs.  So it might be useful to expand the discussion of the very practical, as opposed to the broader theoretical, justifications for investing in the existing TOC.
  8. In addition to the above considerations, many evaluators tend to be quite conservative in their choice of methodologies and they are often reluctant to adopt new methodologies – particularly if these use approaches with which they are not familiar.  New approaches, such as some of those you describe can also be seen as threatening if they might undermine the status of the evaluation professional as expert in his/her field.

Participatory approaches to the development of a Theory of Change: Beginnings of a list

Background

There have been quite a few generic guidance documents written on the use of Theories of Change. These are not the main focus of this list. Nevertheless, here are those I have come across:

Klein, M (2018) Theory of Change Quality Audit, at https://changeroo.com/toc-academy/posts/expert-toc-quality-audit-academy

UNDG (2017) Theory of Change – UNDAF Companion Guidance, UNDG.  https://undg.org/wp-content/uploads/2017/06/Theory-of-Change-UNDAF-Companion-Pieces.pdf

Van Es M, Guijt I and Vogel I (2015) Theory of Change Thinking in Practice. HIVOS. http://www.theoryofchange.nl/sites/default/files/resource/hivos_Theory of Change_guidelines_final_nov_2015.pdf.

Valters C (2015) Theories of Change: Time for a radical approach to learning in development. ODI. https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/9835.pdf.

Rogers P (2014) Theory of Change. Methodological Briefs Impact Evaluation No. 2. UNICEF.
http://devinfolive.info/impact_evaluation/img/downloads/Theory_of_Change_ENG.pdf.

Vogel I (2012) Review of the use of ‘Theory of Change’ in international development. Review Report for DFID.
http://www.dfid.gov.uk/r4d/pdf/outputs/mis_spc/DFID_Theory of Change_Review_VogelV7.pdf

Vogel I (2012) ESPA guide to working with Theory of Change for research projects. LTS/ITAD for ESPA. http://www.espa.ac.uk/files/espa/ESPA-Theory-of-Change-Manual-FINAL.pdf

Stein, D., & Valters, C. (2012). Understanding Theory of Change in International Development. The Asia Foundation. http://www2.lse.ac.uk/internationalDevelopment/research/JSRP/downloads/JSRP1.SteinValters.pdf

James, C. (2011, September). Theory of Change Review. A Report Commissioned by Comic Relief. http://www.theoryofchange.org/pdf/James_Theory of Change.pdf

Participatory approaches to ToC construction

Burbaugh B, Seibel M and Archibald T (2017) Using a Participatory Approach to Investigate a Leadership Program’s Theory of Change. Journal of Leadership Education 16(1): 192–205.

Katherine Austin-Evelyn and Erin Williams  (2016) Mapping Change for Girls, One Post-It Note at a Time. Blog posting

Breuer E, Lee L, De Silva M, et al. (2016) Using theory of change to design and evaluate public health interventions: a systematic review. Implementation science: IS 11: 63. DOI: 10.1186/s13012-016-0422-6. Recommended

Breuer E, De Silva MJ, Fekadu A, et al. (2014) Using workshops to develop theories of change in five low and middle-income countries: lessons from the programme for improving mental health care (PRIME). International Journal of Mental Health Systems 8: 15. DOI: 10.1186/1752-4458-8-15.

De Silva MJ, Breuer E, Lee L, et al. (2014) Theory of Change: a theory-driven approach to enhance the Medical Research Council’s framework for complex interventions. Trials 15: 267. DOI: 10.1186/1745-6215-15-267.

Participatory Modelling: Beginnings of a list

What is Participatory Modelling?

Gray et al (2018) “The field of PM lies at the intersection of participatory approaches to planning, computational modeling, and environmental modeling”

Wikipedia: “Participatory modeling is a purposeful learning process for action that engages the implicit and explicit knowledge of stakeholders to create formalized and shared representation(s) of reality. In this process, the participants co-formulate the problem and use modeling practices to aid in the description, solution, and decision-making actions of the group. Participatory modeling is often used in environmental and resource management contexts. It can be described as engaging non-scientists in the scientific process. The participants structure the problem, describe the system, create a computer model of the system, use the model to test policy interventions, and propose one or more solutions. Participatory modeling is often used in natural resources management, such as forests or water.

There are numerous benefits from this type of modeling, including a high degree of ownership and motivation towards change for the people involved in the modeling process. There are two approaches which provide highly different goals for the modeling: continuous modeling and conference modeling.”
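
Fuzzy Cognitive Mapping, which appears in several of the references below, is one common way to turn a stakeholder-drawn concept map into a runnable model. A minimal sketch; the concepts and weights are invented, as if elicited in a workshop:

```python
# A toy Fuzzy Cognitive Map: concepts are nodes, signed weights are
# the influences participants drew between them. All values invented.
import numpy as np

concepts = ["Rainfall", "Crop yield", "Income", "Deforestation"]
W = np.array([  # W[i, j] = influence of concept i on concept j
    [0.0,  0.7,  0.0,  0.0],   # rainfall boosts yield
    [0.0,  0.0,  0.8,  0.0],   # yield boosts income
    [0.0,  0.0,  0.0, -0.4],   # income reduces deforestation
    [-0.3, -0.5, 0.0,  0.0],   # deforestation cuts rainfall and yield
])

state = np.array([0.9, 0.5, 0.5, 0.5])   # scenario: high rainfall
for _ in range(30):                       # iterate towards a fixed point
    state = 1 / (1 + np.exp(-(state + state @ W)))

print(dict(zip(concepts, state.round(2))))
```

Changing a weight or an initial value in front of the group and re-running the loop is the “use the model to test policy interventions” step in miniature.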

Recent references
  • Olazabal M, Neumann MB, Foudi S, et al. (n.d.) Transparency and Reproducibility in Participatory Systems Modelling: the Case of Fuzzy Cognitive Mapping. Systems Research and Behavioral Science 0(0). DOI: 10.1002/sres.2519.
  • Gray S, Voinov A, Paolisso M, et al. (2018) Purpose, processes, partnerships, and products: four Ps to advance participatory socio-environmental modeling. Ecological Applications 28(1): 46–61. DOI: 10.1002/eap.1627.
  • Hedelin B, Evers M, Alkan-Olsson J, et al. (2017) Participatory modelling for sustainable development: Key issues derived from five cases of natural resource and disaster risk management. Environmental Science & Policy 76: 185–196. DOI: 10.1016/j.envsci.2017.07.001.
  • Basco-Carrera L, Warren A, van Beek E, et al. (2017) Collaborative modelling or participatory modeling? A framework for water resources management. Environmental Modelling & Software 91: 95–110. DOI: 10.1016/j.envsoft.2017.01.014.
  • Eker S, Zimmermann N, Carnohan S, et al. (2017) Participatory system dynamics modelling for housing, energy and wellbeing interactions. Building Research & Information 0(0): 1–17. DOI: 10.1080/09613218.2017.1362919.
  • Voinov A, Kolagani N, McCall MK, et al. (2016) Modelling with stakeholders – Next generation. Environmental Modelling and Software 77: 196–220. DOI: 10.1016/j.envsoft.2015.11.016.
  • Voinov AA (2010) Participatory Modeling: What, Why, How? University of Twente. Available at:  http://www2.econ.iastate.edu/tesfatsi/ParticipatoryModelingWhatWhyHow.AVoinov.March2010.pdf 

See also Will Allen’s list of papers on participatory modelling

Computational Modelling of Public Policy: Reflections on Practice

Gilbert N, Ahrweiler P, Barbrook-Johnson P, et al. (2018) Computational Modelling of Public Policy: Reflections on Practice. Journal of Artificial Societies and Social Simulation 21: 1–14. pdf copy available

Abstract: Computational models are increasingly being used to assist in developing, implementing and evaluating public policy. This paper reports on the experience of the authors in designing and using computational models of public policy (‘policy models’, for short). The paper considers the role of computational models in policy making, and some of the challenges that need to be overcome if policy models are to make an effective contribution. It suggests that policy models can have an important place in the policy process because they could allow policy makers to experiment in a virtual world, and have many advantages compared with randomised control trials and policy pilots. The paper then summarises some general lessons that can be extracted from the authors’ experience with policy modelling. These general lessons include the observation that often the main benefit of designing and using a model is that it provides an understanding of the policy domain, rather than the numbers it generates; that care needs to be taken that models are designed at an appropriate level of abstraction; that although appropriate data for calibration and validation may sometimes be in short supply, modelling is often still valuable; that modelling collaboratively and involving a range of stakeholders from the outset increases the likelihood that the model will be used and will be fit for purpose; that attention needs to be paid to effective communication between modellers and stakeholders; and that modelling for public policy involves ethical issues that need careful consideration. The paper concludes that policy modelling will continue to grow in importance as a component of public policy making processes, but if its potential is to be fully realised, there will need to be a melding of the cultures of computational modelling and policy making.

Selected quotes: For these reasons, the ability to make ‘point predictions’, i.e. forecasts of specific values at a specific time in the future, is rarely possible. More possible is a prediction that some event will or will not take place, or qualitative statements about the type or direction of change of values. Understanding what sort of unexpected outcomes can emerge, and something of the nature of how these arise, also helps design policies that can be responsive to unexpected outcomes when they do arise. It can be particularly helpful in changing environments to use the model to explore what might happen under a range of possible, but different, potential futures – without any commitment about which of these may eventually transpire. Even more valuable is a finding that the model shows that certain outcomes could not be achieved given the assumptions of the model. An example of this is the use of a whole system energy model to develop scenarios that meet the decarbonisation goals set by the EU for 2050 (see, for example, RAENG 2015).
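
The scenario-exploration point is easy to operationalise: instead of one point prediction, sweep a key assumption across a range of possible futures. A minimal sketch with a toy diffusion model (not one of the authors’ policy models; all values invented):

```python
# Exploring outcomes under a range of assumed adoption rates,
# without committing to any single forecast.
def adopters_after(rate: float, steps: int = 10, pop: float = 1000.0) -> int:
    """Toy logistic diffusion: adopters recruit in proportion to rate."""
    a = 10.0
    for _ in range(steps):
        a += rate * a * (1 - a / pop)
    return round(a)

for rate in (0.1, 0.3, 0.5, 0.7):   # alternative possible futures
    print(f"adoption rate {rate}: ~{adopters_after(rate)} adopters")
```

The output is a band of plausible outcomes rather than a single number, matching the authors’ advice on what computational policy models can honestly deliver.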

Rick Davies comment: A concise and very informative summary with many useful references. Definitely worth reading! I like the big emphasis on the need for ongoing collaboration and communication between model developers and their clients and other model stakeholders. However, I would have liked to see some discussion of the pros and cons of different approaches to modelling, e.g. agent-based models vs Fuzzy Cognitive Mapping and other approaches, not just examples of different modelling applications, useful as they were.

See also: Uprichard, E and Penn, A (2016) Dependency Models: A CECAN Evaluation and Policy Practice Note for policy analysts and evaluators. CECAN. Available at: https://www.cecan.ac.uk/sites/default/files/2018-01/EMMA%20PPN%20v1.0.pdf (accessed 6 June 2018).

Representing Theories of Change: Technical Challenges and Evaluation Consequences

CEDIL – Centre for Evaluation Lecture Series
The Centre of Excellence for Development Impact and Learning (CEDIL) and the Centre for Evaluation host a lecture series addressing methods and innovation in primary studies.

Watch the live-streamed lecture here

London School of Hygiene and Tropical Medicine. Lecture Two – Wednesday 30th May 2018 – Dr Rick Davies, 12:45–14:00, Jerry Morris B, LSHTM, 15-17 Tavistock Place, London, WC1H 9SH

“This lecture will summarise the main points of a paper of the same name. That paper looks at the technical issues associated with the representation of Theories of Change and the implications of design choices for the evaluability of those theories. The focus is on the description of connections between events, rather than the events themselves, because this is seen as a widespread design weakness. Using examples and evidence from a range of Internet sources, six structural problems are described, along with their consequences for evaluation. The paper then outlines six different ways of addressing these problems, which could be used by programme designers and by evaluators. These solutions range from simple-to-follow advice on designing more adequate diagrams, to the use of specialist software for the manipulation of much more complex static and dynamic network models. The paper concludes with some caution, speculating on why the design problems are so endemic but also pointing a way forward. Three strands of work are identified that CEDIL and DFID could invest in to develop solutions identified in the paper.”

The paper referred to in the lecture was commissioned by CEDIL and is now pending publication in a special issue of an evaluation journal.
