Diversity and Complexity

by Scott Page. Princeton University Press, 14 July 2011, 296 pages. Available on Google Books.

Abstract: This book provides an introduction to the role of diversity in complex adaptive systems. A complex system, such as an economy or a tropical ecosystem, consists of interacting adaptive entities that produce dynamic patterns and structures. Diversity plays a different role in a complex system than it does in an equilibrium system, where it often merely produces variation around the mean for performance measures. In complex adaptive systems, diversity makes fundamental contributions to system performance. Scott Page gives a concise primer on how diversity happens, how it is maintained, and how it affects complex systems. He explains how diversity underpins system-level robustness, allowing for multiple responses to external shocks and internal adaptations; how it provides the seeds for large events by creating outliers that fuel tipping points; and how it drives novelty and innovation. Page looks at the different kinds of diversity (variations within and across types, and distinct community compositions and interaction structures) and covers the evolution of diversity within complex systems and the factors that determine the amount of maintained diversity within a system. The book provides a concise and accessible introduction, shows how diversity underpins robustness and fuels tipping points, and covers all types of diversity: the essential primer on diversity in complex adaptive systems.

RD Comment: This book is very useful for thinking about the measurement of diversity. In 2000 I wrote a paper “Does Empowerment Start At Home? And If So, How Will We Recognise It?” in which I argued that…

“At the population level, diversity of behaviour can be seen as a gross indicator of agency (of the ability to make choices), relative to homogenous behaviour by the same set of people. Diversity of behaviour suggests there is a range of possibilities which individuals can pursue. At the other extreme is standardisation of behaviour, which we often associate with limited choice. The most notable example is perhaps that of an army. An army is a highly organised structure where individuality is not encouraged, and where standardised and predictable behaviour is very important. Like the term “NGO” or “non-profit”, diversity is defined by something that it is not – a condition where there is no common constraint, which would otherwise lead to a homogeneity of response. Homogeneity of behaviour may arise from various sources of constraint. A flood may force all farmers in a large area to move their animals to the high ground. Everybody’s responses are the same, when compared to what they would be doing on a normal day. At a certain time of the year all farmers may be planting the same crop. Here homogeneity of practice may reflect common constraints arising from a combination of sources: the nature of the physical environment, and the nature of particular local economies. Constraints on diversity can also arise within the assisting organisation. Credit programs can impose rules on loan use, specific repayment schedules and loan terms, as well as limiting when access to credit is available, or how quickly approval will be given.”
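To make the idea of diversity of behaviour as a measurable population-level indicator concrete, here is a minimal sketch of two standard diversity measures, Shannon entropy and Simpson's diversity index. Neither measure is prescribed by the book or the paper quoted above, and the crop-choice data below are invented for illustration:

```python
from collections import Counter
from math import log

def shannon_entropy(observations):
    """Shannon entropy (in bits) of the distribution of observed behaviours.
    Zero when everyone behaves identically; larger when behaviour is varied."""
    counts = Counter(observations)
    n = len(observations)
    return sum((c / n) * log(n / c, 2) for c in counts.values())

def simpson_index(observations):
    """Simpson's diversity index: the probability that two individuals drawn
    at random (with replacement) exhibit different behaviours."""
    counts = Counter(observations)
    n = len(observations)
    return 1 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical example: crop choices of eight farmers
uniform = ["rice"] * 8  # homogeneous behaviour, e.g. under a common constraint
varied = ["rice", "maize", "rice", "beans",
          "maize", "cassava", "rice", "beans"]

print(shannon_entropy(uniform), simpson_index(uniform))  # both 0.0
print(shannon_entropy(varied), simpson_index(varied))    # both well above 0
```

On this reading, a fall in such an index across a population (relative to a baseline) would flag a narrowing of the range of behaviours people can pursue, which is the sense in which diversity serves as a gross indicator of agency.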

See also…

Measuring Results: A GSDRC Topic Guide

Available as linked pages on the Governance and Social Development Resource Centre (GSDRC) website, as of August 2011

The guide is designed to provide a quick and easy way for development professionals to keep in touch with key debates and critical issues in the field of monitoring and evaluation. It will be updated on a quarterly basis.

About this guide
“How can the impact of governance and social development programmes be assessed with a view to improving their efficiency and effectiveness? What particular challenges are involved in monitoring and evaluating development interventions, and how can these be addressed? How can the ‘value for money’ of a particular intervention be determined?

Monitoring and evaluation (M&E) is vital to ensuring that lessons are learned in terms of what works, what does not, and why. M&E serves two main functions: 1) it builds accountability by demonstrating good use of public funds; and 2) it supports learning by contributing to knowledge about how and why programmes lead to intended (or unintended) outcomes. There can sometimes be a tension between these functions.

This guide introduces some of the core debates and considerations for development practitioners involved in designing and managing M&E activities. It introduces key tools and approaches, provides case studies of applying different methodological approaches, and presents lessons learned from international experience of M&E in a range of developing country contexts. While the guide focuses on M&E for governance and social development programmes, it has relevance for all programmes.

The guide was originally prepared by Claire Mcloughlin, and a comprehensive update was undertaken by Oliver Walton in July 2011. The GSDRC appreciates the contributions of Claire Vallings and Lina Payne (DFID) and Hugh Waddington and colleagues at 3ie. Comments, questions or documents for consideration should be sent to enquiries@gsdrc.org.”

The Clash of the Counter-bureaucracy and Development

“In this essay, Andrew Natsios describes what he sees as the most disruptive obstacles to development work in agencies such as USAID: layers and layers of bureaucracy. He gives a first-hand account of how this “counter-bureaucracy” disfigures USAID’s development practice and even compromises U.S. national security objectives. Most of all, he argues, the counter-bureaucracy’s emphasis on easy measurement is at odds with the fact that transformational programs are often the least measurable and involve elements of risk and uncertainty.

To overcome counter-bureaucracy barriers, Natsios suggests implementing a new measurement system, reducing the layers of oversight and regulation, and aligning programmatic goals with organizational incentives. Unless policymakers address the issue, he says, U.S. aid programs will be unable to implement serious development programs while complying with the demands of Washington.”

Revised 07-13-2010


The Big Push Back (and push forward)

“On the 22nd September, Rosalind Eyben organised a meeting of some seventy development practitioners and researchers worried about the current trend for funding organisations to support only those programmes designed to deliver easily measurable results, although these may not support transformative processes of positive and sustainable changes in people’s lives.

Following on from a major conference in May in the Netherlands about evaluative practices in relation to social transformation (http://evaluationrevisited.wordpress.com/), the meeting took
the first steps in strategizing collectively in support of these practices.” Attached is Rosalind’s brief report of the meeting.

PS: 11 October 2010. See the latest posting by Ros Eyben on this topic here, on the Hauser Centre blog

Measuring Results for Dutch Development Aid: Approaches and Future Directions

Date: October 4-7, 2010
Venue: Royal Tropical Institute, Amsterdam

The International Institute of Social Studies and the Amsterdam Institute for International Development invite applications and submissions for a training and conference event, “Measuring Results for Dutch Development Aid: Approaches and Future Directions”, with financial support from the Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation.

Participation is free of charge, but places are limited.
Deadline for applications: September 10, 2010
Click here to apply

Objectives: Share results from and experiences with impact evaluation in developing countries, and discuss their relevance for Dutch development cooperation.

Target Audiences: Researchers, NGOs, consulting companies and policy makers in the Netherlands conducting or using impact evaluation to study the effectiveness of development assistance.

Confirmed speakers: Dr. Howard White, Director of the International Initiative for Impact Evaluation (3ie).
Dr. Paul Gertler, Professor of Economics, University of California, Berkeley.
Dr. Sulley Gariba, Executive Director, Institute for Policy Alternatives, Ghana.
Prof. Ruerd Ruben, Director of the Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation (starting 1 September).

Submit a paper (optional): Contributed papers are sought in the areas of (1) completed impact evaluations, (2) reviews of impact evaluations in a particular sector, and (3) position papers on approaches to impact evaluation in relation to decision making.

Selection criteria: Quality of submission and/or professional link with result assessment for development assistance and/or participation in the impact evaluation training.

Maximum number of participants: 100