Aid on the Edge of Chaos: Rethinking International Cooperation in a Complex World
by Ben Ramalingam, Oxford University Press, 2013. Viewable in part via Google Books (and fully searchable by keyword).
Publisher's summary:
A groundbreaking book on the state of the aid business, bridging policy, practice and science. Gets inside the black box of aid to highlight critical flaws in the ways agencies learn, strategise, organise, and evaluate themselves. Shows how ideas from the cutting edge of complex systems science have been used to address social, economic and political issues, and how they can contribute to the transformation of aid. Written in an open, accessible style, with cartoons by a leading illustrator. Draws on workshops, conferences, over five years of research, and hundreds of interviews.
Rick Davies comments (but not a review): Where to start…? This is a big book, in size and ambition, but also in the breadth of the author's knowledge and contacts in the field. There have been many reviews of the book, so I will simply link to some here, to start with: Duncan Green (Oxfam), Tom Kirk (LSE), Nick Perkins (AllAfrica), Paul van Gardingen and Andrée Carter (SciDev.Net), Melissa Leach (Steps Centre), Owen Barder, Philip Ball, IRIN, New Scientist and Lucy Noonan (Clear Horizon). See also Ben's own Aid on the Edge of Chaos blog.
Evaluation issues are discussed in two sections: Watching the Watchman (pages 101-122), and Performance Dynamics, Dynamic Performance (pages 351-356). That is about 7% of the book as a whole, which is a bigger percentage than most development projects spend on evaluation! Of course there is a lot more in Ben's book that relates to evaluation outside these sections.
One view of the idea of systems being on the edge of chaos is that it is about organisations (biological and social) evolving to a point where they find a viable balance between sensitivity to new information and retention of past information (as embedded in existing structures and processes), i.e. their learning strategies. That said, what strikes me most about aid organisations, as a sector, is how stable they are. Perhaps way too stable. Mortality rates are very low compared to private sector enterprises. Does this suggest that, as a set, aid organisations are not as effective at learning as they could be?
I also wondered to what extent the idea of being on the edge of chaos (i.e. a certain level of complexity) could be operationalised/measured, and thus developed into something that was more than a metaphor. However, Ben and other authors (Melanie Mitchell) have highlighted the limitations of various attempts to measure complexity. In fact the very attempt to do so, at least as a single (i.e. one-dimensional) measure, seems somewhat ironic. But perhaps degrees of complexity could be mapped in a space defined by multiple measures? For example: (a) diversity of agents, (b) density of connections between them, (c) the degrees of freedom or agency each agent has. …a speculation, sketched below.
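To make that speculation slightly more concrete, here is a minimal Python sketch of such a multi-measure profile, using networkx. The three axes, and the node attributes "type" and "options", are my own illustrative assumptions, not measures proposed in the book:

```python
# Hypothetical sketch: a multi-dimensional "complexity profile" of a
# network of agents, rather than a single one-dimensional measure.
import networkx as nx

def complexity_profile(g: nx.Graph) -> dict:
    """Profile a network on three illustrative axes.

    Assumes each node carries a 'type' attribute (what kind of agent it is)
    and an 'options' attribute (how many actions it can choose between).
    """
    types = {data.get("type") for _, data in g.nodes(data=True)}
    options = [data.get("options", 1) for _, data in g.nodes(data=True)]
    return {
        "agent_diversity": len(types),        # (a) how many kinds of agent
        "connection_density": nx.density(g),  # (b) edges present / possible
        "mean_agent_freedom": sum(options) / g.number_of_nodes(),  # (c) agency
    }

# Toy usage: a donor, two NGOs and a community group
g = nx.Graph()
g.add_node("donor", type="funder", options=5)
g.add_node("ngo_a", type="ngo", options=3)
g.add_node("ngo_b", type="ngo", options=3)
g.add_node("community", type="beneficiary", options=2)
g.add_edges_from([("donor", "ngo_a"), ("donor", "ngo_b"), ("ngo_a", "community")])
print(complexity_profile(g))
```

Two systems could then be compared as points in this three-dimensional space, rather than ranked on a single scale.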
Ben has been kind enough to quote some of my views on complexity issues, including those on the representation of complexity (page 351). The limitations of linear Theories of Change (ToC) are discussed at various points in the book, and alternatives are explored, including network models and agent-based simulation models. While I am sympathetic to their wider use, I do continue to be surprised at how little complexity aid agency staff can actually cope with when presented with a ToC that has to be a working part of a Monitoring and Evaluation Framework for a project. And I have a background concern that the whole enthusiasm for ToCs these days still betrays a deep desire for plan-ability that in reality is at odds with the real world within which aid agencies work.
In his chapter on Dynamic Change Ben describes an initiative called Artificial Intelligence for Development and the attempt to use quantitative approaches and “big data” sources to understand more about the dynamics of development (e.g. market movements, migration, and more) as they occur, or at least shortly afterwards. Mobile phone usage is one of the data sets becoming more available in many locations around the world. I think this is fascinating stuff, but it is in stark contrast with my experience of the average development project, where there is little in the way of readily available big data that is or could be used for project management and wider lesson learning. Where there is survey data it is rarely publicly available, although the open data and transparency movements are starting to have some effect.
On the more positive side, where data is available, there are new “big data” approaches that agencies can use and adapt. There is now an array of data mining methods that can be used to inductively find patterns (clusters and associations) in data sets, some of which are free and open source (see RapidMiner, for example). While these searches can be informed by prior theories, they are not necessarily locked in by them – they are open to discovery of unexpected patterns and surprise. Whereas the average ToC is a relatively small and linear construct, data mining software can quickly and systematically explore relationships within much larger sets of attributes/measures describing the interventions, their targets and their wider context.
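By way of illustration, here is a minimal sketch of this kind of inductive search, in Python with scikit-learn rather than RapidMiner. The toy table of project attributes, and the cluster-then-compare step, are assumptions of mine, not a recipe from the book:

```python
# Minimal sketch: let the data suggest groupings, then check whether
# the discovered groups differ on an outcome measure.
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical attributes describing interventions and their context
projects = pd.DataFrame({
    "budget_usd_k":  [120, 450, 80, 600, 95, 310],
    "staff_count":   [4, 12, 3, 18, 5, 9],
    "partner_orgs":  [1, 5, 1, 7, 2, 4],
    "outcome_score": [0.3, 0.8, 0.2, 0.9, 0.4, 0.7],
})

# Unsupervised clustering: no prior theory locks in the result
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
projects["cluster"] = kmeans.fit_predict(projects.drop(columns="outcome_score"))

# Do the discovered clusters differ on the outcome measure?
print(projects.groupby("cluster")["outcome_score"].mean())
```

A real analysis would of course standardise the attributes and test many more cluster solutions, but even this toy version shows the shape of the exercise: patterns first, explanations afterwards.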
Some of the complexity science concepts described in the book provide limited added value, in my view. For example, the idea of a fitness landscape, which comes from evolutionary theory. Some of its proposed use, as in chapter 17, is almost a self-caricature: “Implementers first need to establish the overall space of possibilities for a given project, programme or policy, then ‘dynamically crawl the design space by simultaneously trying out design alternatives and then adapting the project sequentially based on the results’” (Pritchett et al.). On the other hand, there were some ideas I would definitely like to follow up on, most notably agent-based modelling, especially participatory agent-based modelling (pages 175-80, 283-95). Simulations are evaluable in two ways: by analysing their fit with historic data, and by checking the accuracy of their predictions of future data points. But they do require data, and that perhaps is an issue that could be explored a bit more. When facing uncertain futures, and when using a portfolio of strategies to cope with that uncertainty, a lot more data is needed than when pursuing a single intervention in a more stable and predictable environment.
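To illustrate what “evaluable” means here, a toy Python sketch: a one-parameter adoption model, crudely calibrated against made-up historic data, whose remaining steps become forward predictions that could be scored against new observations. The model, the parameter and the data are entirely hypothetical:

```python
# Toy agent-based simulation, evaluable in both senses: fit the single
# parameter to historic data, then score the model's forward predictions.
import random

def simulate_adoption(p_influence: float, n_agents: int = 100,
                      steps: int = 10, seed: int = 0) -> list:
    """Each step, a non-adopter adopts with probability proportional to
    the current share of adopters. Returns the adoption share per step."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    adopted[0] = True  # one initial adopter
    shares = []
    for _ in range(steps):
        share = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p_influence * share:
                adopted[i] = True
        shares.append(sum(adopted) / n_agents)
    return shares

historic = [0.01, 0.02, 0.05, 0.10, 0.18, 0.30]  # made-up observed data

# Crude calibration: pick the parameter minimising squared error on history
best_p = min((p / 100 for p in range(1, 100)),
             key=lambda p: sum((s - h) ** 2
                               for s, h in zip(simulate_adoption(p), historic)))

# The steps beyond the historic record are predictions, checkable later
print(best_p, simulate_adoption(best_p)[len(historic):])
```

Even this toy needs six observed data points to calibrate a single parameter; a portfolio of strategies in an uncertain environment would need correspondingly more. [end of ramble :-) ]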