On the usefulness of deliberate (but bounded) randomness in decision making

 

An introduction

In many spheres of human activity, relevant information may be hard to find, and its quality may vary. Human capacities to assess that information objectively are also limited and variable. Extreme cases may be easy to assess, e.g. projects or research that is definitely worth/not worth funding, or papers that are definitely worth/not worth publishing. But between these extremes there may be substantial uncertainty, and thus room for tacit assumptions and unrecognised biases to influence judgements. In some fields this zone of uncertainty may be quite large (see Adam, 2019 below), so the consequences at stake can be substantial. This is the territory where a number of recent papers have argued that an explicitly random decision-making process may be the best approach to take.

After you have scanned the references below, continue on to some musings about the implications for how we think about complexity.

The literature (a sample)

  • Osterloh, M., & Frey, B. S. (2020, March 9). To ensure the quality of peer reviewed research introduce randomness. Impact of Social Sciences. https://blogs.lse.ac.uk/impactofsocialsciences/2020/03/09/to-ensure-the-quality-of-peer-reviewed-research-introduce-randomness/  
    • Why random selection of contributions to which the referees do not agree? This procedure reduces the “conservative bias”, i.e. the bias against unconventional ideas. Where there is uncertainty over the quality of a contribution, referees have little evidence to draw on in order to make accurate evaluations. However, unconventional ideas may well yield high returns in the future. Under these circumstances a randomised choice among the unorthodox contributions is advantageous.
    • …two [possible] types of error: type I errors (“reject errors”) implying that a correct hypothesis is rejected, and type 2 errors implying that a false hypothesis is accepted (“accept errors”). The former matters more than the latter. “Reject errors” stop promising new ideas, sometimes for a long time, while “accept errors” lead to a waste of money, but may be detected soon once published. This is the reason why it is more difficult to identify “reject errors” than “accept errors”. Through randomisation the risks of “reject errors” are diversified.
  • Osterloh, M., & Frey, B. S. (2020). How to avoid borrowed plumes in academia. Research Policy, 49(1), 103831. https://doi.org/10.1016/j.respol.2019.103831 Abstract: Publications in top journals today have a powerful influence on ac…
  • Liu, M., Choy, V., Clarke, P., Barnett, A., Blakely, T., & Pomeroy, L. (2020). The acceptability of using a lottery to allocate research funding: A survey of applicants. Research Integrity and Peer Review, 5(1), 3. https://doi.org/10.1186/s41073-019-0089-z
    • Background: The Health Research Council of New Zealand is the first major government funding agency to use a lottery to allocate research funding for their Explorer Grant scheme. …  the Health Research Council of New Zealand wanted to hear from applicants about the acceptability of the randomisation process and anonymity of applicants.   The survey asked about the acceptability of using a lottery and if the lottery meant researchers took a different approach to their application. Results:… There was agreement that randomisation is an acceptable method for allocating Explorer Grant funds with 63% (n = 79) in favour and 25% (n = 32) against. There was less support for allocating funds randomly for other grant types with only 40% (n = 50) in favour and 37% (n = 46) against. Support for a lottery was higher amongst those that had won funding. Multiple respondents stated that they supported a lottery when ineligible applications had been excluded and outstanding applications funded, so that the remaining applications were truly equal. Most applicants reported that the lottery did not change the time they spent preparing their application. Conclusions: The Health Research Council’s experience through the Explorer Grant scheme supports further uptake of a modified lottery.
  • Roumbanis, L. (2019). Peer Review or Lottery? A Critical Analysis of Two Different Forms of Decision-making Mechanisms for Allocation of Research Grants. Science, Technology, & Human Values, 44(6), 994–1019. https://doi.org/10.1177/0162243918822744
  • Adam, D. (2019). Science funders gamble on grant lotteries. A growing number of research agencies are assigning money randomly. Nature, 575(7784), 574–575. https://doi.org/10.1038/d41586-019-03572-7
    • ….says that existing selection processes are inefficient. Scientists have to prepare lengthy applications, many of which are never funded, and assessment panels spend most of their time sorting out the specific order in which to place mid-ranking ideas. Low- and high-quality applications are easy to rank, she says. “But most applications are in the midfield, which is very big”
    • The fund tells applicants how far they got in the process, and feedback from them has been positive, he says. “Those that got into the ballot and miss out don’t feel as disappointed. They know they were good enough to get funded and take it as the luck of the draw.”
  • Fang, F. C., & Casadevall, A. (2016). Research Funding: The Case for a Modified Lottery. MBio, 7(2). https://doi.org/10.1128/mBio.00422-16
    • ABSTRACT The time-honored mechanism of allocating funds based on ranking of proposals by scientific peer review is no longer effective, because review panels cannot accurately stratify proposals to identify the most meritorious ones. Bias has a major influence on funding decisions, and the impact of reviewer bias is magnified by low funding paylines. Despite more than a decade of funding crisis, there has been no fundamental reform in the mechanism for funding research. This essay explores the idea of awarding research funds on the basis of a modified lottery in which peer review is used to identify the most meritorious proposals, from which funded applications are selected by lottery. We suggest that a modified lottery for research fund allocation would have many advantages over the current system, including reducing bias and improving grantee diversity with regard to seniority, race, and gender.
    • See also: Casadevall, F. C. F. A. (2014, April 14). Taking the Powerball Approach to Funding Medical Research. Wall Street Journal. https://online.wsj.com/article/SB10001424052702303532704579477530153771424.html
  • Stone, P. (2011). The Luck of the Draw: The Role of Lotteries in Decision Making. In The Luck of the Draw: The Role of Lotteries in Decision Making. https://doi.org/10.1093/acprof:oso/9780199756100.001.0001
    • From the earliest times, people have used lotteries to make decisions–by drawing straws, tossing coins, picking names out of hats, and so on. We use lotteries to place citizens on juries, draft men into armies, assign students to schools, and even on very rare occasions, select lifeboat survivors to be eaten. Lotteries make a great deal of sense in all of these cases, and yet there is something absurd about them. Largely, this is because lottery-based decisions are not based upon reasons. In fact, lotteries actively prevent reason from playing a role in decision making at all. Over the years, people have devoted considerable effort to solving this paradox and thinking about the legitimacy of lotteries as a whole. However, these scholars have mainly focused on lotteries on a case-by-case basis, not as a part of a comprehensive political theory of lotteries. In The Luck of the Draw, Peter Stone surveys the variety of arguments proffered for and against lotteries and argues that they only have one true effect relevant to decision making: the “sanitizing effect” of preventing decisions from being made on the basis of reasons. While this rationale might sound strange to us, Stone contends that in many instances, it is vital that decisions be made without the use of reasons. By developing innovative principles for the use of lottery-based decision making, Stone lays a foundation for understanding when it is–and when it is not–appropriate to draw lots when making political decisions both large and small
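To make the mechanism concrete, here is a minimal sketch in Python of the kind of modified lottery Fang and Casadevall describe. The thresholds, slot count, and function names are my own illustrative assumptions, not taken from their paper: clear winners are funded outright, clearly weak proposals are screened out, and the uncertain midfield is decided by lot.

```python
import random

def modified_lottery(proposals, accept_above=8.0, reject_below=4.0,
                     total_slots=5, rng=None):
    """Allocate funding slots from (name, review_score) pairs.

    Proposals scoring at or above `accept_above` are funded outright;
    those below `reject_below` are excluded; any remaining slots are
    filled by drawing lots among the uncertain midfield.
    """
    rng = rng or random.Random()
    clear_accept = [name for name, score in proposals if score >= accept_above]
    midfield = [name for name, score in proposals
                if reject_below <= score < accept_above]
    funded = clear_accept[:total_slots]
    remaining = total_slots - len(funded)
    if remaining > 0 and midfield:
        # Lottery: every midfield proposal has an equal chance
        funded += rng.sample(midfield, min(remaining, len(midfield)))
    return funded
```

The key design choice is that reviewer judgement is trusted only at the extremes; within the midfield, where Adam (2019) notes most applications sit, every proposal has an equal chance.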

Randomness in other species

  • Drew, L. (2020). Random Search Wired Into Animals May Help Them Hunt. Quanta Magazine. Retrieved 2 February 2021, from https://www.quantamagazine.org/random-search-wired-into-animals-may-help-them-hunt-20200611/
    • Of special interest here is the description of Lévy walks, a variety of randomised movement where the frequency distribution of distances moved has one long tail. Lévy walks have been the subject of exploration across multiple disciplines, as seen in…
  • Reynolds, A. M. (2018). Current status and future directions of Lévy walk research. Biology Open, 7(1). https://doi.org/10.1242/bio.030106
    • Lévy walks are specialised forms of random walks composed of clusters of multiple short steps with longer steps between them… They are particularly advantageous when searching in uncertain or dynamic environments where the spatial scales of searching patterns cannot be tuned to target distributions… Nature repeatedly reveals the limits of our imagination. Lévy walks, once thought to be the preserve of probabilistic foragers, have now been identified in the movement patterns of human hunter-gatherers
Levy walk random versus Brownian motion random movement

Implications for thinking about complexity

Uncertainty about future states is a common characteristic of many complex systems, though not unique to them. One strategy that human organisations can use to deal with uncertainty is to build up capital reserves, thus enhancing longer-term resilience, albeit at the cost of more immediate efficiencies. From the first set of papers referenced above, it seems the deliberate and bounded use of randomness could provide a useful second option. The work being done on Lévy walks also suggests that there are interesting variations on randomisation that should be explored. Designers of search/optimisation algorithms have already headed this way. If you are interested, you can read further on the subject of what are called “Lévy flight” algorithms.
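As a rough sketch of the idea (the parameter names and values here are my own illustrative assumptions, not drawn from the papers above), the difference between Lévy and Brownian movement comes down to the tail of the step-length distribution. Power-law step lengths can be generated by inverse-transform sampling:

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, rng=None):
    """Step lengths from a power law P(l) ~ l^(-mu), with mu in (1, 3]:
    mostly short steps, punctuated by occasional very long jumps."""
    rng = rng or random.Random()
    # Inverse-transform sampling: u uniform in [0, 1) maps to a Pareto tail
    return [l_min * rng.random() ** (-1.0 / (mu - 1.0)) for _ in range(n)]

def brownian_steps(n, sigma=1.0, rng=None):
    """Gaussian (light-tailed) step lengths, for comparison."""
    rng = rng or random.Random()
    return [abs(rng.gauss(0.0, sigma)) for _ in range(n)]
```

Over many steps, the Lévy walker's occasional long jumps carry it into regions a Brownian walker would almost never reach in the same time, which is why the pattern suits searching when target locations are unknown.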

On a more light hearted note, I would be interested to hear from the Cynefin school on how comfortable they would be marketing this approach to “managing” uncertainty to the managers and leaders they seem keen to engage with.

Another thought… years ago I did an analysis of data that had been collected on development projects funded by DFID's Civil Society Challenge Fund. This included data on project proposals, proposal assessments, and project outcomes. I used RapidMiner Studio's Decision Tree module to develop predictive models of the achievement ratings of the funded projects. Somewhat disappointingly, I failed to identify any attributes of project proposals, or of how they had been initially assessed, that were good predictors of the subsequent performance of those projects. There are a number of possible reasons why this might be so. One may be the scale of the uncertainty gap between the evident likely failures and the evident likely successes. Various biases may have skewed judgements within this zone in a way that undermined the longer-term predictive use of the proposal screening and approval process. Somewhat paradoxically, if a lottery mechanism had instead been used for selecting fundable proposals in the uncertainty zone, the approval process might well have become a better predictor of eventual project performance.

Postscript: Subsequent finds…

  • The Powerball Revolution. By Malcolm Gladwell (n.d.). Revisionist History Season 5 Episode 3. Retrieved 7 April 2021, from http://revisionisthistory.com/episodes/44-the-powerball-revolution
    • On school student council lotteries in Bolivia
      • “Running for an office” and “Running an office” can be two very different things. Lotteries diminish the former and put the focus on the latter
      • “Its a more diverse group” that end up on the council, compared to those selected via election
      • “Nobody knows anything” – initial impressions of capacity are often not good predictors of leadership capacity. This runs contrary to the assumption that voters can be good predictors of capacity in office.
    • Medical research grant review and selection
      • Review scores of proposals are poor predictors of influential and innovative research (based on citation analysis), yet the system has been in use for decades.
    • A boarding school in New Jersey

 

Mapping the Standards of Evidence used in UK social policy.

Puttick, R. (2018). Mapping the Standards of Evidence used in UK social policy. Alliance for Useful Evidence.
“Our analysis focuses on 18 frameworks used by 16 UK organisations for judging evidence used in UK domestic social policy which are relevant to government, charities, and public service providers.
In summary:
• There has been a rapid proliferation of standards of evidence and other evidence frameworks since 2000. This is a very positive development and reflects the increasing sophistication of how evidence is generated and used in social policy.
• There are common principles underpinning them, particularly the shared goal of improving decision-making, but they often ask different questions, are engaging different audiences, generate different content, and have varying uses. This variance reflects the host organisation’s goals, which can be to inform its funding decisions, to make recommendations to the wider field, or to provide a resource for providers to help them evaluate.
• It may be expected that all evidence frameworks assess whether an intervention is working, but this is not always the case, with some frameworks assessing the quality of evidence, not the success of the intervention itself.
• The differences between the standards of evidence are often for practical reasons and reflect the host organisation’s goals. However, there is a need to consider more philosophical and theoretical tensions about what constitutes good evidence. We identified examples of different organisations reaching different conclusions about the same intervention; one thought it worked well, and the other was less confident. This is a problem: Who is right? Does the intervention work, or not? As the field develops, it is crucial that confusion and disagreement is minimised.
• One suggested response to minimise confusion is to develop a single set of standards of evidence. Although this sounds inherently sensible, our research has identified several major challenges which would need to be overcome to achieve this.
• We propose that the creation of a single set of standards of evidence is considered in greater depth through engagement with both those using standards of evidence, and those being assessed against them. This engagement would also help share learning and insights to ensure that standards of evidence are effectively achieving their goals.

Computational Modelling: Technological Futures

Council for Science and Technology & Government Office for Science, 2020. Available as pdf

Not the most thrilling/enticing title, but definitely of interest. Chapter 3 provides a good overview of different ways of building models. Well worth a read, and definitely readable.

Recommendation 2: Decision-makers need to be intelligent customers for models, and those that supply models should provide appropriate guidance to model users to support proper use and interpretation. This includes providing suitable model documentation detailing the model purpose, assumptions, sensitivities, and limitations, and evidence of appropriate quality assurance.


Chapters 1-3

The Alignment Problem: Machine Learning and Human Values

By Brian Christian. 334 pages. 2020 Norton. Author’s web page here

Brian Christian talking about his book on YouTube

RD comment: This is one of the most interesting and informative books I have read in the last few years. Totally relevant for evaluators thinking about the present and about future trends.

Releasing the power of digital data for development. A guide to new opportunities

Releasing the power of digital data for development: A guide to new opportunities. (2020). Frontier Technologies, UKAID, NIRAS.
Contents

Section 1  Executive Summary
Section 2 Introduction
Section 3 Understanding and navigating the new data landscape
Section 4  What is needed to release the new potential?
Section 5  Further considerations
Appendix 1: Data opportunities potentially useful now in testing  environments
Appendix 2: Bibliography and further reading
Appendix 3: Methodological notes

Executive Summary

There are 8 conclusions we discuss in this report.

1. There is justified excitement and proven benefits in the use of new digital data sources, particularly where timeliness of data is important or there are persistent gaps in traditional data sources.  This might include data from fragile and conflict-affected states, data supporting decision-making about marginalised population groups, or in finding solutions to address persistent ethical issues where traditional sources have not proved adequate.

2. In many cases, improvements in and greater access to traditional data sources could be more effective than just new data alone, including developing traditional data in tandem with new data sources. This includes innovations in digitising traditional data sources, supporting the sharing of data between and within organisations, and integrating the use of new data sources with traditional data.

3. Decision-making around the use of new data sources should be highly devolved by empowering individual staff and be focused on multiple dimensions of data quality, not least because there are no “one size fits all” rules that determine how new digital data sources fit to specific needs, subject matters or geographies. This could be supported by ensuring:
a. Research, innovation, and technical support are highly demand-led, driven by specific data user needs in specific contexts; and
b. Staff have accessible guidance that demystifies the complexities of new data sources, clarifies the benefits and risks that need to be managed, and allows them to be ‘data brokers’ confident in navigating the new data landscape, innovating in it, and coordinating the technical expertise of others.

The main report includes a description of the evidence and conclusions in a way that supports these aims, including a set of guides for staff about the most promising new data sources.

4. Where traditional data sources are failing to provide the detailed data needed, most new data sources provide a potential route to helping with the Agenda 2030 goal to ‘leave no-one behind,’ as often they can provide additional granularity on population sub-groups.  But, to avoid harming the interests of marginalised groups, strong ethical frameworks are needed, and affected people should be involved in decision-making about how data is processed and used. Action is also required to ensure strong data protection environments according to each type of new data and the contexts of its use.

5. New data sources with the highest potential added value for exploitation now, especially when combined with each other or traditional data sources, were found to be:
a. data from Earth Observation (EO) platforms (including satellites and drones)
b. passive location data from mobile phones

6. While there are specific limitations and risks in different circumstances, each of these data sources provides for significant gains in certain dimensions of data quality compared to some traditional sources and other new data sources. The use of Artificial Intelligence (AI) techniques, such as through machine learning, has high potential to add value to digital datasets in terms of improving aspects of data quality from many different sources, such as social media data, and particularly with large complex datasets and across multiple data sources.

7. Beyond the current time horizon, the most potential for emerging data sources is likely to come from:
• The next generation of Artificial Intelligence
• The next generation of Earth Observation platforms
• Privacy Preserving Data Sharing (PPDS) via the Cloud and
• the Internet of Things (IoT).
No other significant data sources, technologies or techniques were found with high potential to benefit FCDO’s work, which seems to be in line with its current research agenda and innovative activities. Some longer-term data prospects have been identified, and these could be monitored for increases in their potential in the future.

8. Several other factors are relevant to the optimal use of digital data sources which should be investigated and/or work in these areas maintained. These include important internal and external corporate developments, importantly including continued support to Open Data/ data sharing and enhanced data security systems to underpin it, learning across disciplinary boundaries with official statistics principles at the core, and continued support to capacity-building of national statistical systems in developing countries in traditional data and data innovation.

Brian Castellani’s Map of the Complexity Sciences

I have limited tolerance for “complexity babble”. That is, people talking about complexity in abstract, ungrounded and, in effect, practically inconsequential terms, and in ways that give no acknowledgement to the surrounding history of ideas.

So, I really appreciate the work Brian has put into his “Map of the Complexity Sciences”, produced in 2018, and thought it deserves wider circulation. Note that this is one of a number of iterations, and more are likely in the future. Click on the image to go to a bigger copy.

And please note: once you are on the bigger copy, clicking on any node will follow a hypertext link to another web page providing detailed information about that concept or person. A lot of work has gone into the construction of this map, and it deserves recognition.

Here is a discussion of an earlier iteration: https://www.theoryculturesociety.org/brian-castellani-on-the-complexity-sciences/

Linked Democracy Foundations, Tools, and Applications

Poblet, Marta, Pompeu Casanovas, and Víctor Rodríguez-Doncel. 2019. Linked Democracy: Foundations, Tools, and Applications. SpringerBriefs in Law. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-13363-4. Available in PDF form online

“It is only by mobilizing knowledge that is widely dispersed
across a genuinely diverse community that a free society can
hope to outperform its rivals while remaining true to its
values”

(Ober 2008, 5) cited on page v

Chapter 1 Introduction to Linked Data Abstract This chapter presents Linked Data, a new form of distributed data on the web which is especially suitable to be manipulated by machines and to share knowledge. By adopting the linked data publication paradigm, anybody can publish data on the web, relate it to data resources published by others and run artificial intelligence algorithms in a smooth manner. Open linked data resources may democratize the future access to knowledge by the mass of internet users, either directly or mediated through algorithms. Governments have enthusiastically adopted these ideas, which is in harmony with the broader open data movement.

Chapter 2 Deliberative and Epistemic Approaches to Democracy Abstract Deliberative and epistemic approaches to democracy are two important dimensions of contemporary democratic theory. This chapter studies these dimensions in the emerging ecosystem of civic and political participation tools, and appraises their collective value in a new distinct concept: linked democracy. Linked democracy is the distributed, technology-supported collective decision-making process, where data, information and knowledge are connected and shared by citizens online. Innovation and learning are two key elements of Athenian democracies which can be facilitated by the new digital technologies, and a cross-disciplinary research involving computational scientists and democratic theorists can lead to new theoretical insights of democracy

Chapter 3 Multilayered Linked Democracy An infinite amount of knowledge is waiting to be unearthed. —Hess and Ostrom (2007) Abstract Although confidence in democracy to tackle societal problems is falling, new civic participation tools are appearing supported by modern ICT technologies. These tools implicitly assume different views on democracy and citizenship which have not been fully analysed, but their main fault is their isolated operation in non-communicated silos. We can conceive public knowledge, like in Karl Popper’s World 3, as distributed and connected in different layers and by different connectors, much as it happens with the information in the web or the data in the linked data cloud. The interaction between people, technology and data is still to be defined before alternative institutions are founded, but the so called linked democracy should rest on different layers of interaction: linked data, linked platforms and linked ecosystems; a robust connectivity between democratic institutions is fundamental in order to enhance the way knowledge circulates and collective decisions are made.

Chapter 4 Towards a Linked Democracy Model Abstract In this chapter we lay out the properties of participatory ecosystems as linked democracy ecosystems. The goal is to provide a conceptual roadmap that helps us to ground the theoretical foundations for a meso-level, institutional theory of democracy. The identification of the basic properties of a linked democracy eco-system draws from different empirical examples that, to some extent, exhibit some of these properties. We then correlate these properties with Ostrom’s design principles for the management of common-pool resources (as generalised to groups cooperating and coordinating to achieve shared goals) to open up the question of how linked democracy ecosystems can be governed

Chapter 5 Legal Linked Data Ecosystems and the Rule of Law Abstract This chapter introduces the notions of meta-rule of law and socio-legal ecosystems to both foster and regulate linked democracy. It explores the way of stimulating innovative regulations and building a regulatory quadrant for the rule of law. The chapter summarises briefly (i) the notions of responsive, better and smart regulation; (ii) requirements for legal interchange languages (legal interoperability); (iii) and cognitive ecology approaches. It shows how the protections of the substantive rule of law can be embedded into the semantic languages of the web of data and reflects on the conditions that make possible their enactment and implementation as a socio-legal ecosystem. The chapter suggests in the end a reusable multi-levelled meta-model and four notions of legal validity: positive, composite, formal, and ecological.

Chapter 6 Conclusion Communication technologies have permeated almost every aspect of modern life, shaping a densely connected society where information flows follow complex patterns on a worldwide scale. The World Wide Web created a global space of information, with its network of documents linked through hyperlinks. And a new network is woven, the Web of Data, with linked machine-readable data resources that enable new forms of computation and more solidly grounded knowledge. Parliamentary debates, legislation, information on political parties or political programs are starting to be offered as linked data in rhizomatic structures, creating new opportunities for electronic government, electronic democracy or political deliberation. Nobody could foresee that individuals, corporations and government institutions alike would participate …(continues)

Participatory modelling and mental models

These are the topics covered by two papers I have come across today, courtesy of Peter Barbrook-Johnson, of Surrey University. Both papers provide good overviews of their respective fields.

Moon, K., Adams, V. M., Dickinson, H., Guerrero, A. M., Biggs, D., Craven, L., … Ross, H. (2019). Mental models for conservation research and practice. Conservation Letters, 1–11.

Abstract: Conservation practice requires an understanding of complex social-ecological processes of a system and the different meanings and values that people attach to them. Mental models research offers a suite of methods that can be used to reveal these understandings and how they might affect conservation outcomes. Mental models are representations in people’s minds of how parts of the world work. We seek to demonstrate their value to conservation and assist practitioners and researchers in navigating the choices of methods available to elicit them. We begin by explaining some of the dominant applications of mental models in conservation: revealing individual assumptions about a system, developing a stakeholder-based model of the system, and creating a shared pathway to conservation. We then provide a framework to “walk through” the stepwise decisions in mental models research, with a focus on diagram based methods. Finally, we discuss some of the limitations of mental models research and application that are important to consider. This work extends the use of mental models research in improving our ability to understand social-ecological systems, creating a powerful set of tools to inform and shape conservation initiatives.

PDF copy here

Voinov, A. (2018). Tools and methods in participatory modeling: Selecting the right tool for the job. Environmental Modelling and Software, 109, 232–255.

Abstract: Various tools and methods are used in participatory modelling, at different stages of the process and for different purposes. The diversity of tools and methods can create challenges for stakeholders and modelers when selecting the ones most appropriate for their projects. We offer a systematic overview, assessment, and categorization of methods to assist modelers and stakeholders with their choices and decisions. Most available literature provides little justification or information on the reasons for the use of particular methods or tools in a given study. In most of the cases, it seems that the prior experience and skills of the modelers had a dominant effect on the selection of the methods used. While we have not found any real evidence of this approach being wrong, we do think that putting more thought into the method selection process and choosing the most appropriate method for the project can produce better results. Based on expert opinion and a survey of modelers engaged in participatory processes, we offer practical guidelines to improve decisions about method selection at different stages of the participatory modeling process.

PDF copy here

Subjective measures in humanitarian analysis

A note for ACAPS, by Aldo Benini, A. (2018). PDF available at https://www.acaps.org/sites/acaps/files/resources/files/20180115_acaps_technical_note_subjective_measures_full_report.pdf

Purpose and motivation

This note seeks to sensitize analysts to the growing momentum of subjective methods and measures around, and eventually inside, the humanitarian field. It clarifies the nature of subjective measures and their place in humanitarian needs assessments. It weighs their strengths and challenges. It discusses, in considerable depth, a small number of instruments and methods that are ready, or have good potential, for humanitarian analysis.

Post World War II culture and society have seen an acceleration of subjectivity in all institutional realms, although at variable paces. The sciences responded with considerable lag. They have created new methodologies – “mixed methods” (quantitative and qualitative), “subjective measures”, self-assessments of all kinds – that claim an equal playing field with distant, mechanical objectivity. For the period 2000-2012, using the search term “subjective measure”, Google Scholar returns around 600 references per year; for the period 2013 – fall 2017, the figure quintuples to 3,000. Since 2012, the United Nations has been publishing the annual World Happiness Report; its first edition discusses validity and reliability of subjective measures at length.

Closer to the humanitarian domain, poverty measurement has increasingly appreciated subjective data. Humanitarian analysis is at the initial stages of feeling the change. Adding “AND humanitarian” to the above search term produces 8 references per year for the first period, and 40 for the second – a trickle, but undeniably an increase. Other searches confirm the intuition that something is happening below the surface; for instance, “mixed method AND humanitarian” returns 110 per year in the first, and 640 in the second period – a growth similar to that of “subjective measures”.

Still in some quarters subjectivity remains suspect. Language matters. Some collaborations on subjective measures have preferred billing them as “experience-based measures”. Who doubts experience? It is good salesmanship, but we stay with “subjective” unless the official name of the measure contains “experience”.

What follows 

We proceed as follows: In the foundational part, we discuss the nature of, motivation for, and reservations against, subjective measures. We provide illustrations from poverty measurement and from food insecurity studies. In the second part, we present three tools – scales, vignettes and hypothetical questions – with generic pointers as well as with specific case studies. We conclude with recommendations and by noting instruments that we have not covered, but which are likely to grow more important in years to come

Rick Davies comment: Highly recommended!

PRISM: TOOLKIT FOR EVALUATING THE OUTCOMES AND IMPACTS OF SMALL/MEDIUM-SIZED CONSERVATION PROJECTS

WHAT IS PRISM?

PRISM is a toolkit that aims to support small/medium-sized conservation projects to effectively evaluate the outcomes and impacts of their work.

The toolkit has been developed by a collaboration of several conservation NGOs with additional input from scientists and practitioners from across the conservation sector.

The toolkit is divided into four main sections:

Introduction and Key Concepts: Provides a basic overview of the theory behind evaluation relevant to small/medium-sized conservation projects

Designing and Implementing the Evaluation: Guides users through a simple, step by step process for evaluating project outcomes and impacts, including identifying what you need to evaluate, how to collect evaluation data, analysing/interpreting results and deciding what to do next.

Modules: Provides users with additional guidance and directs users towards methods for evaluating outcomes/impacts resulting from five different kinds of conservation action:

  • Awareness and Attitudes
  • Capacity Development
  • Livelihoods and Governance
  • Policy
  • Species and Habitat Management

Method factsheets: Outlines over 60 practical, easy-to-use methods and supplementary guidance factsheets for collecting, analysing and interpreting evaluation data.

Toolkit Website: https://conservationevaluation.org/
PDF copy of manual – download request form: https://conservationevaluation.org/download/