Five ways to ensure that models serve society: A manifesto

Saltelli, A., Bammer, G., Bruno, I., Charters, E., Di Fiore, M., Didier, E., Espeland, W. N., Kay, J., Lo Piano, S., Mayo, D., Pielke, R., Jr, Portaluri, T., Porter, T. M., Puy, A., Rafols, I., Ravetz, J. R., Reinert, E., Sarewitz, D., Stark, P. B., … Vineis, P. (2020). Five ways to ensure that models serve society: A manifesto. Nature, 582(7813), 482–484. https://doi.org/10.1038/d41586-020-01812-9

The five ways:

    1. Mind the assumptions
      • “One way to mitigate these issues is to perform global uncertainty and sensitivity analyses. In practice, that means allowing all that is uncertain — variables, mathematical relationships and boundary conditions — to vary simultaneously as runs of the model produce its range of predictions. This often reveals that the uncertainty in predictions is substantially larger than originally asserted”
    2. Mind the hubris
      • “Most modellers are aware that there is a tradeoff between the usefulness of a model and the breadth it tries to capture. But many are seduced by the idea of adding complexity in an attempt to capture reality more accurately. As modellers incorporate more phenomena, a model might fit better to the training data, but at a cost. Its predictions typically become less accurate”
    3. Mind the framing
      • “Match purpose and context. Results from models will at least partly reflect the interests, disciplinary orientations and biases of the developers. No one model can serve all purposes.”
    4. Mind the consequences
      • “Quantification can backfire. Excessive regard for producing numbers can push a discipline away from being roughly right towards being precisely wrong. Undiscriminating use of statistical tests can substitute for sound judgement. By helping to make risky financial products seem safe, models contributed to derailing the global economy in 2007–08 (ref. 5).”
    5. Mind the unknowns
      • “Acknowledge ignorance. For most of the history of Western philosophy, self-awareness of ignorance was considered a virtue, the worthy object of intellectual pursuit”

“Ignore the five, and model predictions become Trojan horses for unstated interests and values”

“Models’ assumptions and limitations must be appraised openly and honestly. Process and ethics matter as much as intellectual prowess”

“Mathematical models are a great way to explore questions. They are also a dangerous way to assert answers. Asking models for certainty or consensus is more a sign of the difficulties in making controversial decisions than it is a solution, and can invite ritualistic use of quantification”
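
The manifesto’s first point, on global uncertainty and sensitivity analysis, lends itself to a worked illustration. The sketch below is not from the paper: it uses a deliberately simple toy model and invented uncertainty ranges, varies all the uncertain inputs simultaneously via Monte Carlo sampling, and reports the spread of predictions together with a crude input-output correlation as a first-pass sensitivity screen.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000

# Toy model: a prediction that depends on three uncertain inputs.
# In a real analysis these would be the model's parameters, boundary
# conditions and structural choices, each with its own uncertainty range.
def toy_model(growth_rate, contact_rate, intervention_effect):
    return 1000 * growth_rate * contact_rate * (1 - intervention_effect)

# Vary everything that is uncertain *simultaneously*, not one input at a time.
growth_rate = rng.uniform(0.8, 1.2, n_runs)
contact_rate = rng.uniform(0.5, 1.5, n_runs)
intervention_effect = rng.uniform(0.1, 0.6, n_runs)

predictions = toy_model(growth_rate, contact_rate, intervention_effect)

print(f"Median prediction: {np.median(predictions):.0f}")
print(f"90% interval: {np.percentile(predictions, 5):.0f} "
      f"to {np.percentile(predictions, 95):.0f}")

# Crude sensitivity screening: correlation of each input with the output.
for name, values in [("growth_rate", growth_rate),
                     ("contact_rate", contact_rate),
                     ("intervention_effect", intervention_effect)]:
    r = np.corrcoef(values, predictions)[0, 1]
    print(f"corr({name}, prediction) = {r:+.2f}")
```

Even on this toy example the 90% interval is far wider than any single “best estimate” run would suggest, which is precisely the manifesto’s warning.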

A broken system – why literature searching needs a FAIR revolution

Gusenbauer, Michael, and Neal R. Haddaway. 2019. ‘Which Academic Search Systems Are Suitable for Systematic Reviews or Meta-Analyses? Evaluating Retrieval Qualities of Google Scholar, PubMed, and 26 Other Resources’. Research Synthesis Methods.

Haddaway, Neal, and Michael Gusenbauer. 2020. ‘A Broken System – Why Literature Searching Needs a FAIR Revolution’. LSE (blog). 3 February 2020.

“… searches on Google Scholar are neither reproducible, nor transparent. Repeated searches often retrieve different results and users cannot specify detailed search queries, leaving it to the system to interpret what the user wants.

However, systematic reviews in particular need to use rigorous, scientific methods in their quest for research evidence. Searches for articles must be as objective, reproducible and transparent as possible. With systems like Google Scholar, searches are not reproducible – a central tenet of the scientific method. 

Specifically, we believe there is a very real need to drastically overhaul how we discover research, driven by the same ethos as in the Open Science movement. The FAIR data principles offer an excellent set of criteria that search system providers can adapt to make their search systems more adequate for scientific search, not just for systematic searching, but also in day-to-day research discovery:

  • Findable: Databases should be transparent in how search queries are interpreted and in the way they select and rank relevant records. With this transparency researchers should be able to choose fit-for-purpose databases based on their merits.
  • Accessible: Databases should be free-to-use for research discovery (detailed analysis or visualisation could require payment). This way researchers can access all knowledge available via search.
  • Interoperable: Search results should be readily exportable in bulk for integration into evidence synthesis and citation network analysis (similar to the concept of ‘research weaving’ proposed by Shinichi Nakagawa and colleagues). Standardised export formats help analysis across databases.
  • Reusable: Citation information (including abstracts) should not be restricted by copyright to permit reuse/publication of summaries/text analysis etc.

Rick Davies comment: I highly recommend using Lens.org, a search facility mentioned in the second paper above.

Predict science to improve science

DellaVigna, Stefano, Devin Pope, and Eva Vivalt. 2019. ‘Predict Science to Improve Science’. Science 366 (6464): 428–29.

Selected quotes follow:

The limited attention paid to predictions of research results stands in contrast to a vast literature in the social sciences exploring people’s ability to make predictions in general.

We stress three main motivations for a more systematic collection of predictions of research results. 1. The nature of scientific progress. A new result builds on the consensus, or lack thereof, in an area and is often evaluated for how surprising, or not, it is. In turn, the novel result will lead to an updating of views. Yet we do not have a systematic procedure to capture the scientific views prior to a study, nor the updating that takes place afterward.

2. A second benefit of collecting predictions is that they can not only reveal when results are an important departure from expectations of the research community and improve the interpretation of research results, but they can also potentially help to mitigate publication bias. It is not uncommon for research findings to be met by claims that they are not surprising. This may be particularly true when researchers find null results, which are rarely published even when authors have used rigorous methods to answer important questions (15). However, if priors are collected before carrying out a study, the results can be compared to the average expert prediction, rather than to the null hypothesis of no effect. This would allow researchers to confirm that some results were unexpected, potentially making them more interesting and informative because they indicate rejection of a prior held by the research community; this could contribute to alleviating publication bias against null results.
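
As a toy illustration of that comparison (all numbers below are invented), the same estimate can be unremarkable against the conventional null of zero yet clearly depart from the average expert prediction, which is what would make the null result informative.

```python
import numpy as np

# Invented numbers for illustration only.
expert_priors = np.array([0.04, 0.06, 0.05, 0.07, 0.03, 0.05, 0.05])  # predicted effect sizes
observed_effect = 0.00
standard_error = 0.02

prior_mean = expert_priors.mean()

# Conventional comparison: is the effect different from zero?
z_vs_null = observed_effect / standard_error

# Comparison proposed in the quote: is the effect different from what
# the research community expected before the study was run?
z_vs_prior = (observed_effect - prior_mean) / standard_error

print(f"Mean expert prediction: {prior_mean:.3f}")
print(f"z against the null of zero:      {z_vs_null:+.2f}")
print(f"z against the mean expert prior: {z_vs_prior:+.2f}")
```

Here a flat null result (z of 0 against zero) nonetheless rejects the community’s prior at z = -2.5.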


3. A third benefit of collecting predictions systematically is that it makes it possible to improve the accuracy of predictions. In turn, this may help with experimental design. For example, envision a behavioral research team consulted to help a city recruit a more diverse police department. The team has a dozen ideas for reaching out to minority applicants, but the sample size allows for only three treatments to be tested with adequate statistical power. Fortunately, the team has recorded forecasts for several years, keeping track of predictive accuracy, and they have learned that they can combine team members’ predictions, giving more weight to “superforecasters” (9). Informed by its longitudinal data on forecasts, the team can elicit predictions for each potential project and weed out those interventions judged to have a low chance of success or focus on those interventions with a higher value of information. In addition, the research results of those projects that did go forward would be more impactful if accompanied by predictions that allow better interpretation of results in light of the conventional wisdom.
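
The weighting idea in that example can be sketched in a few lines. Inverse historical error weighting, shown below with invented numbers, is just one simple scheme; the superforecasting literature the authors cite uses more elaborate aggregation.

```python
import numpy as np

# Hypothetical track record: mean absolute error of each team member's
# past forecasts (lower = historically more accurate).
past_mae = np.array([0.10, 0.25, 0.08, 0.30, 0.15])

# Their forecasts for one candidate intervention (e.g. expected effect size).
forecasts = np.array([0.12, 0.30, 0.10, 0.45, 0.20])

# Give more influence to the team's "superforecasters" by weighting each
# forecast by the inverse of that member's historical error.
weights = 1.0 / past_mae
weights /= weights.sum()

print(f"Unweighted mean forecast:   {forecasts.mean():.3f}")
print(f"Accuracy-weighted forecast: {np.dot(weights, forecasts):.3f}")
```

Repeating this for each of the dozen candidate interventions would give a rough but principled basis for choosing the three to test.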

Rick Davies comment: I have argued, for years, that evaluators should start by eliciting clients’ and other stakeholders’ predictions of the outcomes of interest that the evaluation might uncover (e.g. Bangladesh, 2004). But I cannot yet think of any instance where my efforts have been successful. However, I have an upcoming opportunity and will try once again, perhaps armed with these two papers.

See also DellaVigna, Stefano, and Devin Pope. 2016. ‘Predicting Experimental Results: Who Knows What?’ National Bureau of Economic Research.

ABSTRACT
Academic experts frequently recommend policies and treatments. But how well do they anticipate the impact of different treatments? And how do their predictions compare to the predictions of non-experts? We analyze how 208 experts forecast the results of 15 treatments involving monetary and non-monetary motivators in a real-effort task. We compare these forecasts to those made by PhD students and non-experts: undergraduates, MBAs, and an online sample. We document seven main results. First, the average forecast of experts predicts quite well the experimental results. Second, there is a strong wisdom-of-crowds effect: the average forecast outperforms 96 per cent of individual forecasts. Third, correlates of expertise – citations, academic rank, field, and contextual experience – do not improve forecasting accuracy. Fourth, experts as a group do better than non-experts, but not if accuracy is defined as rank-ordering treatments. Fifth, measures of effort, confidence, and revealed ability are predictive of forecast accuracy to some extent, especially for non-experts. Sixth, using these measures we identify ‘superforecasters’ among the non-experts who outperform the experts out of sample. Seventh, we document that these results on forecasting accuracy surprise the forecasters themselves. We present a simple model that organizes several of these results and we stress the implications for the collection of forecasts of future experimental results.

See also: The Social Science Prediction Platform, developed by the same authors.

Twitter responses to this post:

Howard White@HowardNWhite Ask decision-makers what they expect research findings to be before you conduct the research to help assess the impact of the research. Thanks to @MandE_NEWS for the pointer. https://socialscienceprediction.org

Marc Winokur@marc_winokur Replying to @HowardNWhite and @MandE_NEWS For our RCT of DR in CO, the child welfare decision makers expected a “no harm” finding for safety, while other stakeholders expected kids to be less safe. When we found no difference in safety outcomes, but improvements in family engagement, the research impact was more accepted

Nature Human Behaviour editorial: “Tell it like it is”

22 January 2020. Aimed at researchers, but equally relevant to evaluators. Quoted in full below, available online here. Bold highlighting is mine

Every research paper tells a story, but the pressure to provide ‘clean’ narratives is harmful to the scientific endeavour. Research manuscripts provide an account of how their authors addressed a research question or questions, the means they used to do so, what they found and how the work (dis)confirms existing hypotheses or generates new ones. The current research culture is characterized by significant pressure to present research projects as conclusive narratives that leave no room for ambiguity or for conflicting or inconclusive results. The pressure to produce such clean narratives, however, represents a significant threat to validity and runs counter to the reality of what science looks like.

Prioritizing conclusive over transparent research narratives incentivizes a host of questionable research practices: hypothesizing after the results are known, selectively reporting only those outcomes that confirm the original predictions or excluding from the research report studies that provide contradictory or messy results. Each of these practices damages credibility and presents a distorted picture of the research that prevents cumulative knowledge.

During peer review, reviewers may occasionally suggest that the authors ‘reframe’ the reported work. While this is not problematic for exploratory research, it is inappropriate for confirmatory research—that is, research that tests pre-existing hypotheses. Altering the hypotheses or predictions of confirmatory research after the fact invalidates inference and renders the research fundamentally unreliable. Although these reframing suggestions are made in good faith, we will always overrule them, asking authors to present their hypotheses and predictions as originally intended.

Preregistration is being increasingly adopted across different fields as a means of preventing questionable research practices and increasing transparency. As a journal, we strongly support the preregistration of confirmatory research (and currently mandate registration for clinical trials). However, preregistration has little value if authors fail to abide by it or do not transparently report whether their project differs from what they preregistered and why. We ask that authors provide links to their preregistrations, specify the date of preregistration and transparently report any deviations from the original protocol in their manuscripts.

There is occasionally valid reason to deviate from the preregistered protocol, especially if that protocol did not have the benefit of peer review before the authors carried out their research (as in Registered Reports). For instance, it sometimes becomes apparent during peer review that a preregistered analysis is inappropriate or suboptimal. For all deviations from the preregistered protocol, we ask authors to indicate in their manuscripts how they deviated from their original plan and explain their reason for doing so (e.g., flaw, suboptimality, etc.). To ensure transparency, unless a preregistered analysis plan is unquestionably flawed, we ask that authors also report the results of their preregistered analyses alongside the new analyses.

Occasionally, authors may be tempted to drop a study from their report for reasons other than poor quality (or reviewers may make that recommendation)—for instance, because the results are incompatible with other studies reported in the paper. We discourage this practice; in multistudy research papers, we ask that authors report all of the work they carried out, regardless of outcome. Authors may speculate as to why some of their work failed to confirm their hypotheses and need to appropriately caveat their conclusions, but dropping studies simply exacerbates the file-drawer problem and presents the conclusions of research as more definitive than they are.

No research project is perfect; there are always limitations that also need to be transparently reported. In 2019, we made it a requirement that all our research papers include a limitations section, in which authors explain methodological and other shortcomings and explicitly acknowledge alternative interpretations of their findings.

Science is messy, and the results of research rarely conform fully to plan or expectation. ‘Clean’ narratives are an artefact of inappropriate pressures and the culture they have generated. We strongly support authors in their efforts to be transparent about what they did and what they found, and we commit to publishing work that is robust, transparent and appropriately presented, even if it does not yield ‘clean’ narratives.

Published online: 21 January 2020. https://doi.org/10.1038/s41562-020-0818-9

Mental models for conservation research and practice


Conservation Letters, February 2019. Katie Moon, Angela M. Guerrero, Vanessa M. Adams, Duan Biggs, Deborah A. Blackman, Luke Craven, Helen Dickinson, Helen Ross
https://conbio.onlinelibrary.wiley.com/doi/epdf/10.1111/conl.12642

Abstract: Conservation practice requires an understanding of complex social-ecological processes of a system and the different meanings and values that people attach to them. Mental models research offers a suite of methods that can be used to reveal these understandings and how they might affect conservation outcomes. Mental models are representations in people’s minds of how parts of the world work. We seek to demonstrate their value to conservation and assist practitioners and researchers in navigating the choices of methods available to elicit them. We begin by explaining some of the dominant applications of mental models in conservation: revealing individual assumptions about a system, developing a stakeholder-based model of the system, and creating a shared pathway to conservation. We then provide a framework to “walkthrough” the stepwise decisions in mental models research, with a focus on diagram-based methods. Finally, we discuss some of the limitations of mental models research and application that are important to consider. This work extends the use of mental models research in improving our ability to understand social-ecological systems, creating a powerful set of tools to inform and shape conservation initiatives.

Our paper aims to assist researchers and practitioners to navigate the choices available in mental models research methods. The paper is structured into three sections. The first section explores some of the dominant applications and thus value of mental models for conservation research and practice. The second section provides a “walk through” of the step-wise decisions that can be useful when engaging in mental models research, with a focus on diagram-based methods. We present a framework to assist in this “walk through,” which adopts a pragmatist perspective. This perspective focuses on the most appropriate strategies to understand and resolve problems, rather than holding to a firm philosophical position (e.g., Sil & Katzenstein, 2010). The third section discusses some of the limitations of mental models research and application.

1 INTRODUCTION

2 THE ROLE FOR MENTAL MODELS IN CONSERVATION

2.1 Revealing individual assumptions about a system

2.2 Developing a stakeholder-based model of the system

2.3 Creating a shared pathway to conservation

3 THE TYPE OF MENTAL MODEL NEEDED

4 ELICITING OR DEVELOPING CONCEPTS AND OBJECTS

5 MODELING RELATIONSHIPS WITHIN MENTAL MODELS

5.1 Mapping qualitative relationships

5.2 Quantifying qualitative relationships

5.3 Analyzing systems based on mental models

6 COMPARING MENTAL MODELS

7 LIMITATIONS OF MENTAL MODELS RESEARCH FOR CONSERVATION POLICY AND PRACTICE

8 ADVANCING MENTAL MODELS FOR CONSERVATION

Computational Modelling of Public Policy: Reflections on Practice

Gilbert N, Ahrweiler P, Barbrook-Johnson P, et al. (2018) Computational Modelling of Public Policy: Reflections on Practice. Journal of Artificial Societies and Social Simulation 21: 1–14. pdf copy available

Abstract: Computational models are increasingly being used to assist in developing, implementing and evaluating public policy. This paper reports on the experience of the authors in designing and using computational models of public policy (‘policy models’, for short). The paper considers the role of computational models in policy making, and some of the challenges that need to be overcome if policy models are to make an effective contribution. It suggests that policy models can have an important place in the policy process because they could allow policy makers to experiment in a virtual world, and have many advantages compared with randomised control trials and policy pilots. The paper then summarises some general lessons that can be extracted from the authors’ experience with policy modelling. These general lessons include the observation that often the main benefit of designing and using a model is that it provides an understanding of the policy domain, rather than the numbers it generates; that care needs to be taken that models are designed at an appropriate level of abstraction; that although appropriate data for calibration and validation may sometimes be in short supply, modelling is often still valuable; that modelling collaboratively and involving a range of stakeholders from the outset increases the likelihood that the model will be used and will be fit for purpose; that attention needs to be paid to effective communication between modellers and stakeholders; and that modelling for public policy involves ethical issues that need careful consideration. The paper concludes that policy modelling will continue to grow in importance as a component of public policy making processes, but if its potential is to be fully realised, there will need to be a melding of the cultures of computational modelling and policy making.

Selected quotes: For these reasons, the ability to make ‘point predictions’, i.e. forecasts of specific values at a specific time in the future, is rarely possible. More possible is a prediction that some event will or will not take place, or qualitative statements about the type or direction of change of values. Understanding what sort of unexpected outcomes can emerge and something of the nature of how these arise also helps design policies that can be responsive to unexpected outcomes when they do arise. It can be particularly helpful in changing environments to use the model to explore what might happen under a range of possible, but different, potential futures – without any commitment about which of these may eventually transpire. Even more valuable is a finding that the model shows that certain outcomes could not be achieved given the assumptions of the model. An example of this is the use of a whole system energy model to develop scenarios that meet the decarbonisation goals set by the EU for 2050 (see, for example, RAENG 2015).
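
A minimal sketch of that scenario-exploration logic follows. The toy model, scenario names and numbers are all invented; the point is only that running one model under several distinct potential futures can show which outcomes are reachable under its assumptions, without committing to a point prediction about which future will occur.

```python
# A toy stand-in for a policy model, run under several distinct potential
# futures ("scenarios") rather than used to produce a single point forecast.
def emissions_model(baseline, annual_reduction_rate, years=30):
    """Emissions after `years`, assuming a constant annual reduction rate."""
    return baseline * (1 - annual_reduction_rate) ** years

scenarios = {
    "slow technology uptake":   {"baseline": 100.0, "annual_reduction_rate": 0.01},
    "moderate policy effort":   {"baseline": 100.0, "annual_reduction_rate": 0.03},
    "aggressive policy effort": {"baseline": 100.0, "annual_reduction_rate": 0.06},
}

target = 20.0  # e.g. an 80% cut relative to the baseline

for name, params in scenarios.items():
    result = emissions_model(**params)
    verdict = "meets the target" if result <= target else "cannot meet the target"
    print(f"{name:<26} -> {result:5.1f}  ({verdict})")
```

Here only one of the three futures can reach the target at all, which is the kind of “this outcome cannot be achieved under these assumptions” finding the authors highlight.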

Rick Davies comment: A concise and very informative summary with many useful references. Definitely worth reading! I like the big emphasis on the need for ongoing collaboration and communication between model developers and their clients and other model stakeholders. However, I would have liked to see some discussion of the pros and cons of different approaches to modelling, e.g. agent-based models vs Fuzzy Cognitive Mapping and other approaches, rather than just examples of different modelling applications, useful as those were.

See also: Uprichard, E and Penn, A (2016) Dependency Models: A CECAN Evaluation and Policy Practice Note for policy analysts and evaluators. CECAN. Available at: https://www.cecan.ac.uk/sites/default/files/2018-01/EMMA%20PPN%20v1.0.pdf (accessed 6 June 2018).

Wiki Surveys: Open and Quantifiable Social Data Collection

by Matthew J. Salganik and Karen E. C. Levy, PLOS ONE
Published: May 20, 2015 https://doi.org/10.1371/journal.pone.0123483

Abstract: In the social sciences, there is a longstanding tension between data collection methods that facilitate quantification and those that are open to unanticipated information. Advances in technology now enable new, hybrid methods that combine some of the benefits of both approaches. Drawing inspiration from online information aggregation systems like Wikipedia and from traditional survey research, we propose a new class of research instruments called wiki surveys. Just as Wikipedia evolves over time based on contributions from participants, we envision an evolving survey driven by contributions from respondents. We develop three general principles that underlie wiki surveys: they should be greedy, collaborative, and adaptive. Building on these principles, we develop methods for data collection and data analysis for one type of wiki survey, a pairwise wiki survey. Using two proof-of-concept case studies involving our free and open-source website www.allourideas.org, we show that pairwise wiki surveys can yield insights that would be difficult to obtain with other methods.

Also explained in detail in this Vimeo video: https://vimeo.com/51369546
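
For a feel for the mechanics, here is a minimal sketch of pairwise wiki-survey data and a naive win-rate score. The ideas and votes are invented and the scoring is deliberately simplistic; the paper develops a more careful estimator for exactly this kind of data, one that accounts for which pairs each idea happened to face.

```python
from collections import defaultdict

# Each vote is (winning idea, losing idea) from one pairwise comparison.
# Ideas contributed by respondents simply start appearing in later votes.
votes = [
    ("more bike lanes", "free parking"),
    ("more bike lanes", "wider roads"),
    ("free parking", "wider roads"),
    ("congestion charge", "free parking"),   # idea added mid-survey
    ("congestion charge", "wider roads"),
    ("more bike lanes", "congestion charge"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Naive score: share of comparisons won.
for idea in sorted(appearances, key=lambda i: wins[i] / appearances[i], reverse=True):
    print(f"{idea:<18} {wins[idea] / appearances[idea]:.2f}")
```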

Case-Selection [for case studies]: A Diversity of Methods and Criteria

Gerring, J., Cojocaru, L., 2015. Case-Selection: A Diversity of Methods and Criteria. January 2015. Available as pdf

Excerpt: “Case-selection plays a pivotal role in case study research. This is widely acknowledged, and is implicit in the practice of describing case studies by their method of selection – typical, deviant, crucial, and so forth. It is also evident in the centrality of case-selection in methodological work on the case study, as witnessed by this symposium. By contrast, in large-N cross-case research one would never describe a study solely by its method of sampling. Likewise, sampling occupies a specialized methodological niche within the literature and is not front-and-center in current methodological debates. The reasons for this contrast are revealing and provide a fitting entrée to our subject.

First, there is relatively little variation in methods of sample construction for cross-case research. Most samples are randomly sampled from a known population or are convenience samples, employing all the data on the subject that is available. By contrast, there are myriad approaches to case-selection in case study research, and they are quite disparate, offering many opportunities for researcher bias in the selection of cases (“cherry-picking”).

Second, there is little methodological debate about the proper way to construct a sample in cross-case research. Random sampling is the gold standard and departures from this standard are recognized as inferior. By contrast, in case study research there is no consensus about how best to choose a case, or a small set of cases, for intensive study.

Third, the construction of a sample and the analysis of that sample are clearly delineated, sequential tasks in cross-case research. By contrast, in case study research they blend into one another. Choosing a case often implies a method of analysis, and the method of analysis may drive the selection of cases.

Fourth, because cross-case research encompasses a large sample – drawn randomly or incorporating as much evidence as is available – its findings are less likely to be driven by the composition of the sample. By contrast, in case study research the choice of a case will very likely determine the substantive findings of the case study.

Fifth, because cross-case research encompasses a large sample claims to external validity are fairly easy to evaluate, even if the sample is not drawn randomly from a well-defined population. By contrast, in case study research it is often difficult to say what a chosen case is a case of – referred to as a problem of “casing.”

Finally, taking its cue from experimental research, methodological discussion of cross-case research tends to focus on issues of internal validity, rendering the problem of case-selection less relevant. Researchers want to know whether a study is true for the studied sample. By contrast, methodological discussion of case study research tends to focus on issues of external validity. This could be a product of the difficulty of assessing case study evidence, which tends to demand a great deal of highly specialized subject expertise and usually does not draw on formal methods of analysis that would be easy for an outsider to assess. In any case, the effect is to further accentuate the role of case-selection. Rather than asking whether the case is correctly analyzed readers want to know whether the results are generalizable, and this leads back to the question of case-selection.”

Other recent papers on case selection methods:

Herron, M.C., Quinn, K.M., 2014. A Careful Look at Modern Case Selection Methods. Sociological Methods & Research
 Nielsen, R.A., 2014. Case Selection via Matching. http://www.mit.edu/~rnielsen/Case%20Selection%20via%20Matching.pdf

Multiple Pathways to Policy Impact: Testing an Uptake Theory with QCA

by Barbara Befani, IDS Centre for Development Impact, PRACTICE PAPER. Number 05 October 2013. Available as pdf

Abstract: Policy impact is a complex process influenced by multiple factors. An intermediate step in this process is policy uptake, or the adoption of measures by policymakers that reflect research findings and recommendations. The path to policy uptake often involves activism, lobbying and advocacy work by civil society organisations, so an earlier intermediate step could be termed ‘advocacy uptake’; which would be the use of research findings and recommendations by Civil Society Organisations (CSOs) in their efforts to influence government policy. This CDI Practice Paper by Barbara Befani proposes a ‘broad-brush’ theory of policy uptake (more precisely of ‘advocacy uptake’) and then tests it using two methods: (1) a type of statistical analysis and (2) a variant of Qualitative Comparative Analysis (QCA). The pros and cons of both families of methods are discussed in this paper, which shows that QCA offers the power of generalisation whilst also capturing some of the complexity of middle-range explanation. A limited number of pathways to uptake are identified, which are at the same time moderately sophisticated (considering combinations of causal factors rather than additions) and cover a medium number of cases (40), allowing a moderate degree of generalisation. See more at: http://www.ids.ac.uk/publication/multiple-pathways-to-policy-impact-testing-an-uptake-theory-with-qca

Rick Davies comment: What I like about this paper is the way it shows, quite simply, how measurements of the contribution of different possible causal conditions in terms of averages, and correlations between these, can be uninformative and even misleading. In contrast, a QCA analysis of the different configurations of causal conditions can be much more enlightening and easier to relate to what are often complex realities on the ground.

I have taken the liberty of re-analysing the fictional data set provided in the annex, using Decision Tree software (within RapidMiner). This is a means of triangulating the results of QCA analyses. It uses the same kind of data set and produces results which are comparable in structure, but the method of analysis is different. Shown below is a Decision Tree representing seven configurations of conditions that can be found in Befani’s data set of 40 cases. It makes use of four of the five conditions described in the paper. These are shown as nodes in the tree diagram.

[Figure: Decision Tree derived from the Befani (2013) data set of 40 cases]

The 0 and 1 values on the various branches indicate whether the condition immediately above is present or not. The first configuration on the left says that if there is no ACCESS then research UPTAKE does not take place (12 cases at the red leaf). This is a statement of a sufficient cause. The branch on the right represents a configuration of three conditions, which says that where ACCESS to research is present, and recommendations are consistent with measures previously (PREV) recommended by the organisation, and where the research findings are disseminated within the organisation by a local ‘champion’ (CHAMP), then research UPTAKE does take place (8 cases at the blue leaf).

Overall the findings shown in the Decision Tree model are consistent with the QCA analyses in terms of the number of configurations (seven) and the configurations that are associated with the largest number of cases (i.e. their coverage). However, there were small differences in the descriptions of two sets of cases where there was no uptake (red leaves). In the third branch (configuration) from the left above, the QCA analysis indicated that it was the presence of INTERNAL CONFLICT (different approaches to the same policy problem within the organisation) that played a role, rather than the presence of a (perhaps ineffectual) CHAMPION. In the third branch (configuration) from the right, the QCA analysis proposed a fourth necessary condition (QUALITY), in addition to the three shown in the Decision Tree. Here the Decision Tree seems the more parsimonious solution. However, in both sets of cases where differences in findings have occurred, it would make most sense to then proceed with within-case investigations of the causal processes at work.

PS: Here is the dataset, in case anyone wants to play with it
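
For anyone who does want to play with it, a rough Python equivalent of the RapidMiner analysis is sketched below using scikit-learn. The file name and the exact column names (ACCESS, PREV, CHAMP, CONFLICT, QUALITY, UPTAKE) are assumptions based on the description above, so they may need adjusting to match the dataset as distributed.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed layout: one row per case, binary (0/1) condition columns and a
# binary UPTAKE outcome; the file name and column names are illustrative.
data = pd.read_csv("befani_2013_cases.csv")

conditions = ["ACCESS", "PREV", "CHAMP", "CONFLICT", "QUALITY"]
X = data[conditions]
y = data["UPTAKE"]

# A small, fully grown tree on the 40 cases, analogous to the model above.
tree = DecisionTreeClassifier(criterion="gini", random_state=0)
tree.fit(X, y)

# Print the branches (configurations of conditions) the tree has found.
print(export_text(tree, feature_names=conditions))
```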

Real Time Monitoring for the Most Vulnerable

Greeley, M., Lucas, H. and Chai, J. (eds). IDS Bulletin 44.2. Publisher: IDS.


“Growth in the use of real time digital information for monitoring has been rapid in developing countries across all the social sectors, and in the health sector has been remarkable. Commonly these Real Time Monitoring (RTM) initiatives involve partnerships between the state, civil society, donors and the private sector. There are differences between partners in understanding of objectives, and divergence occurs due to adoption of specific technology-driven approaches and because profit-making is sometimes part of the equation.

With the swarming, especially of pilot mHealth initiatives, in many countries there is a risk of chaotic disconnects, of confrontation between rights and profits, and of overall failure to encourage appropriate alliances to build sustainable and effective national RTM systems. What is needed is a country-led process for strengthening the quality and equity sensitivity of real-time monitoring initiatives. We propose the development of an effective learning and action agenda centred on the adoption of common standards.

IDS, commissioned and guided by UNICEF Division of Policy and Strategy, has carried out a multi-country assessment of initiatives that collect high frequency and/or time-sensitive data on risk, vulnerability and access to services among vulnerable children and populations and on the stability and security of livelihoods affected by shocks. The study, entitled Real Time Monitoring for the Most Vulnerable (RTMMV), began with a desk review of existing RTM initiatives and was followed up with seven country studies (Bangladesh, Brazil, Romania, Senegal, Uganda, Vietnam and Yemen) that further explored and assessed promising initiatives through field-based review and interactive stakeholder workshops. This IDS Bulletin brings together key findings from this research.”

See the full list of papers on this topic at the IDS Bulletin: http://www.ids.ac.uk/publication/real-time-monitoring-for-the-most-vulnerable