Overview: An open source document clustering and search tool

Posted on 23 January, 2015 – 7:04 PM

Overview is an open-source tool originally designed to help journalists find stories in large numbers of documents, by automatically sorting them according to topic and providing a fast visualization and reading interface. It’s also used for qualitative research, social media conversation analysis, legal document review, digital humanities, and more. Overview does at least three things really well.

  • Find what you don’t even know to look for.
  • See broad trends or patterns across many documents.
  • Make exhaustive manual reading faster, when all else fails.

Search is a wonderful tool when you know what you’re trying to find — and Overview includes advanced search features. It’s less useful when you start with a hunch or an anonymous tip, when there are many different ways to phrase what you’re looking for, or when you’re struggling with poor-quality material and OCR errors. By automatically sorting documents by topic, Overview gives you a fast way to see what you have.

In other cases you’re interested in broad patterns. Overview’s topic tree shows the structure of your document set at a glance, and you can tag entire folders at once to label documents according to your own category names. Then you can export those tags to create visualizations.

Rick Davies comment: This service could be quite useful in various ways, including clustering sets of Most Significant Change (MSC) stories, micro-narratives from SenseMaker-type exercises, or collections of tweets found via a keyword search. For those interested in the details, and preferring transparency to apparent magic, Overview uses the k-means clustering algorithm, which is explained broadly here. One caveat: the processing of documents can take some time, so you may want to pop out for a cup of coffee while waiting. For those into algorithms, here is a healthy critique of careless use of k-means clustering, i.e. not paying attention to when its assumptions about the structure of the underlying data are inappropriate.
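For readers who want to see the mechanics rather than the magic, here is a minimal sketch of the k-means idea in Python. This is illustrative only: Overview’s actual pipeline also involves TF-IDF weighting of document text and hierarchical splitting, and the toy 2-D points below merely stand in for document vectors.

```python
# A minimal k-means sketch (illustrative only; Overview's real pipeline
# also does TF-IDF weighting and hierarchical splitting of documents).
import math

def kmeans(points, k, iters=20):
    # Deterministic init for the sketch: first k points become centroids.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:
                # Recompute each centroid as the mean of its members.
                centroids[i] = [sum(d) / len(members) for d in zip(*members)]
    return centroids, clusters

# Two obvious groups of 2-D points (stand-ins for document vectors).
docs = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(docs, k=2)
```

Each document lands in the cluster whose centroid it is nearest to; centroids are then recomputed as the mean of their members, and the two steps repeat until assignments settle. The critique linked above is about exactly this mechanism: it assumes roughly spherical, similar-sized clusters, which real document sets may not have.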

It is the combination of keyword search and automatic clustering that seems the most useful, to me…so far. Another good feature is the ability to label clusters of interest with one or more tags.

I have uploaded 69 blog postings from my Rick on the Road blog. If you want to see how Overview hierarchically clusters these documents, let me know and I will enter your email address, which will allow Overview to give you access. It seems, so far, that there is no simple way of sharing access (but I am inquiring).


Research on the use and influence of evaluations: The beginnings of a list

Posted on 16 January, 2015 – 12:15 PM

This is intended to be the start of an accumulating list of references on the subject of evaluation use, particularly papers that review specific sets or examples of evaluations rather than discussing the issues in a less grounded way.

2014

2012

2009

1997

1986

Related docs

  • Improving the use of monitoring & evaluation processes and findings. Conference Report, Centre for Development Innovation, Wageningen, June 2014
    • “An existing framework of four areas of factors influencing use …:
      1. Quality factors, relating to the quality of the evaluation. These factors include the evaluation design, planning, approach, timing, dissemination and the quality and credibility of the evidence.
      2. Relational factors: personal and interpersonal; role and influence of evaluation unit; networks, communities of practice.
      3. Organisational factors: culture, structure and knowledge management.
      4. External factors, that affect utilisation in ways beyond the influence of the primary stakeholders and the evaluation process.”

  • Bibliography provided by ODI, in response to this post, Jan 2015. Includes all ODI publications found using the keyword “evaluation” – a bit too broad, but still useful.

  • ITIG – Utilization of Evaluations – Bibliography. International Development Evaluation Association. Produced circa 2011/12.

The Checklist: If something so simple can transform intensive care, what else can it do?

Posted on 25 December, 2014 – 12:02 PM

Fascinating article by Atul Gawande in the New Yorker, Annals of Medicine, December 10, 2007 issue.

Selected quotes:

There are degrees of complexity, though, and intensive-care medicine has grown so far beyond ordinary complexity that avoiding daily mistakes is proving impossible even for our super-specialists. The I.C.U., with its spectacular successes and frequent failures, therefore poses a distinctive challenge: what do you do when expertise is not enough?

The checklists provided two main benefits, Pronovost observed. First, they helped with memory recall, especially with mundane matters that are easily overlooked in patients undergoing more drastic events. A second effect was to make explicit the minimum, expected steps in complex processes. Pronovost was surprised to discover how often even experienced personnel failed to grasp the importance of certain precautions.

In the Keystone Initiative’s first eighteen months, the hospitals saved an estimated hundred and seventy-five million dollars in costs and more than fifteen hundred lives. The successes have been sustained for almost four years—all because of a stupid little checklist.

But the prospect pushes against the traditional culture of medicine, with its central belief that in situations of high risk and complexity what you want is a kind of expert audacity—the right stuff, again. Checklists and standard operating procedures feel like exactly the opposite, and that’s what rankles many people.

“The fundamental problem with the quality of American medicine is that we’ve failed to view delivery of health care as a science. The tasks of medical science fall into three buckets. One is understanding disease biology. One is finding effective therapies. And one is insuring those therapies are delivered effectively. That third bucket has been almost totally ignored by research funders, government, and academia. It’s viewed as the art of medicine. That’s a mistake, a huge mistake. And from a taxpayer’s perspective it’s outrageous.”

The article was followed by this book: The Checklist Manifesto: How to Get Things Right – January 4, 2011.

If it’s good enough for surgeons and airline pilots, is it good enough for evaluators?

See also this favorite paper of mine by Scriven: “The Logic and Methodology of Checklists”, 2005.

Procedures for the use of the humble checklist, while no one would deny their utility, in evaluation and elsewhere, are usually thought to fall somewhat below the entry level of what we call a methodology, let alone a theory. But many checklists used in evaluation incorporate a quite complex theory, or at least a set of assumptions, which we are well advised to uncover— and the process of validating an evaluative checklist is a task calling for considerable sophistication. Interestingly, while the theory underlying a checklist is less ambitious than the kind that we normally call program theory, it is often all the theory we need for an evaluation.

Here is a list of evaluation checklists, courtesy of Michigan State University.

Serious question: How do you go about constructing good versus useless/ineffective checklists? Is there a meta-checklist covering this task? :-)

Here is one reader’s attempt at such a meta-checklist: http://www.marketade.com/old/checklist-manifesto-book-review.html


Predictive Analytics and Data Mining: Concepts and Practice with RapidMiner

Posted on 20 December, 2014 – 4:25 PM

Authors: Kotu & Deshpande. Release date: 5 Dec 2014. Published by Morgan Kaufmann. Print book ISBN: 9780128014608. eBook ISBN: 9780128016503. Pages: 446.

Look inside the book here

Key Features

  • Demystifies data mining concepts with easy-to-understand language
  • Shows how to get up and running fast with 20 commonly used, powerful techniques for predictive analytics
  • Explains the process of using the open source RapidMiner tools
  • Discusses a simple five-step process for implementing algorithms that can be used for performing predictive analytics
  • Includes practical use cases and examples

Chapter headings

  • Introduction
  • Data Mining Process
  • Data Exploration
  • Classification
  • Regression
  • Association
  • Clustering
  • Model Evaluation
  • Text Mining
  • Time Series
  • Anomaly Detection
  • Advanced Data Mining
  • Getting Started with RapidMiner

Rick Davies comment: This looks like a very useful book and I have already ordered a copy. RapidMiner is a free open source suite of data mining algorithms that can be assembled as modules, according to purpose. I have used RapidMiner a lot for one specific purpose: to construct Decision Tree models of the relationships between project context and intervention conditions and project outcomes. For more on data mining, and Decision Trees in particular, see my Data Mining posting on the Better Evaluation website.
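To give a flavour of what a Decision Tree algorithm does with this kind of data, here is a hypothetical sketch in Python (not RapidMiner output; the attribute names and project records are invented for illustration). It computes the first split a tree-growing algorithm would choose: the attribute that best separates project outcomes.

```python
# Toy sketch of the Decision Tree idea: which project attribute best
# separates "success" from "failure"? (Hypothetical data, not RapidMiner.)

def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions.
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels, attrs):
    # Pick the attribute whose split yields the lowest weighted impurity.
    def weighted_impurity(a):
        score = 0.0
        for v in set(r[a] for r in rows):
            subset = [l for r, l in zip(rows, labels) if r[a] == v]
            score += len(subset) / len(rows) * gini(subset)
        return score
    return min(attrs, key=weighted_impurity)

# Hypothetical project records: context and intervention -> outcome.
projects = [
    {"context": "urban", "training": "yes"},
    {"context": "urban", "training": "no"},
    {"context": "rural", "training": "yes"},
    {"context": "rural", "training": "no"},
]
outcomes = ["success", "failure", "success", "failure"]
root = best_split(projects, outcomes, ["context", "training"])
```

In this contrived data the split on "training" produces perfectly pure branches (all "yes" projects succeed, all "no" projects fail), so a tree learner would choose it over "context"; real Decision Tree algorithms simply apply this kind of impurity test recursively within each branch.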


Anecdote Circles: Monitoring Change in Market Systems Through Storytelling

Posted on 19 December, 2014 – 10:40 AM

by The SEEP Network, Dec 16, 2014. A video presentation; a pdf is also available.

“In this third webinar of the series, Daniel Ticehurst, of DAI, spoke about a tool/process now called Anecdote Circles. Such circles are similar to focus group interviews/discussions and beneficiary assessments of the 1980s: they create a space for market actors to share their experiences in a warm and friendly environment. They are mini social information networks where people can make sense of their reality through storytelling and agree on new or corrective actions. Setting them up and carrying them out tests the capacity of all involved to listen, make sense of and leverage the stories told to promote joint action. Daniel talked about why he thinks the Circles can be important for facilitators of market development and the benefits and the challenges he has faced in its application in Malawi and Tanzania.”

The Learning with the Toolmakers webinar series is supported by USAID’s LEO project and hosted by SEEP’s Market Facilitation Initiative (MaFI).

Rick Davies comment: It is interesting to see how the focus in these Anecdote Circles, as described in Malawi in the early 1990s, is on the service providers (e.g. extension workers, community development workers) in direct contact with communities, not on the community members themselves. The same was the case with my first use of MSC in Bangladesh, also in the 1990s. The assumption in my case, and possibly in Daniel’s case, was that these front-line workers accumulate lots of knowledge, often informal and tacit, and that this knowledge could usefully be tapped into and put directly to work through the use of sympathetic methods. Also of interest to me was the suggested list of prompt questions, designed to kick-start discussions around anecdotes, like “Where were you surprised?…disappointed?…pleased? when you were talking to people in the community”. This reminded me of Irene Guijt’s book “Seeking Surprise”.


DIGITAL HUMANITARIANS: How Big Data is Changing the Face of Humanitarian Response

Posted on 19 December, 2014 – 10:02 AM

By Patrick Meier, Taylor & Francis Press, January 15, 2015. See: http://digital-humanitarians.com/

“The overflow of information generated during disasters can be as paralyzing to humanitarian response as the lack of information. This flash flood of information is often referred to as Big Data, or Big Crisis Data. Making sense of Big Crisis Data is proving to be an impossible challenge for traditional humanitarian organizations, which is precisely why they’re turning to Digital Humanitarians.”

The Rise of the Digital Humanitarians

Charts the sudden rise of Digital Humanitarians during the 2010 Haiti Earthquake. This was the first time that thousands of digital volunteers mobilized online to support search and rescue efforts and human relief operations on the ground. These digital humanitarians used crowdsourcing to make sense of social media, text messages and satellite imagery, creating unique digital crisis maps that reflected the situation on the ground in near real-time.

The Rise of Big (Crisis) Data

Introduces the notion of Big Data and addresses concerns around the use of Big (Crisis) Data for humanitarian response. These include data bias, discrimination, false data and threats to privacy. The chapter draws on several stories to explain why the two main concerns for the future of digital humanitarian response are: Big (Size) Data and Big (False) Data. As such, the first two chapters of the book set the stage for the main stories that follow.

Crowd Computing Social Media

Begins with the digital humanitarian response to massive forest fires in Russia and traces the evolution of digital humanitarians through subsequent digital deployments in Libya, the Philippines and beyond. This evolution sees a shift towards the use of a smarter crowdsourcing approach—called crowd computing—to make sense of Big Crisis Data. The chapter describes the launch of the Digital Humanitarian Network (DHN), co-founded by the United Nations.

Crowd Computing Satellite & Aerial Imagery

Considers the application of crowd computing to imagery captured by orbiting satellites and flying drones (or UAVs). The chapter begins with the most massive digital crowdsearching effort ever carried out and contrasts this to a related UN project in Somalia. The chapter then describes an exciting project driven by a new generation of satellites and digital humanitarians. The chapter also highlights the rise of humanitarian UAVs and explains the implications for the future of disaster response.

Artificial Intelligence for Disaster Response

Returns to social media as a source of Big Data and explains why crowd computing alone may only be part of the solution. The chapter introduces concepts from advanced computing and artificial intelligence—such as data mining and machine learning—to explain how these are already being used to make sense of Big Data during disasters. The chapter highlights how digital humanitarians have been using these new techniques in response to the crisis in Syria. The chapter also describes how artificial intelligence is being used to make sense of vast volumes of text messages (SMS).

Artificial Intelligence in the Sky

Extends the use of artificial intelligence and machine learning to the world of satellite and aerial imagery. The chapter draws on examples from Haiti and the Philippines to describe the very latest breakthroughs in automated imagery analysis. The chapter then highlights how these automated techniques are also being applied to rapidly analyze aerial imagery of disaster zones captured by UAVs.

Verifying Big Crisis Data

Begins to tackle the challenge of Big (False) Data—that is, misinformation and disinformation generated on social media during disasters. The chapter opens with the verification challenges that digital humanitarians faced in Libya and Russia. Concrete strategies for the verification of social media are presented by drawing on the expertise of multiple digital detectives across the world. The chapter then considers the use of crowdsourcing to verify social media during disasters, highlighting a novel and promising new project inspired by the search for red balloons.

Verifying Big Data with Artificial Intelligence

Highlights how artificial intelligence and machine learning can be used to verify user-generated content posted on social media during disasters. Drawing on the latest scientific research, the chapter makes a case for combining traditional investigative journalism strategies with new technologies powered by artificial intelligence. The chapter introduces a new project that enables anyone to automatically compute the credibility of tweets.

Dictators versus Digital Humanitarians

Considers a different take on digital humanitarians by highlighting how their efforts turn to digital activism in countries under repressive rule. The chapter provides an intimate view into the activities of digital humanitarians in the run-up to the Egyptian Revolution. The chapter then highlights how digital activists from China and Iran are drawing on their experience in civil resistance when responding to disasters. These experiences suggest that crowdsourced humanitarian response improves civil resistance and vice versa.

Next-Generation Digital Humanitarians

Distills some of the lessons that digital humanitarians can learn from digital activists in repressive countries. These lessons and best practices highlight the importance of developing innovative policies and not just innovative technologies. The importance of forward-thinking policy solutions pervades the chapter, from the use of cell phone data to spam filters and massive multiplayer online games. Technology alone won’t solve the myriad challenges that digital humanitarians face. Enlightened leadership and forward-thinking policy-making are equally, if not more, important than breakthroughs in humanitarian technology. The chapter concludes by highlighting key trends that are likely to define the next generation of digital humanitarians.

Rick Davies comment: Re the chapter on Artificial Intelligence for Disaster Response and the references therein to data mining and machine learning, readers will find plenty of references to the usefulness of Decision Tree algorithms on my Rick on the Road blog.

And as a keen walker and cyclist I can recommend readers check out the crowdsourced OpenStreetMap project, which makes available good quality detailed and frequently updated maps of many parts of the world. I have contributed in a small way by correcting and adding to street names in central Mogadishu, based on my own archival sources. I was also impressed to see that “road” routes in northern Somalia, where I once lived, are much more detailed than any other source that I have come across.


Better Value for Money. An organising framework for management and measurement of VFM indicators

Posted on 26 November, 2014 – 12:41 PM

by Julian Barr and Angela Christie, 2014. ITAD. 6 pages. Available as pdf.

“Value for money suffers from being a phrase that is more used than understood. We all instinctively believe we understand the terms since we all regularly seek value for money in the things we buy. Yet, once Value for Money attains capital letters and an acronym – VFM – putting the concept into practice becomes more elusive.

The drivers for VFM stem from the prevailing austerity in the economies of major aid donor countries. VFM has become a watchword in the management of UK public expenditure, and particularly so in DFID, where a strong political commitment to a rising aid budget has been matched by an equal determination to secure greatest value from the investment.

The ‘3Es definition’ of Value for Money is now in common currency, providing a framework for analysis shaped by Economy, Efficiency and Effectiveness. More recently a fourth E has been added to the VFM mix in the shape of equity, conveying the message that development is only of value if it is also fair. Overall guidance on the application of the principles of the 4Es has been fairly general. VFM itself has been a principle enforced rigorously, but lacking practical methodological guidance. There continues to be patchy success in translating the 3 and 4 Es into operations.

This paper provides an organising framework that attempts to provide a means to better understand, express and enable judgements to be reached on Value for Money in development programmes.

Our framework is based on, but evolves, the 4Es approach. It aims to do two things:
i) Bring the dimensions of value and money together consistently in the way VFM is considered
ii) Introduce two ways to categorise VFM indicators to help assess their utility in managing and measuring Value for Money”

Rick Davies comment: See this accumulating bibliography of papers on Value for Money, also available on this MandE NEWS website.


MAKING EVALUATION SENSITIVE TO GENDER AND HUMAN RIGHTS: Different approaches

Posted on 26 November, 2014 – 12:34 PM

(via pelican email list)

By Juan Andres Ligero Lasa, Julia Espinosa Fajardo, Carmen Mormeneo Cortes, María Bustelo Ruesta
Published June 2014 © Spanish Ministry of Foreign Affairs and Cooperation
Secretary of State for International Cooperation and for Ibero-America
General Secretary of International Cooperation for Development
Available as pdf

Contents:

1. Introduction
2. Preliminary concepts
2.1. Sensitive evaluation
2.2. The gender perspective or GID approach
2.3. The human rights-based approach to development (HRBA)
3. Document preparation
3.1. Systematic classification of the literature and expert opinions
3.2. Synthesis and classification
3.3. Guidance and criteria for selection of a proposal
4. Proposals for sensitive evaluations
4.1. The Commission
a) Institutional sensitivity
b) Evaluator outlook
4.2. Unit definition and design evaluation
a) Point of departure: Programming
b) Identifying the programme theory or logic model
c) Analysis and comparison
4.3. Evaluation approach
a) Evaluation driven by theory of change
b) Stakeholder-driven evaluation approach
c) Evaluation approach driven by critical change or a transformative paradigm
d) Judgement-driven summative evaluation approach
4.4. Operationalisation
a) Vertical work
b) Horizontal work: Definition of systems of measurement, indicators and sources
4.5. Methodology and Techniques
4.6. Fieldwork
4.7. Data analysis and interpretation
4.8. Judgement
a) Transformative interventions for gender and rights situations
b) Interventions that preserve the status quo
c) Interventions that damage or worsen the situation
4.9. Reporting of Outcomes
5. Guidelines for Sensitive Evaluation
5.1. Considerations on the Evaluation of Programme Design
5.2. Considerations on Evaluator Outlook
5.3. Incorporating Approaches into Evaluation Design
a) Evaluation driven by theory of change
b) Stakeholder-driven
c) Critical change-driven or transformative paradigm
d) Judgement-driven summative evaluation
5.4. Considerations on Operationalisation
5.5. Considerations on Techniques, Methods and Fieldwork
5.6. Considerations on the Interpretation Phase
5.7. Considerations on Judgement
6. How to coordinate the Gender- and HRBA-based approaches
7. Some Considerations on the Process 


Livelihoods Monitoring and Evaluation: A Rapid Desk Based Study

Posted on 19 November, 2014 – 8:22 PM

by Kath Pasteur, 2014, 24 pages. Found here: http://www.evidenceondemand.info/livelihoods-monitoring-and-evaluation-a-rapid-desk-based-study

Abstract: “This report is the outcome of a rapid desk study to identify and collate the current state of evidence and best practice for monitoring and evaluating programmes that aim to have a livelihoods impact. The study identifies tried and tested approaches and indicators that can be applied across a range of livelihoods programming. The main focus of the report is an annotated bibliography of literature sources relevant to the theme. The narrative report highlights key themes and examples from the literature relating to methods and indicators. This collection of resources is intended to form the starting point for a more thorough organisation and analysis of material for the final formation of a Topic Guide on Livelihoods Indicators. This report has been produced by Practical Action Consulting for Evidence on Demand with the assistance of the UK Department for International Development (DFID) contracted through the Climate, Environment, Infrastructure and Livelihoods Professional Evidence and Applied Knowledge Services (CEIL PEAKS) programme, jointly managed by HTSPE Limited and IMC Worldwide Limited”

Full reference: Pasteur, K. Livelihoods monitoring and evaluation: A rapid desk based study. Evidence on Demand, UK (2014) 24 pp. [DOI: http://dx.doi.org/10.12774/eod_hd.feb2014.pasteur]


Process evaluation of complex interventions. UK Medical Research Council (MRC) guidance

Posted on 10 November, 2014 – 7:22 PM

(copied from here: http://decipher.uk.net/process-evaluation-guidance/)

“Updated MRC guidance for evaluation of complex interventions published in 2008 (Craig et al. 2008) highlighted the value of process evaluation within trials of complex interventions in order to understand implementation, the mechanisms through which interventions produce change, and the role of context in shaping implementation and effectiveness. However, it provided limited insight into how to conduct a good quality process evaluation.

New MRC guidance for process evaluation of complex interventions has been produced on behalf of the MRC Population Health Sciences Research Network by a group of 11 health researchers from 8 universities, in consultation with a wider stakeholder group. The author group was chaired by Dr Janis Baird, MRC Lifecourse Epidemiology Unit, University of Southampton. The development of the guidance was led by Dr Graham Moore, DECIPHer, Cardiff University.

The document begins with an introductory chapter which sets out the reasons why we need process evaluation, before presenting a new framework which expands on the aims for process evaluation identified within the 2008 complex interventions guidance (implementation, mechanisms of impact and context). It then presents discrete sections on process evaluation theory (Section A) and process evaluation practice (Section B), before offering a number of detailed case studies from process evaluations conducted by the authors (Section C).

The guidance has received endorsement and support from the MRC’s Population Health Science Group and Methodology Research Panel, as well as NIHR NETSCC. An abridged version will also follow shortly.

You can download the 2014 guidance (pdf) by clicking here.

An editorial in the BMJ explains why process evaluation is key to public health research, and why new guidance is needed. The editorial is available, open access, here.

If you have any queries, please contact Dr. Graham Moore: MooreG@cardiff.ac.uk.”
