Anecdote Circles: Monitoring Change in Market Systems Through Storytelling

by The SEEP Network on Dec 16, 2014. A video presentation; a pdf is also available.

“In this third webinar of the series, Daniel Ticehurst, of DAI, spoke about a tool/process now called Anecdote Circles. Such circles are similar to focus group interviews/discussions and beneficiary assessments of the 1980’s: they create a space for market actors to share their experiences in a warm and friendly environment. They are mini social information networks where people can make sense of their reality through storytelling and agree on new or corrective actions. Setting them up and carrying them out tests the capacity of all involved to listen, make sense of and leverage the stories told to promote joint action. Daniel talked about why he thinks the Circles can be important for facilitators of market development and the benefits and the challenges he has faced in its application in Malawi and Tanzania”

This webinar was part of the Learning with the Toolmakers webinar series, supported by USAID’s LEO project and hosted by SEEP’s Market Facilitation Initiative (MaFI).

Rick Davies comment: It is interesting to see how the focus in these Anecdote Circles, as described in Malawi in the early 1990s, is on the service providers (e.g. extension workers, community development workers) in direct contact with communities, not on the community members themselves. The same was the case with my first use of MSC in Bangladesh, also in the 1990s. The assumption in my case, and possibly in Daniel’s case, was that these front line workers accumulate lots of knowledge, often informal and tacit, and that this knowledge could usefully be tapped into and put directly to work through the use of sympathetic methods. Also of interest to me was the suggested list of prompt questions designed to kick start discussions around anecdotes, such as “Where were you surprised?…disappointed?…pleased? when you were talking to people in the community”. This reminded me of Irene Guijt’s book “Seeking Surprise”.

DIGITAL HUMANITARIANS: How Big Data is Changing the Face of Humanitarian Response

By Patrick Meier. Taylor & Francis Press, January 15, 2015. See: http://digital-humanitarians.com/

“The overflow of information generated during disasters can be as paralyzing to humanitarian response as the lack of information. This flash flood of information is often referred to as Big Data, or Big Crisis Data. Making sense of Big Crisis Data is proving to be an impossible challenge for traditional humanitarian organizations, which is precisely why they’re turning to Digital Humanitarians.”

The Rise of the Digital Humanitarians

Charts the sudden rise of Digital Humanitarians during the 2010 Haiti Earthquake. This was the first time that thousands of digital volunteers mobilized online to support search and rescue efforts and humanitarian relief operations on the ground. These digital humanitarians used crowdsourcing to make sense of social media, text messages and satellite imagery, creating unique digital crisis maps that reflected the situation on the ground in near real-time.

The Rise of Big (Crisis) Data

Introduces the notion of Big Data and addresses concerns around the use of Big (Crisis) Data for humanitarian response. These include data bias, discrimination, false data and threats to privacy. The chapter draws on several stories to explain why the two main concerns for the future of digital humanitarian response are: Big (Size) Data and Big (False) Data. As such, the first two chapters of the book set the stage for the main stories that follow.

Crowd Computing Social Media

Begins with the digital humanitarian response to massive forest fires in Russia and traces the evolution of digital humanitarians through subsequent digital deployments in Libya, the Philippines and beyond. This evolution sees a shift towards the use of a smarter crowdsourcing approach—called crowd computing—to make sense of Big Crisis Data. The chapter describes the launch of the Digital Humanitarian Network (DHN), co-founded by the United Nations.

Crowd Computing Satellite & Aerial Imagery

Considers the application of crowd computing to imagery captured by orbiting satellites and flying drones (or UAVs). The chapter begins with the most massive digital crowdsearching effort ever carried out and contrasts this to a related UN project in Somalia. The chapter then describes an exciting project driven by a new generation of satellites and digital humanitarians. The chapter also highlights the rise of humanitarian UAVs and explains the implications for the future of disaster response.

Artificial Intelligence for Disaster Response

Returns to social media as a source of Big Data and explains why crowd computing alone may only be part of the solution. The chapter introduces concepts from advanced computing and artificial intelligence—such as data mining and machine learning—to explain how these are already being used to make sense of Big Data during disasters. The chapter highlights how digital humanitarians have been using these new techniques in response to the crisis in Syria. The chapter also describes how artificial intelligence is being used to make sense of vast volumes of text messages (SMS).

Artificial Intelligence in the Sky

Extends the use of artificial intelligence and machine learning to the world of satellite and aerial imagery. The chapter draws on examples from Haiti and the Philippines to describe the very latest breakthroughs in automated imagery analysis. The chapter then highlights how these automated techniques are also being applied to rapidly analyze aerial imagery of disaster zones captured by UAVs.

Verifying Big Crisis Data

Begins to tackle the challenge of Big (False) Data—that is, misinformation and disinformation generated on social media during disasters. The chapter opens with the verification challenges that digital humanitarians faced in Libya and Russia. Concrete strategies for the verification of social media are presented by drawing on the expertise of multiple digital detectives across the world. The chapter then considers the use of crowdsourcing to verify social media during disasters, highlighting a novel and promising new project inspired by the search for red balloons.

Verifying Big Data with Artificial Intelligence

Highlights how artificial intelligence and machine learning can be used to verify user-generated content posted on social media during disasters. Drawing on the latest scientific research, the chapter makes a case for combining traditional investigative journalism strategies with new technologies powered by artificial intelligence. The chapter introduces a new project that enables anyone to automatically compute the credibility of tweets.

Dictators versus Digital Humanitarians

Considers a different take on digital humanitarians by highlighting how their efforts turn to digital activism in countries under repressive rule. The chapter provides an intimate view into the activities of digital humanitarians in the run-up to the Egyptian Revolution. The chapter then highlights how digital activists from China and Iran are drawing on their experience in civil resistance when responding to disasters. These experiences suggest that crowdsourced humanitarian response improves civil resistance and vice versa.

Next-Generation Digital Humanitarians

Distills some of the lessons that digital humanitarians can learn from digital activists in repressive countries. These lessons and best practices highlight the importance of developing innovative policies and not just innovative technologies. The importance of forward-thinking policy solutions pervades the chapter, from the use of cell phone data to spam filters and massive multiplayer online games. Technology alone won’t solve the myriad challenges that digital humanitarians face. Enlightened leadership and forward-thinking policy-making are just as important as, if not more important than, breakthroughs in humanitarian technology. The chapter concludes by highlighting key trends that are likely to define the next generation of digital humanitarians.

Rick Davies comment: Regarding the chapter on Artificial Intelligence for Disaster Response and the references therein to data mining and machine learning, readers will find plenty of references to the usefulness of Decision Tree algorithms on my Rick on the Road blog.
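To make the reference concrete, here is a minimal sketch of the kind of decision tree analysis alluded to; the case attributes, outcome labels and data are invented for illustration, not taken from the blog.

```python
# A minimal sketch with invented data (not taken from the blog): a decision tree
# learning simple if/then rules that predict a project outcome from case attributes.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each invented case: [local partner present?, prior experience?, above-median funding?]
X = [[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0],
     [1, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 1]]
y = [1, 0, 0, 0, 1, 0, 0, 0]   # 1 = outcome achieved, 0 = not achieved (invented labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["local_partner", "prior_experience", "funding_high"]))
```

The printed rules show which combinations of attributes the tree associates with the outcome; with real monitoring data the interest is in how well such rules hold up on cases the tree has not seen.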

And as a keen walker and cyclist I can recommend readers check out the crowdsourced OpenStreetMap project, which makes available good quality detailed and frequently updated maps of many parts of the world. I have contributed in a small way by correcting and adding to street names in central Mogadishu, based on my own archival sources. I was also impressed to see that “road” routes in northern Somalia, where I once lived, are much more detailed than any other source that I have come across.

Better Value for Money. An organising framework for management and measurement of VFM indicators

by Julian Barr and Angela Christie, 2014. ITAD. 6 pages. Available as pdf.

“Value for money suffers from being a phrase that is more used than understood. We all instinctively believe we understand the terms since we all regularly seek value for money in the things we buy. Yet, once Value for Money attains capital letters and an acronym – VFM – putting the concept into practice becomes more elusive.

The drivers for VFM stem from the prevailing austerity in the economies of major aid donor countries. VFM has become a watchword in the management of UK public expenditure, and particularly so in DFID, where a strong political commitment to a rising aid budget has been matched by an equal determination to secure the greatest value from the investment.

The ‘3Es definition’ of Value for Money is now in common currency, providing a framework for analysis shaped by Economy, Efficiency and Effectiveness. More recently a fourth E has been added to the VFM mix in the shape of equity, conveying the message that development is only of value if it is also fair. Overall guidance on the application of the principles of the 4Es has been fairly general. VFM itself has been a principle enforced rigorously, but lacking practical methodological guidance. There continues to be patchy success in translating the 3 and 4 Es into operations.

This paper provides an organising framework that attempts to provide a means to better understand, express and enable judgements to be reached on Value for Money in development programmes.

Our framework is based on, but evolves, the 4Es approach. It aims to do two things:
i) Bring the dimensions of value and money together consistently in the way VFM is considered
ii) Introduce two ways to categorise VFM indicators to help assess their utility in managing and measuring Value for Money”

Rick Davies Comment: See this accumulating bibliography of papers on Value for Money, also available on this MandE NEWS website.

MAKING EVALUATION SENSITIVE TO GENDER AND HUMAN RIGHTS: Different approaches

(via the Pelican email list)

By Juan Andres Ligero Lasa, Julia Espinosa Fajardo, Carmen Mormeneo Cortes, María Bustelo Ruesta
Published June 2014 © Spanish Ministry of Foreign Affairs and Cooperation
Secretary of State for International Cooperation and for Ibero-America
General Secretary of International Cooperation for Development
Available as pdf

Contents:

1. Introduction
2. Preliminary concepts
2.1. Sensitive evaluation
2.2. The gender perspective or GID approach
2.3. The human rights-based approach to development (HRBA)
3. Document preparation
3.1. Systematic classification of the literature and expert opinions
3.2. Synthesis and classification
3.3. Guidance and criteria for selection of a proposal
4. Proposals for sensitive evaluations
4.1. The Commission
a) Institutional sensitivity
b) Evaluator outlook
4.2. Unit definition and design evaluation
a) Point of departure: Programming
b) Identifying the programme theory or logic model
c) Analysis and comparison
4.3. Evaluation approach
a) Evaluation driven by theory of change
b) Stakeholder-driven evaluation approach
c) Evaluation approach driven by critical change or a transformative paradigm
d) Judgement-driven summative evaluation approach
4.4. Operationalisation
a) Vertical work
b) Horizontal work: Definition of systems of measurement, indicators and sources
4.5. Methodology and Techniques
4.6. Fieldwork
4.7. Data analysis and interpretation
4.8. Judgement
a) Transformative interventions for gender and rights situations
b) Interventions that preserve the status quo
c) Interventions that damage or worsen the situation
4.9. Reporting of Outcomes
5. Guidelines for Sensitive Evaluation
5.1. Considerations on the Evaluation of Programme Design
5.2. Considerations on Evaluator Outlook
5.3. Incorporating Approaches into Evaluation Design
a) Evaluation driven by theory of change
b) Stakeholder-driven
c) Critical change-driven or transformative paradigm
d) Judgement-driven summative evaluation
5.4. Considerations on Operationalisation
5.5. Considerations on Techniques, Methods and Fieldwork
5.6. Considerations on the Interpretation Phase
5.7. Considerations on Judgement
6. How to coordinate the Gender- and HRBA-based approaches
7. Some Considerations on the Process

Livelihoods Monitoring and Evaluation: A Rapid Desk Based Study

by Kath Pasteur, 2014, 24 pages. Found here: http://www.evidenceondemand.info/livelihoods-monitoring-and-evaluation-a-rapid-desk-based-study

Abstract: “This report is the outcome of a rapid desk study to identify and collate the current state of evidence and best practice for monitoring and evaluating programmes that aim to have a livelihoods impact. The study identifies tried and tested approaches and indicators that can be applied across a range of livelihoods programming. The main focus of the report is an annotated bibliography of literature sources relevant to the theme. The narrative report highlights key themes and examples from the literature relating to methods and indicators. This collection of resources is intended to form the starting point for a more thorough organisation and analysis of material for the final formation of a Topic Guide on Livelihoods Indicators. This report has been produced by Practical Action Consulting for Evidence on Demand with the assistance of the UK Department for International Development (DFID) contracted through the Climate, Environment, Infrastructure and Livelihoods Professional Evidence and Applied Knowledge Services (CEIL PEAKS) programme, jointly managed by HTSPE Limited and IMC Worldwide Limited”

Full reference: Pasteur, K. Livelihoods monitoring and evaluation: A rapid desk based study. Evidence on Demand, UK (2014) 24 pp. [DOI: http://dx.doi.org/10.12774/eod_hd.feb2014.pasteur]

Process evaluation of complex interventions. UK Medical Research Council (MRC) guidance

(copied from here: http://decipher.uk.net/process-evaluation-guidance/)

“Updated MRC guidance for evaluation of complex interventions published in 2008 (Craig et al. 2008) highlighted the value of process evaluation within trials of complex interventions in order to understand implementation, the mechanisms through which interventions produce change, and the role of context in shaping implementation and effectiveness. However, it provided limited insight into how to conduct a good quality process evaluation.

New MRC guidance for process evaluation of complex interventions has been produced on behalf of the MRC Population Health Sciences Research Network by a group of 11 health researchers from 8 universities, in consultation with a wider stakeholder group. The author group was chaired by Dr Janis Baird, MRC Lifecourse Epidemiology Unit, University of Southampton. The development of the guidance was led by Dr Graham Moore, DECIPHer, Cardiff University.

The document begins with an introductory chapter which sets out the reasons why we need process evaluation, before presenting a new framework which expands on the aims for process evaluation identified within the 2008 complex interventions guidance (implementation, mechanisms of impact and context). It then presents discrete sections on process evaluation theory (Section A) and process evaluation practice (Section B), before offering a number of detailed case studies from process evaluations conducted by the authors (Section C).

The guidance has received endorsement and support from the MRC’s Population Health Science Group and Methodology Research Panel, as well as NIHR NETSCC. An abridged version will also follow shortly.

You can download the 2014 guidance (pdf) by clicking here.

An editorial in the BMJ explains why process evaluation is key to public health research, and why new guidance is needed. The editorial is available, open access, here.

If you have any queries, please contact Dr. Graham Moore: MooreG@cardiff.ac.uk.”

Looking for case studies of beneficiary feedback in evaluation

[From Leslie Groves]
Dear MandE
I have been commissioned by the UK Department for International Development to produce a short practical note on incorporating beneficiary feedback within evaluation. I am exploring questions such as: How do we define beneficiary feedback in evaluation? How is it different from participatory evaluation/ participatory methods in evaluation? What is the added value? What are the practical implications (ethical/ logistical/ practical)? What is a reasonable requirement for beneficiary feedback in evaluation?
I would really welcome thoughts from this community on these questions. There are three ways in which I hope to engage with some of you
3) Through email for those of you who may wish to contact me directly (lesliecgroves@gmail.com)
I am also looking for case study examples from anyone who has engaged or is engaging with beneficiary feedback mechanisms in evaluation. It would be great to hear from you.
With many thanks and best regards to all
Leslie
Leslie Groves Williams (PhD)

Senior Social Development Consultant
Skype lesliegroves
http://www.linkedin.com/in/lesliegroves

Rick Davies comment: I understand Leslie is developing a bibliography which will be made available online, via her blog site. And I expect that her report, once accepted by DFID, will be made publicly available. If so, details will be posted here.
Some material that is already emerging via email list inquiries:

The use of Rasch scales in monitoring, gender analysis and attitude measurement

 

Monitoring systems occasionally incorporate elements of action research. A water and sanitation project in northern Bangladesh elaborated a Gender Analytic Framework to help organize public conversations on gender roles in the household, the village and local government. Trained volunteers facilitated sessions and recorded responses to 29 gender-related items in 988 villages over four years. “I myself went to see the Chairman!” analyzes the change in gender role attitudes with the help of Rasch scales. The tool achieves two things: it provides a summary attitude measure out of a large and heterogeneous body of material, and it thereby makes gender attitudes amenable to analysis in terms of community baseline attributes, WatSan project inputs and pre-existing local attitudes.

Benini, Aldo, Reazul Karim et al.: “I myself went to see the Chairman!” – Change in gender role attitudes in a water and sanitation project in northern Bangladesh. An analysis of DASCOH’s Gender Analytical Framework data, 2011 – 2014. Rajshahi and Sunamganj, Bangladesh: DASCOH – Development Association for Self-reliance, Communication and Health, 2014.
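For readers curious about the mechanics, the sketch below shows one common way of estimating a dichotomous Rasch model (joint maximum likelihood), so that many yes/no attitude items are summarised as a single measure per respondent and a difficulty per item, all on the same logit scale. This is only an illustration of the general technique: the response matrix is invented and the routine is not the one used in the DASCOH analysis.

```python
# A minimal sketch (not the DASCOH analysis itself) of a dichotomous Rasch model
# estimated by joint maximum likelihood. The response data are invented.
import numpy as np

def rasch_jmle(X, n_iter=100):
    """X: persons x items matrix of 0/1 responses (rows and columns need some variation)."""
    n_persons, n_items = X.shape
    theta = np.zeros(n_persons)   # person "attitude" measures, in logits
    beta = np.zeros(n_items)      # item difficulties, in logits
    for _ in range(n_iter):
        # alternate Newton-Raphson steps: match expected scores to observed raw scores
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        theta += (X.sum(axis=1) - p.sum(axis=1)) / (p * (1 - p)).sum(axis=1)
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        beta -= (X.sum(axis=0) - p.sum(axis=0)) / (p * (1 - p)).sum(axis=0)
        beta -= beta.mean()       # centre item difficulties to identify the scale
    return theta, beta

# Invented data: 6 respondents x 5 items, 1 = the more egalitarian answer
X = np.array([[1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 1, 1, 1, 0],
              [1, 0, 0, 0, 1],
              [1, 1, 1, 1, 1],
              [0, 1, 0, 1, 0]])
# Respondents who endorse every item (or none) have no finite estimate; drop them here
keep = (X.sum(axis=1) > 0) & (X.sum(axis=1) < X.shape[1])
theta, beta = rasch_jmle(X[keep])
print("person measures (logits):", np.round(theta, 2))
print("item difficulties (logits):", np.round(beta, 2))
```

Because persons and items end up on a common scale, the person measures can then be related to community characteristics or project inputs, which is the kind of analysis the report describes.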


Evidence of the Hawthorne effect – worth knowing about and watching out for

(copied from the World Bank “Development Impact” blog)

Quantifying the Hawthorne Effect

Submitted by Jed Friedman on 2014/10/16. This post is co-authored with Brinda Gokul.

Many who work on impact evaluation are familiar with the concept of the Hawthorne effect and its potential risk to the accurate inference of causal impact. But if this is a new concept, let’s quickly review the definition and history of the Hawthorne effect:

  • The Hawthorne effect refers to study participants’ alteration of behavior solely as a result of being observed (rather than as a result of the intervention). Hence for the effect to exist it is necessary for the subjects to realize they are under observation. The term originates from the Western Electric Company’s Hawthorne Works Plant in Chicago where, in the late 1920s and early 1930s, researchers tried to study the effects of altered workplace lighting on worker productivity. It turned out that worker productivity improved when the lighting was increased, but also improved when the lighting was dimmed. Indeed it became apparent that whenever a change was implemented, such as a change in work hours, productivity improved for a period of time. The conclusion: productivity was not being affected by the changes in workplace conditions but instead by the self-knowledge of workers that they were under observation.

So the Hawthorne effect may present a challenge to the validity of causal inference (when agents respond to the knowledge they are being studied rather than respond to the changed environment as a result of the intervention) or may present a challenge to the accuracy of measurement (when the fact of observation alters the behavior measured). Clearly any effect magnitude, and indeed whether the effect arises at all, depends on the study context including the type of behavior observed. Yet only a handful of studies have attempted to identify and measure the Hawthorne effect.

My colleague Brinda Gokul and I recently reviewed the health economics and public health literature that explicitly studies the effect in the general field of health. This is a hard question to get at, but some inventive studies, at times utilizing new technology, have given us some fascinating results. (It’s a fairly nascent literature, and at the bottom of this post we list the papers that we have found.)

With respect to the behavior of health providers in developing countries, one of the more extensive studies of the Hawthorne effect was conducted in the Arusha region of Tanzania and resulted in a series of papers by Kenneth Leonard and Melkiory Masatu. The challenge in measuring the Hawthorne effect is that we also need “stealth” data on subject behaviors when they are not aware of observation. The trick in the Tanzania study was to use patient recall interviews, conducted soon after the clinic visit, to reconstruct the actions of the clinicians and specifically their adherence to proper medical protocols. This stealth data is then compared with what is recorded by trained enumerators when they observe the clinician treating patients.

Of course the first step is to validate the accuracy of the “stealth” patient recall data, which the investigators do by comparing the enumerator observation record with patient recall data for the patient visits that were explicitly observed. It turns out there is a high degree of concurrence, with agreement between observer and patient on approximately 70% of the items measured.

Prior to the arrival of the research team, patient recall measured an average 53% baseline adherence to medical protocol by health providers; after the arrival of the research team, this adherence increased by almost 10 percentage points (or 20%). And it appears that adherence to protocol actually matters, as cases with higher adherence also have higher rates of accurate diagnosis and higher patient satisfaction. Of note is that the Hawthorne effect was rather short lived – providers returned to baseline levels of adherence after 10-15 observed patient visits. I’ve been told of this same effect by education researchers, who say that teachers habituate quite rapidly to video observation and return to “normal” levels of practice within a day or so after the introduction of the camera.

Another example is from the Indian state of Orissa where investigators evaluating sanitation efforts to increase latrine use and reduce open defecation took a technological approach to the collection of stealth data: they developed a small hidden device to monitor latrine activity that recorded the times when the latrine door was opened. They called this device a PLUM – passive latrine use monitor. Amongst households with a latrine, the PLUM recorded members of the household using the latrine on average 2.11 times in the morning and 0.53 times in the afternoon.

Once this baseline data was collected, the study induced the Hawthorne effect by sending an observer to stand outside selected latrines for a five hour period in order to validate the passive monitor. It turned out that when a known observer was in place, household latrine compliance increased in the morning from 2.11 times to 2.71 and in the afternoon from 0.53 to 1.03 times. (Clasen, 2012).
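In both the Tanzania and Orissa examples, the size of the Hawthorne effect is essentially the gap between behaviour measured under known observation and behaviour measured “in stealth”. A minimal sketch of that arithmetic, using approximate figures from the post (the helper function and its name are purely illustrative):

```python
# Minimal sketch: the Hawthorne effect as the gap between behaviour under known
# observation and behaviour measured covertly. Rates are approximate figures from the post.

def hawthorne_effect(stealth_rate, observed_rate):
    """Return the absolute and relative change attributable to being observed."""
    absolute = observed_rate - stealth_rate
    return absolute, absolute / stealth_rate

examples = {
    "Tanzania protocol adherence (share of items)": (0.53, 0.63),
    "Orissa latrine use, morning (uses/household)": (2.11, 2.71),
    "Orissa latrine use, afternoon (uses/household)": (0.53, 1.03),
}

for label, (stealth, observed) in examples.items():
    abs_change, rel_change = hawthorne_effect(stealth, observed)
    print(f"{label}: +{abs_change:.2f} ({rel_change:.0%} relative increase)")
```

Of course, this difference is only attributable to observation if nothing else changed between the two measurement periods, which is why the studies above went to such lengths to collect comparable covert data.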

Several healthcare studies in Europe, Canada and the United States have also identified the Hawthorne effect. Here is a selection of findings:

  • By contrasting hand sanitation practices recorded discreetly through a real-time location system (RTLS) – which recorded all dispensations from hand sanitizer stations as well as the movement of explicit hygiene auditors – researchers identified the Hawthorne effect in an acute care hospital. The researchers recorded a rate of 3.75 dispensations per hour when the sanitizer station was visible to the auditor, compared to 1.48 dispensations per hour from a station not currently under observation. In contrast, there were only 1.07 dispensations per hour in the week before observer arrival.
  • Another study of hand sanitation practices contrasted antiseptic hand rub (AHR) use among medical personnel in five intensive care units both in the presence and absence of an announced observer of sanitary practice. When the official observer was absent, a trained covert observer – a research nurse who was part of the medical team – also recorded AHR use. The Hawthorne effect had a marked impact on AHR compliance: 29% baseline compliance and 45% compliance in the overt period. (Ruden)
  • The Hawthorne effect was induced in paramedic reporting practices solely by written notification of an upcoming quality-improvement audit, thus indicating that the effect does not always require direct observation or feedback. In this case, following the audit announcement, the rate of incomplete paramedic reports declined by 20%. This finding also suggests, of course, a behavioral mechanism behind many Hawthorne effects – the perceived demand for performance. (Campbell)

Many of these reviewed studies look at small samples and are relatively short-term. So the persistence of the observed effect is an important open question, as is the interaction between observation and the complexity of the behavior studied.

Here is the list of work attempting to quantify the Hawthorne effect that we have found for the health-related field – please add to it if you know of others (in any field) – we’d be very grateful.

Some References

Campbell, JP, VA Maxey, WA Watson. “Hawthorne Effect: Implications for Pre-hospital Research.” Annals of Emergency Medicine, 26.5 (1995): 590-94.

Clasen T, Fabini D, Boisson S, Taneja J, Song J, Aichinger E, Bui A, Dadashi S, Schmidt W, Burt Z, Nelson K. “Making Sanitation Count: Developing and Testing a Device for Assessing Latrine Use in Low-Income Settings.” Environmental Science & Technology 46.6 (2012): 3295-3303.

De Amici, D, C Klersy, F Ramajoli, L Brustia, and P Politi. “Impact of the Hawthorne Effect in a Longitudinal Clinical Study: The Case of Anesthesia.” Controlled Clinical Trials 21 (2000): 103-14.

Eckmanns T, Bessert J, Behnke M, Gastmeier P, Ruden H. “Compliance With Antiseptic Hand Rub Use In Intensive Care Units: The Hawthorne Effect.” Infection Control and Hospital Epidemiology, 27 (2006): 931-934.

Feil, PH, JS Grauer, CC Gadbury-Amyot, K Kula, MD McCunniff. “Intentional use of the Hawthorne effect to improve oral hygiene compliance in orthodontic patients.” Journal of Dental Education, 66 (2002): 1129-1135.

Grol, RP, WH Verstappen, T van der Weijden, G Riet. “Block Design Allowed For Control Of The Hawthorne Effect In A Randomized Controlled Trial Of Test Ordering.” Journal of Clinical Epidemiology, 57 (2004): 1119-1123.

Kohli E, Ptak J, Smith R, et al. “Variability in the Hawthorne effect with regard to hand hygiene practices: independent advantages of overt and covert observers.” PLoS ONE, 8 (2013): 353746.

Leonard, KL. “Is patient satisfaction sensitive to changes in the quality of care? An exploitation of the Hawthorne effect.” Journal of Health Economics, 27 (2008): 444-459.

Leonard, KL, and MC Masatu. “Outpatient Process Quality Evaluation and the Hawthorne Effect.” Social Science & Medicine 63 (2006): 2330-340.

Leonard, KL, and MC Masatu. “Using the Hawthorne Effect to Examine the Gap between a Doctor’s Best Possible Practice and Actual Performance.” Journal of Development Economics 93.2 (2010): 226-34.

McCarney, R, J Warner, S Iliffe, R van Haselen, M Griffin, P Fisher. “The Hawthorne Effect: a Randomised Controlled Trial.” BMC Medical Research Methodology 7 (2007): 30.

McGlynn, EA, R Mangione-Smith, M Elliott, & L McDonald. “An Observational Study of Antibiotic Prescribing Behavior and the Hawthorne Effect.” Health Services Research, 37 (2002), 1603-1623.

Fernald, DH, L Coombs, L DeAlleaume, D West, B Parnes. “An Assessment of the Hawthorne Effect in Practice-based Research.” The Journal of the American Board of Family Medicine, 25 (2012): 83-86.

Srigley, J, C Furness, G. Baker, M Gardam. “Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: A retrospective cohort study.” The International Journal of Healthcare Improvement, 10 (2014): 1-7.

Why evaluations fail: The importance of good monitoring (DCED, 2014)

Adam Kessler and Jim Tanburn, August 2014, Donor Committee for Enterprise Development (DCED). 9 pages. Available as pdf.

Introduction: A development programme without a strong internal monitoring system often cannot be effectively evaluated. The DCED Standard for Results Measurement is a widely-used monitoring framework, and this document discusses how it relates to external evaluations. Why should evaluators be interested in monitoring systems? How can the DCED Standard support evaluations, and vice versa? Who is responsible for what, and what are the expectations of each? This document expands on previous work by the UK Department for International Development (DFID).

This document is relevant for evaluators, those commissioning evaluations, and practitioners in programmes using the DCED Standard and undergoing an evaluation. It provides a basis for dialogue with the evaluation community; the aims of that dialogue are to identify sources of evaluation expertise available to support programmes using the DCED Standard, and to promote the Standard to programmes needing to improve their monitoring system. We would welcome further discussions on the topic, and invite you to contact us at Results@Enterprise-Development.org with any questions or comments.

Contents
1 Introduction
2 Why should evaluators be interested in monitoring?
2.1 Good monitoring is essential for effective management
2.2 Good monitoring is essential for effective evaluation
2.3 Some evaluation methodologies incorporate monitoring
3 What is the DCED Standard for Results Measurement?
4 How does the DCED Standard support evaluation?
4.1 The DCED Standard promotes clear theories of change
4.2 The DCED Standard provides additional data to test the theory of change
5 How do evaluations supplement the DCED Standard?
5.1 Evaluations are independent
5.2 Evaluations have more expertise and larger budgets
5.3 Evaluations can examine broader effects
5.4 Evaluations and the DCED Standard are for different audiences
6 Division of responsibilities between evaluator and programme team
7 Key References and further reading
