M&E Software: A List

Well, the beginnings of a list…

PLEASE NOTE: No guarantee can be given about the accuracy of information provided on the linked websites about the M&E software concerned, and its providers. Please proceed with due caution when downloading any executable programs.

Contents on this page: Stand-alone systems | Online systems | Survey supporting software | Sector-specific tools | Qualitative data analysis | Data mining / Predictive modelling | Program Logic / Theory of Change modeling | Dynamic models | Excel-based tools | Uncategorised and miscellaneous other

If you have any advice or opinions on any of the applications below, please tell us more via this survey.

Stand-alone systems

  • AidProject: M+E for donor-funded aid projects
  • Flamingo and Monitoring Organiser: “In order to implement FLAMINGO, it is crucial to first define the inputs (or resources available), activities, outputs and outcomes”
  • HIV/AIDS Data Capturing and Reporting Platform [Monitoring and Evaluation System]
  • PacPlan: “Results-Based Planning, Monitoring and Evaluation Software and Process Solution”
  • Prome Web: A project management, monitoring and evaluation software. Adapted for aid projects in developing countries
  • Sigmah: “humanitarian project management open source software”

Online systems

  • Activity Info: “an online humanitarian project monitoring tool, which helps humanitarian organizations to collect, manage, map and analyze indicators. ActivityInfo has been developed to simplify reporting and allow for real-time monitoring”
  • AKVO: “a paid-for platform that covers data collection, analysis, visualisation and reporting”
  • DevResults: “web-based project management tool specially designed for the international development community.” Including M&E, mapping, budgeting, checklists, forms, and collaboration facilities.
  • Granity: “Management and reporting software for Not-for-profits. Making transparency easy”
  • IndiKit: Guidance on SMART indicators for relief and development programmes
  • Kashana: An open sourced, web-based Monitoring, Evaluation & Learning (MEL) product for development projects and organisations
  • Kinaki: “Kinaki is a unique and intuitive project design, data collection, analysis, reporting and sharing tool”
  • KI-PROJECTS™ Monitoring and Evaluation Software
  • Kobo Toolbox: “a free, more user-friendly way to deploy Open Data Kit surveys. It was developed with humanitarian purposes in mind, but could be used in various contexts (and not just for surveys). There is an Android data collection app that works offline”
  • Logalto: “Collaborative Web-Based Software for Monitoring and Evaluation of International Development Projects”
  • M&E Online: “Web-based monitoring and evaluation software tool”
  • Monitoring and Evaluation Online: Online Monitoring and Evaluation Software Tool
  • SmartME: “SmartME is a tried and tested comprehensive Fund Management and M&E software platform to manage funds better”
  • SocialWell: “SocialWellNet is a digital platform that empowers organizations to deliver better social and public services. It also helps them take better decisions by automating data collection and analysis, using SocialWellNet web and mobile apps.”
  • Systmapp: “cloud-based software that uses a patent-pending methodology to connect monitoring, planning, and knowledge management for international development organisations”
  • TCS Aid360: “a web-based system enabling digitisation for the social development sector. It is a modular solution that supports Grant Management, Planning, Monitoring & Evaluation”
  • TolaData “is a program management and M&E platform that helps organisations create data-driven impact through the adaptive and timely management of projects”
  • WebMo: Web-based project monitoring for development cooperation

Survey supporting software

  • CommCare: a mobile data collection platform.
  • EthnoCorder is mobile multimedia survey software for your iPhone
  • HarvestYourData: iPad & Android Survey App for Mobile Offline Data Collection
  • KoBoToolbox is a suite of tools for field data collection for use in challenging environments. Free and open source
  • Magpi (formerly EpiSurvey) – provides tools for mobile data collection, messaging and visualisation; lets anyone create an account, design forms, download them to phones, and start collecting data in minutes, for free.
  • Open Data Kit (ODK) is a free and open-source set of tools which help organizations author, field, and manage mobile data collection solutions
  • REDCap, a secure web application for building and managing online surveys and databases… specifically geared to support online or offline data capture for research studies and operations
  • SenseMaker© “links micro-narratives with human sense-making to create advanced decision support, research and monitoring capability in both large and small organisations.”
  • Comparisons

Sector-specific tools

  • mWater for WASH, which explicitly aims to make the data (in this case water quality data) open. Free and open source
  • Miradi: Adaptive Management Software for Conservation projects. https://www.miradi.org/

Qualitative data analysis

  • Dedoose: a cross-platform app for analyzing qualitative and mixed-methods research with text, photos, audio, videos, spreadsheet data and more
  • NVivo: powerful software for qualitative data analysis.
  • HyperRESEARCH “…gives you complete access and control, with keyword coding, mind-mapping tools, theory building and much more”.
  • Impact Mapper: “A new online software tool to track trends in stories and data related to social change”

Data mining / predictive modeling

  • RapidMiner Studio. Free and paid-for versions. Data Access (connect to any data source, any format, at any scale), Data Exploration (quickly discover patterns or data quality issues), Data Blending (create the optimal data set for predictive analysis), Data Cleansing (expertly cleanse data for advanced algorithms), Modeling (efficiently build and deliver better models faster), Validation (confidently and accurately estimate model performance)
  • BigML. Free and paid-for versions. Online service. “Machine learning made easy”
  • EvalC3: Tools for exploring and evaluating complex causal configurations, developed by Rick Davies (Editor of MandE NEWS). Free and available with Skype video support

Program Logic / Theory of Change modeling / Diagramming

  • Changeroo: “Changeroo assists organisations, programs and projects with a social mission to develop and manage high-quality Theories of Change”
  • Coggle: The clear way to share complex information
  • DAGitty: “a browser-based environment for creating, editing, and analyzing causal models (also known as directed acyclic graphs or causal Bayesian networks)”
  • Decision Explorer: a tool for managing “soft” issues – the qualitative information that surrounds complex or uncertain situations.
  • DCED’s Evidence Framework – more a way of using a website than software as such, but definitely an approach that is replicable by others.
  • DoView – Visual outcomes and results planning
  • Draw.io: a free online diagram editor
  • Dylomo: “a free* web-based tool that you can use to build and present program logic models that you can interact with”
  • IdeaTree – Simultaneous Collaboration & Brainstorming Using Mind Maps
  • Kumu: a powerful data visualization platform that helps you organize complex information into interactive relationship maps.
  • Logframer 1.0 “a free project management application for projects based on the logical framework method”
  • LucidChart: Diagrams done right. Diagram and collaborate anytime on any device
  • Netway: a cyberinfrastructure designed to support collaboration on the development of program models and evaluation plans, provide connection to a virtual community of related programs, outcomes, measures and practitioners, and to provide quick access to resources on evaluation planning
  • Omnigraffle: for creating precise, beautiful graphics: website wireframes, electrical systems, family trees and maps of software classes
  • Theory maker: a free web app by Steve Powell for making any kind of causal diagram, i.e. a diagram which uses arrows to say what contributes to what.
  • TOCO – Theory of Change Online. A free version is available.
  • Visual Understanding Environment (VUE): open source ‘mind mapping’ freeware from Tufts University
  • yEd – a free diagram editor that can be used to generate drawings of diagrams. PS: There is now a web-based version of this excellent network drawing application

Dynamic models

  • CCTools: Map and steer complex systems, using Fuzzy Cognitive Maps and others [This site is currently under reconstruction]
  • Loopy: A tool for thinking in systems
  • Mental Modeler: FCM modeling software that helps individuals and communities capture their knowledge in a standardized format that can be used for scenario analysis.
  • FCM Expert: a tool for experimenting with Fuzzy Cognitive Maps
  • FCMapper: the first available FCM analysis tool based on MS Excel and FREE for non-commercial use.
  • FSDM: Fuzzy Systems Dynamics Model Implemented with a Graphical User Interface

Excel-based tools

  • EvalC3: …tools for developing, exploring and evaluating predictive models of expected outcomes, developed by Rick Davies (Editor of MandE NEWS). Free and available with Skype video support

Uncategorised and miscellaneous other

  • OpenRefine (formerly Google Refine): a powerful tool for working with messy data: cleaning it, transforming it from one format into another, and extending it with web services and external data.
  • Overview is an open-source tool originally designed to help journalists find stories in large numbers of documents, by automatically sorting them according to topic and providing a fast visualization and reading interface. It’s also used for qualitative research, social media conversation analysis, legal document review, digital humanities, and more. Overview does at least three things really well.
Other lists
Other other

On evaluation quality standards: A List

 

The beginnings of a list. Please suggest others by using the Comment facility below

Normative statements:

Standards for specific methods (and fields):

Meta-evaluations:

  • “Are Sida Evaluations Good Enough? An Assessment of 34 Evaluation Reports” by Kim Forss, Evert Vedung, Stein Erik Kruse, Agnes Mwaiselage, Anna Nilsdotter, Sida Studies in Evaluation 2008:1. See especially Section 6: Conclusion, 6.1 Revisiting the Quality Questions, 6.2 Why are there Quality Problems with Evaluations?, 6.3 How can the Quality of Evaluations be Improved?, 6.4 Direction of Future Studies. RD Comment: This study has annexes with empirical data on the quality attributes of 34 evaluation reports published in the Sida Evaluations series between 2003 and 2005. It BEGS a follow-up study to see if/how these various quality ratings correlate in any way with the subsequent use of the evaluation reports. Could Sida be persuaded to do something like this?

Ethics focused

  • Australasian Evaluation Society

Journal articles

Checklists:

  • Evaluation checklists prepared by Western Michigan University, covering Evaluation Management, Evaluation Models, Evaluation Values and Criteria, Metaevaluation, Evaluation Capacity Building / Institutionalization, and Checklist Creation

Other lists:

A list of M&E training providers

Update 2014-12-20: The contents of this page have become woefully out of date, and it would be more than a full-time job to keep them current.

My advice now is as follows:

If you are looking for M&E training opportunities visit the MandE NEWS Training Forum, which lists all upcoming training events. There are many training providers listed there, along with links to their websites

Please also consider taking part in the online survey of training needs.

If you are a training provider, please look at the cumulative results to date of that survey.

I have now deleted all the training providers that were previously shown below.

Value for money: A list

Hopefully, the start of a short but useful bibliography, listed in chronological order.

Please suggest additional documents by using the Comment facility below. If you have ideas on how Value for Money can be clearly defined and usefully measured, please also use the Comment facility below.

For the Editor’s own suggestion, go to the bottom of this page

2015

2014

2013

2012

2011

  • ICAI’s Approach to Effectiveness and Value for Money, November 2011. See also Rick Davies’ comments on the same
  • Value for Money and international development: Deconstructing some myths to promote more constructive discussion. OECD Consultation Draft. October 2011
  • What does ‘value for money’ really mean? CAFOD, October 2011
  • Value for Money: Guideline, NZAID, updated July 2011
  • DFID’s Approach to Value for Money (VfM), July 2011
  • DFID Briefing Note: Indicators and VFM in Governance Programming, July 2011. INTRODUCTION: This note provides advice to DFID staff on: i. governance indicator best practice, and ii. measuring the Value for Money of governance programmes. This note is for use primarily by DFID governance advisers, as well as other DFID staff designing programmes with governance elements. The note provides a framework for consideration in Business Case design that relates to governance activity. On Value for Money (VFM) in particular, this guidance is only intended as ‘interim’ whilst further research is undertaken. During 2011-2012, DFID will work to determine best practice and establish agreed approaches and mechanisms. This guidance will therefore be updated accordingly, subject to research findings as they are made available. This note was drawn up by DFID staff. It builds on two research reports by ITAD, submitted in December 2010 and January 2011 respectively, as well as DFID’s internal Business Case guidance. There are two main sections: Section 1: Governance Indicators and Section 2: Value for Money in Governance Programming. The note ends with 10 Top Tips on Business Case preparation.
  • DFID is developing “Guidance for DFID country offices on maximising VfM in cash transfer programmes”, July 2011. Objective: “To provide guidance to DFID country offices on measuring value for money in cash transfer programmes through the rigorous analysis of costs and benefits, as far as possible, at the design stage and through programme implementation and completion. This project is driven by DFID’s expansion of support to cash transfer programmes, its strong emphasis on ensuring programmes are delivering value for money, and strong country office demand for specific advice and guidance” (ToRs)
  • Value for Money: Current Approaches and Evolving Debates. Antinoja Emmi, Eskiocak Ozlem, Kjennerud Maja, Rozenkopf Ilan, Schatz Florian, LSE, London, May 2011. 43 pages. “NGOs have increasingly been asked by donors to demonstrate their Value for Money (VfM). This report analyses this demand across a number of dimensions and intends to lay out the interpretation of different stakeholders. After contextualising the debate internationally and nationally, a conceptual discussion of possible ways of defining and measuring VfM is conducted, followed by a technical analysis of different approaches and measurement techniques adopted by stakeholders. Finally, opportunities and caveats of measuring VfM are discussed. The report draws heavily on information gained through a total of seventeen interviews with representatives of NGOs, consultancies, think tanks and academic institutions.”
  • Independent Commission for Aid Impact – Work Plan, May 2011: “We have not yet agreed our own definition of terms such as “value for money” and “aid effectiveness”. These are complex issues which are currently under much debate. In the case of value for money we believe that this should include long-term impact and effectiveness. We intend to commission our contractor to help us in our consideration of these matters.”
  • The Guardian, Madeleine Bunting, 11th April 2011: “Value for money is not compatible with increasing aid to ‘fragile states’. The two big ideas from the UK’s Department for International Development are destined for collision”
  • NAO report on DFID Financial Management, April 2011. See the concluding section of the Executive Summary, titled Conclusion on value for money:
    • “We recognise that the Department has been improving its core financial management and has also been strengthening its focus on value for money at all levels of the organisation, including through a step change in its approach to the strategic allocation of resources based on expected results. Important building blocks have been put in place, but key gaps in financial management maturity remain. The changes the Department has introduced to-date are positive, and provide a platform to address the challenges that will come with its increased spending.”
    • “At present, however, the Department’s financial management is not mature. The Department’s forecasting remains inaccurate and its risk management is not yet fully embedded. Weaknesses in the measurement of value for money at project level, variability in the quality and coverage of data, and lack of integration in core systems, mean that the Department cannot assess important aspects of value for money of the aid it has delivered, at an aggregated level. The Department now needs to develop a coherent single strategy to address the weaknesses identified and the key risks to meeting its objectives.”
  • DFID’s March 2011 Multilateral Aid Review “was commissioned to assess the value for money for UK aid of funding through multilateral organisations”. “All were assessed against the same set of criteria, interpreted flexibly to fit with their different circumstances, but always grounded in the best available evidence. Together the criteria capture the value for money for UK aid of the whole of each organisation. The methodology was independently validated and quality assured by two of the UK’s leading development experts. The assessment framework included criteria which relate directly to the focus and impact of an organisation on the UK’s development and humanitarian objectives – such as whether or not they are playing a critical role in line with their mandate, what this means in terms of results achieved on the ground, their focus on girls and women, their ability to work in fragile states, their attention to climate change and environmental sustainability, and their focus on poor countries. These criteria were grouped together into an index called “Contribution to UK development objectives”. The framework also included criteria which relate to the organisations’ behaviours and values that will drive the very best performance – such as transparency, whether or not cost and value consciousness and ambition for results are driving forces in the organisation, whether there are sound management and accountability systems, whether the organisations work well in partnership with others and whether or not financial resource management systems and instruments help to maximise impact. These were grouped together into an index called “Organisational strengths”. Value for money for UK aid was assessed on the basis of performance against both indices. So, for example, organisations with a strong overall performance against both indices were judged to offer very good value for money for UK aid, while those with a weak or unsatisfactory performance against both indices were deemed to offer poor value for money.”
    • [RD comment] In the methodology chapter the authors explain/claim that this approach is based on a 3E view that seeks to give attention to the whole “value for money chain” (née causal chain), from inputs to impacts (which is discussed below). Reading the rest of that chapter, I am not convinced; I think the connection is tenuous, and what exists here is a new interpretation of Value for Money that will not be widely used. That said, I don’t envy the task the authors of this report were faced with.
    • [RD comment] The Bilateral Aid Review makes copious references to Value for Money, but there is no substantive discussion of what it means anywhere in the review. Annex D includes a proposal format with a section for providing Value for Money information in 200 words. This includes the following fields, which are presumably explained elsewhere: qualitative judgement of VfM, VfM metrics (including cost-benefit measures), unit costs, scalability, comparators, overall VfM RAG rating: red/amber/green.
  • Aid effectiveness and value for money aid: complementary or divergent agendas as we head towards HLF-4 (March 2011). This ODI, ActionAid and UK Aid Network public event was called “to reflect on approaches dominating the debate in advance of the OECD’s 4th High Level Forum on Aid Effectiveness (HLF-4); explore the degree to which they represent complementary or divergent agendas; and discuss how they might combine to help ensure that HLF-4 is a turning point in the future impact of aid.” The presentations of three of the four speakers are available on this site. Unfortunately DFID’s presentation, by Liz Ditchburn (Director, Value for Money, DFID), is not available.
  • BOND Value for Money event (3 February 2011). “Bond hosted a half day workshop to explore this issue in more depth. This was an opportunity to take stock of the debates on Value for Money in the sector, to hear from organisations that have trialled approaches to Value for Money and to learn more about DFID’s interpretation of Value for Money from both technical and policy perspectives.” Presentations were made by (and are available from): Oxfam, VSO, WaterAid, the HIV/AIDS Alliance, and DFID (Jo Abbot, Deputy Head, Civil Society Department). There was also a prior BOND event in January 2011 on Value for Money, and presentations are also available, including an undated National Audit Office analytical framework for assessing Value for Money
    • [RD Comment] The DFID presentation on “Value for Money and Civil Society” is notable in the ways that it seeks to discourage NGOs from over-investing in efforts to measure Value for Money, and in its emphasis on the continuity of DFID’s approach to assessing CSO proposals. The explanation of Value for Money is brief, captured in two statements: “optimal use of resources to get desired outcomes” and “maximum benefit for the resources requested”. To me this reads as efficiency and cost-effectiveness.
  • The Independent Commission for Aid Impact (ICAI)’s January 2011 online consultation contrasts Value for Money reviews with Evaluations, Reviews and Investigations, as follows:
    • Value for money reviews: judgements on whether value for money has been secured in the area under examination. Value for money reviews will focus on the use of resources for development interventions.
    • Evaluations: the systematic and objective assessment of an on-going or complete development intervention, its design, implementation and results. Evaluations will focus on the outcome of development interventions.
    • Reviews: assessments of the performance of an intervention, periodically or on an ad hoc basis. Reviews tend to look at operational aspects and focus on the effectiveness of the processes used for development interventions.
    • Investigations: a formal inquiry focusing on issues around fraud and corruption.
      • [RD comment] The ICAI seems to take a narrower view than the National Audit Office, focusing on economy and efficiency and leaving out effectiveness – which within its perspective would be covered by evaluations.

 

2010

  • Measuring the Impact and Value for Money of Governance & Conflict Programmes. Final Report, December 2010, by Chris Barnett, Julian Barr, Angela Christie, Belinda Duff, and Shaun Hext. “The specific objective stated for our work on value for money (VFM) in the Terms of Reference was: “To set out how value for money can best be measured in governance and conflict programming, and whether the suggested indicators have a role in this or not”. This objective was taken to involve three core tasks: first, developing a value for money approach that applies to both the full spectrum of governance programmes, and those programmes undertaken in conflict-affected and failed or failing states; second, that the role of a set of suggested indicators should be explored and examined for their utility in this approach, and, further, that existing value for money frameworks (such as the National Audit Office’s use of the 3Es of ‘economy, efficiency and effectiveness’) should be incorporated, as outlined in the Terms of Reference.”
  • Value for Money: How are other donors approaching ‘value for money’ in their aid programming? Question and answer on the Governance and Social Development Resource Centre Help Desk, 17 September 2010.
  • Value for Money (VfM) in International Development NEF Consulting Discussion Paper, September 2010. Some selective quotes: “While the HM Treasury Guidance provides principles for VfM assessments, there is currently limited guidance on how to operationalise these in the international development sector or public sector more generally. This has led to confusion about how VfM assessments should be carried out and seen the proliferation of a number of different approaches.” …”The HM Treasury guidance should inform the VfM framework of any publicly-funded NGO in the development sector. The dark blue arrow in Figure 1 shows the key relationship that needs to be assessed to determine VfM. In short, this defines VfM as: VfM = value of positive + negative outcomes / investment (or cost)”
  • [RD Comment:] Well now, having that formula makes it so much easier (not): all we have to do is find the top values, add them up, then divide by the bottom value :-( (A toy illustration of this arithmetic appears at the end of this 2010 list.)
  • What is Value for Money? (July 2010) by the Improvement Network (Audit Commission, Chartered Institute of Public Finance and Accountancy (CIPFA), Improvement and Development Agency (IDeA), Leadership Centre for Local Government, NHS Institute for Innovation and Improvement).  “VfM is about achieving the right local balance between economy, efficiency and effectiveness, the 3Es – spending less, spending well and spending wisely” These three attributes are each related to different stages of aid delivery, from inputs to outcomes, via this diagram.
  • [RD comment]: Reading this useful page raises two interesting questions. Firstly, how does this framework relate to the OECD/DAC evaluation criteria? Is it displacing them, as far as DFID is concerned? It appears so, given its appearance in the Terms of Reference for the contractors who will do the evaluation work for the new Independent Commission for Aid Impact. Ironically, the Improvement Network makes the following comments about the third E (effectiveness), which suggest that the DAC criteria may be re-emerging within this new framework: “Outcomes should be equitable across communities, so effectiveness measures should include aspects of equity, as well as quality. Sustainability is also an increasingly important aspect of effectiveness.” The second interesting question is how Value for Money is measured in aggregate, taking into account all three Es. Part of the challenge is with effectiveness, where it is noted that “effectiveness is a measure of the impact that has been achieved, which can be either quantitative or qualitative.” Then there is the notion that Value for Money is about a “balance” of the three Es: “VfM is high when there is an optimum balance between all three elements – when costs are relatively low, productivity is high and successful outcomes have been achieved.” On the route to that heaven there are multiple possible combinations of states of economy (+,-), efficiency (+,-) and effectiveness (+,-). There is no one desired route or ranking. Because of these difficulties Sod’s Law will probably apply, and attention will focus on what is easiest to measure, i.e. economy or, at the most, efficiency. This approach seems to be evident in earlier government statements about DFID: “International Development Minister Gareth Thomas yesterday called for a push on value for money in the UN system with a target of 25% efficiency savings.” … “The UK is holding to its aid commitments of 0.7% of GNI. But for the past five years we have been expected to cut 5% from our administration or staffing costs across Government. 5% – year on year”
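A footnote on the NEF formula quoted above: taken literally, it is just a benefit-cost ratio. The minimal sketch below (in Python, with entirely invented figures) shows how trivial the arithmetic is once monetised values exist; as the comment above implies, the genuinely hard part is producing defensible monetised outcome values in the first place.

    # Toy illustration of the formula quoted in the NEF discussion paper:
    #   VfM = (value of positive + negative outcomes) / investment (or cost)
    # All figures are invented for illustration only.
    positive_outcomes = 120_000   # hypothetical monetised value of benefits
    negative_outcomes = -15_000   # hypothetical monetised value of harms
    investment = 80_000           # hypothetical total cost of the intervention

    vfm = (positive_outcomes + negative_outcomes) / investment
    print(f"VfM ratio: {vfm:.2f}")  # prints: VfM ratio: 1.31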

 

2007

 

2003

 

The Editor’s suggestion

1. Don’t seek to create an absolute measure of the Value for Money of a single activity/project/program/intervention.

2. Instead, create a relative measure of the VfM found within a portfolio of activities, by using a rank correlation. [This measure can then be used to compare VfM across different types of portfolios.]

  • 1. Rank the entities (activities/projects…) by the cost of their inputs, and
    • Be transparent about which costs were included/excluded (e.g. partners’ own costs, other donor contributions, etc.)
  • 2. Rank the same set of entities by their perceived effectiveness or impact (depending on the time span of interest)
    • Ideally this ranking would be done through a participatory ranking process (see Refs below), and information would be available on the stakeholders who were involved
    • Where multiple stakeholder groups were consulted, any aggregation of their rankings would be done using transparent weighting values and information would also be available on the Standard Deviation of the rankings given to the different entities. There is likely to be more agreement across stakeholders on some rankings than others.
    • Supplementary information would be available detailing how stakeholders explained their ranking. This is best elicited through pair comparisons of adjacent sets of ranked entities.
      • That explanation is likely to include a mix of:
        • some kinds of impacts being more valued by the stakeholders than others, and
        • for a given type of impact there being evidence of more rather than less of that kind of impact, and
        • where a given impact is on the same scale, there being better evidence of that impact
  • 3. Calculate the rank correlation between the two sets of rankings (a code sketch of this calculation appears after the notes below). The results will range between these two extremities:
    • A high positive correlation (e.g. +0.90): here the highest impact is associated with the highest cost ranking, and the lowest impact is associated with the lowest cost ranking. Results are proportionate to investments. This would be the preferred finding, compared to
    • A high negative correlation (e.g. -0.90): here the highest impact is associated with the lowest cost ranking, and the lowest impact is associated with the highest cost ranking. Here the more you increase your investment, the less you gain. This is the worst possible outcome.
    • In between will be correlations closer to zero, where there is no evident relationship between cost and impact ranking.
  • 4. Opportunities for improvement would be found by doing case studies of “outliers”, found when the two rankings are plotted against each other in a graph. Specifically:
    • Positive cases, whose rank position on cost is conspicuously lower than their rank position on impact.
    • Negative cases, whose rank position on impact is conspicuously lower than their rank position on cost.

PS: It would be important to disclose the number of entities that have been ranked. The more entities being ranked, the more precise the rank correlation will be. However, the more entities there are to rank, the harder the task will be for participants, and the more likely they are to use tied ranks. A minimum of seven rankable entities would seem desirable.

For more on participatory ranking methods see:

PS: There is a UNISTAT plugin for Excel that will produce rank correlations, plus much more.
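For those who would rather script steps 3 and 4 than use an Excel plugin, here is a minimal sketch in Python using scipy.stats.spearmanr. The project names and rankings are invented for illustration only; rank 1 means lowest cost or lowest perceived impact.

    # Minimal sketch of the portfolio-level VfM measure described above.
    # All project names and rankings are invented for illustration.
    from scipy.stats import spearmanr

    projects    = ["A", "B", "C", "D", "E", "F", "G"]
    cost_rank   = [1, 2, 3, 4, 5, 6, 7]   # 1 = lowest cost of inputs
    impact_rank = [2, 1, 4, 3, 7, 5, 6]   # 1 = lowest perceived impact

    # Step 3: Spearman rank correlation. +1 means results are proportionate
    # to investments; -1 means the more invested, the less gained.
    rho, p_value = spearmanr(cost_rank, impact_rank)
    print(f"Portfolio VfM rank correlation: {rho:.2f} (p = {p_value:.3f})")

    # Step 4: candidate "outliers" for case studies: the entity whose impact
    # rank most exceeds its cost rank (positive case), and the entity whose
    # impact rank falls furthest below its cost rank (negative case).
    gap = {p: i - c for p, c, i in zip(projects, cost_rank, impact_rank)}
    print("Positive case:", max(gap, key=gap.get))
    print("Negative case:", min(gap, key=gap.get))

In line with the PS above, note that with only seven entities the correlation will be coarse and its statistical significance weak; the sketch illustrates the shape of the calculation, not a recommended portfolio size.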

Updated MSC bibliography

PLEASE NOTE: The bibliography below has now been superseded by a more comprehensive bibliography here. That version includes pdf copies of many of the papers plus a search facility, and will continue to be updated.

This (now older) page is intended to provide an update of the bibliography in the 2005 Most Significant Change technique (MSC) Users Guide.

Please feel free to suggest additions to this list, through the Comment facility below, or by emailing the editor (Rick Davies)

Papers

 

Powerpoints

  • Seven sets of slides, used for two-day MSC training in Delhi, 2008, by Rick Davies. Available on request, on condition of willingness to share any adaptations made.

YouTube video

Other

 

Guidance on Terms of Reference for an Evaluation: A List

This is the beginning of a new page that will list various sources of guidance on the development of Terms of Reference for an evaluation.

If you have suggestions for any additions (or edits) to this list please use the Comment function below.

Please also see the hundreds of examples of actual ToRs (and related docs) in the MandE NEWS Jobs Forum

PS: Jim Rugh has advised me (5 June 2010) that “two colleagues at the Evaluation Center at Western Michigan University are undertaking an extensive review of RFPs / ToRs they’ve seen posted on various listservs; they intend to publish a synthesis, critique and recommendations for criteria to make them more realistic and appropriate.”

Card sorting methods: A List

Card / pile sorting is a simple and useful means of eliciting and aggregating qualitative data, in a participatory manner. In anthropology, it is described as pile sorting, and is used for domain analysis, in the field of cognitive anthropology. In website design it is known as card sorting.

Anthropology
Website design
Software
  • OptimalSort: online card sorting software
  • SynCapsV2: for the analysis of the results of physical card sorts; can be downloaded and used on a desktop/laptop
  • UsabilitiTest: “Our card sorting tool supports closed, open and hybrid testing, and our prioritization matrix is the only such tool currently online.”
  • uzCardSort (MozDev.org): an open source, MPL-licensed, Mozilla-based tool for conducting and analyzing card sorts.
  • XSort: a free card sorting application for Mac, aimed at user experience professionals and social scientists.

 

Identifying and documenting “Lessons Learned”: A list of references

Editor’s note:

This is a very provisional list of documents on the subject of Lessons Learned, what they are, and how to identify and document them. If you have other documents that you think should be included in this list, please make a comment below.

Note: This is not a list of references on the wider topic of learning, or on the contents of the Lessons Learned.

2014

  • Evaluation Lessons Learned and Emerging Good Practices. ILO Guidance Note No. 3, April 2014. “The purpose of this guidance note is to provide background on definitions and usages of lessons learned applied by the ILO Evaluation Unit. Intended users of this guidance note are evaluation managers and any staff in project design or technically backstopping the evaluation process. There is separate guidance provided for consultants on how to identify, formulate and present these findings in reports.”

2012

2011

  • The NATO Lessons Learned Handbook. Second Edition, September 2011. “Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.” – Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy

2009

2007

  • Lessons Learned from Evaluation. M. J. Spilsbury, C. Perch, S. Norgbey, G. Rauniyar and C. Battaglino. Special Study Paper Number 2, A Platform for Sharing Knowledge. United Nations Environment Programme, January 2007. Lessons presented in evaluation reports are often of highly variable quality and limited utility. They are “often platitudes borne of a felt need to demonstrate engagement in the ‘knowledge society’ or simply to satisfy the specified evaluation requirements”. Even where high quality lessons are developed, they are seldom communicated effectively to their intended audiences. In order to enhance the quality of lessons, improve their utilisation, and aid their dissemination and communication, a Framework of Lessons from evaluation is presented in this paper. The framework consists of common problems, issues and/or constraints to which evaluation lessons relate, developed using ‘mind-mapping’ software and ‘problem tree’ techniques. Evaluation lessons were systematically classified within the resulting Framework of Lessons. The proposed framework of evaluation lessons is best used within the context of interactive ‘face-to-face’ communication with project/programme managers to ensure that evaluation lessons truly become ‘lessons learned’.

2005

2004

  • Criteria for Lessons Learned (LL). A presentation for the 4th Annual CMMI Technology Conference and User Group, by Thomas R. Cowles, Raytheon Space and Airborne Systems, November 16, 2004

2001

  • M. Q. Patton (2001) Evaluation, Knowledge Management, Best Practices, and High Quality Lessons Learned. American Journal of Evaluation, 22(3), 2001. Abstract: Discusses lessons to be learned from evaluation and best practices in evaluation, and some ways to bring increased rigor to evaluators’ use of those terms. Suggests that “best” practices is a term to avoid, with “better” or “effective” being more realistic, and calls for more specificity when discussing lessons to be derived. (Full text not yet found online.)

1997

If you know of other relevant documents and web pages, please tell us, by using the Comment facility below

MandE NEWS email List

If you want to talk with others about monitoring and evaluation, then join the MandE NEWS email list (< click this link). It has 2200+ members worldwide, and is growing every day. You can access monthly summaries of the 2007 and 2008 postings here.

A list of M&E email lists

Please Note:

  • If you want to add a new M&E email list, please use the Comment facility at the bottom of this page.
  • If you want to join any of these email lists, click on the link for that list (don’t use the Comment facility)

General purpose lists

  • MandENEWS
    2600+ Members, Archives: Membership required. The MandE NEWS mailing list is part of the MandE NEWS website at www.mande.co.uk. Visitors to the website are invited to use the mailing list to exchange information with each other about monitoring and evaluation issues, especially as they relate to international development aid. The Editor of MandE NEWS will also use the mailing list to inform list members about significant updates to the MandE NEWS website. The MandE NEWS mailing list is managed by the Editor, Rick Davies (contact email …)
  • Eval-Net [link not working] 858 members as of 2006 (please help update this number) Knowledge sharing and learning continue to be a top corporate priority for UNDP. The purpose of the Evaluation Network is to strengthen UNDP’s evaluation knowledge base by disseminating good practices and lessons learned on monitoring and evaluation to a broad constituency and to foster results-based performance at both country and corporate levels. It will also help build UNDP staff capacity in measuring and assessing results. This network specifically aims to: Share and exchange experiences and knowledge and lessons distilled from evaluative work relating to programmes and projects; Mainstream results orientation within the work of UNDP’s six practice areas; Provide a forum for UNDP staff to share and deepen their knowledge of monitoring and evaluation practices and methodologies. This network is open to all UNDP staff members interested in and working on measuring and assessing results and who want to contribute and build their capacity in this area. (posted 16/12/06)
  • XCeval
    880+ Members, Archives: Membership required XCeval is a listserv for persons interested in issues associated with international and cross-cultural evaluation. Initially set up for the International and Cross-Cultural Topical Interest Group of the American Evaluation Association. Many of the postings (average 34/month) are announcements of short-term consultancies or full-time positions in international M&E-related jobs. Also exchanges of ideas of current interest to persons involved in the evaluation of international development. (updated 15/12/06)
  • American Evaluation Association Electronic Lists
    • EVALTALK Listserv 3100+ members An open, unmoderated list for general discussion of evaluation and associated issues sponsored by the American Evaluation Association. To subscribe, send mail to LISTSERV@BAMA.UA.EDU with the command (paste it!): SUBSCRIBE EVALTALK
    • EVALINFO Sponsored by American Evaluation Association (AEA) as the official electronic network for distribution of information to organization members and interested parties. Anyone can subscribe and receive mailings but non-AEA members cannot post to the list. To subscribe, send an e-mail to LISTSERVE@UA1VM.UA.EDU with the message: SUBSCRIBE EVALINFO <Firstname> <Lastname>

Email lists focusing on specific evaluation themes, issues, or methods

  • AIMEnet Listserv 1000+ members, Archives, Membership required In 2004, MEASURE Evaluation teamed with the U.S. President’s Emergency Plan for AIDS Relief, USAID, CDC, UNAIDS, the World Health Organization, The Global Fund to Fight AIDS, Tuberculosis and Malaria, the World Bank Group, and UNICEF to create the HIV/AIDS Monitoring and Evaluation Network (AIMEnet) listserv. AIMEnet was initially created so we could stay in touch with participants from several Expanded HIV/AIDS Response M&E workshops. Today, the AIMEnet listserv has been broadened to include anyone interested in sharing technical experiences, tools and information in monitoring and evaluation (M&E) of HIV/AIDS programs around the world.
  • “Most Significant Changes” technique. 1100+ members. Archives, Membership required. This is moderated by Rick Davies and Jessica Dart (Melbourne). This egroup was formed to promote discussion about the use of an innovative method of monitoring, called the “Most Significant Changes” approach. This is a non-indicator based approach to monitoring, making use of a diversity of narrative accounts of change which are subject to an iterated, open and accountable selection process. It has already been applied in developed and less developed economies, in participatory rural development projects, agricultural extension projects, educational settings and mainstream human services delivery. Through discussion we hope to promote the wider use of the method, and further innovation and development in the method itself. Most Significant Changes monitoring is different from common monitoring practice in at least four respects: (a) the focus is on the unexpected, (b) information about those events is documented using text rather than numbers, (c) analysis of that information is through the use of explicit value judgements, (d) aggregation of information and analysis takes place through a structured social process. This egroup will act both as a repository of information about people’s experiences with the MSC method to date, and as a nursery for ideas of how to take the method further, into new environments, where there are new opportunities and constraints.
  • Outcome Mapping Learning Community 700+ members globally, as of 2008. Public. Outcome Mapping is an innovative approach to project and programme planning, monitoring and evaluation with a strong focus on participatory learning. The major innovation is the emphasis on the behaviour change of key actors with whom the programme has an influence, rather than focussing on changes in state that may or may not be attributed to the programme. The community was set up to support users of the methodology and those interested in the concepts behind it. Come and discuss the theory, get advice on applying OM in your case and meet others interested in this approach to P,M&E. See the community brochure for more information or contact Simon Hearn.

  • Systems in Evaluation Discussion List
    290+ members, Archives. EVAL-SYS@LISTS.EVALUATION.WMICH.EDU

  • Theory-Based_Evaluation
    390+ Members, Archives: Public. Welcome to the Theory-Based Evaluation discussion list! In a context where evaluation is challenged by attribution, complex systems and the need for evidence-based policies, theory-based evaluation is seen as an effective response to these challenges. The purpose of this list is to provide a forum where practitioners and scholars can exchange and share ideas, lessons and methods associated with theory-based evaluation. Hence, this discussion list is dedicated to the evaluation of Institutional …
  • Pelican Initiative: Platform for Evidence-based Learning & Communications for Social Change 700+ Members, Archives. Membership required. This platform seeks to bring together development practitioners from different disciplines, specialists and policy makers to explore this question, share experiences, and to push the agenda further on three themes: * Evidence and learning for policy change; * Learning in organisations and among partners; * Society-wide learning among a multitude of stakeholders.
  • LEAP IMPACT 160+ members, Archives. Membership required Leap Impact aims to improve the institutional performance of monitoring and evaluation practice related to information services, information products and information projects. It is a community of practice open to all individuals/organisations interested in the evaluation of information. LEAP IMPACT is a joint initiative of CTA, IICD, Bellanet, and KIT. It is facilitated by Sarah Cummings (KIT ILS), Neil Pakenham-Walsh (HIF-net-at-WHO) and Shaddy Shadrach (Oneworld South Asia).
  • NetworkEvaluation
    280+ Members, Archives: Membership required. The Network Evaluation mailing list is an extension of the Networks section of Monitoring and Evaluation NEWS at www.mande.co.uk. The focus of the Network Evaluation mailing list is on the exchange of information about: methodologies for, and experiences of, the evaluation of networks, including networks of individuals, groups and organisations, both face-to-face and electronically mediated networks; the use of social network analysis in international development aid projects; in planning, …
  • PARTICIPATORY MONITORING AND LEARNING
    60+ Members, Archives: Membership required.This group on Participatory Monitoring and Learning (PM&L) has been created to facilitate interaction amongst a group of researchers, practitioners and others interested in the topic of participatory approaches to monitoring, evaluation and learning.
  • ODAfrica
    50+ Members, Archives: Public. Support group for OD Practitioners working for and in Africa. Initiative of OD Practitioners from Tanzania, Uganda, Ghana, South Africa, Angola, Zimbabwe and Zambia who attended a two-year OD Practitioners Formation Programme in 2004/2005.
  • Evaluation Feedback 30+ members, Archives. Membership required. This was moderated by Catherine Cameron, author of Evaluation Feedback for Effective Learning and Accountability.
  • EGAD List: Program evaluation, statistics and methodology list 170+ members. To send a message to all the people currently subscribed to the list, just send mail to EGAD@LISTSERV.ARIZONA.EDU. This is called “sending mail to the list”, because you send mail to a single address and LISTSERV makes copies for all the people who have subscribed. This address (egad@listserv.arizona.edu) is also called the “list address”. You must never try to send any command to that address, as it would be distributed to all the people who have subscribed. All commands must be sent to the “LISTSERV address”, listserv@listserv.arizona.edu.
  • Arlist: Action research mailing list. Arlist-L is a medium-volume, multidisciplinary electronic mailing list. It is a moderated forum for the discussion of the theory and practice of action research and related methods. Bibliography of over 50 references on meta-evaluation. References include discussions of the technical competence of individual evaluations, critical analyses of evaluations of the impact of evaluations on the less powerful groups, and managerial meta-evaluations on the perceived credibility and utility of the evaluation. To subscribe, send an e-mail (no subject) to request@psy.uq.oz.au with the message: SUBSCRIBE ARLIST <Firstname> <Lastname> Or, to subscribe to arlist-L point your browser at http://lists.scu.edu.au/mailman/listinfo/arlist-l

  • EDIAIS Forum (Enterprise Development Impact Assessment Information Service) 160+ members. Joining the list: email info@enterprise-impact.org.uk. You will then receive an e-mail asking you to confirm your subscription. Once you are a member, you will receive all messages sent to the list. To send a message to the list, mail it to ENT-IMP@enterprise-impact.org.uk – use either Reply to respond to the last contributor only, or Reply All and your message will automatically be mailed to all list members.

Country specific M&E email lists

  • PREVAL – The Programme for Strengthening the Regional Capacity for Monitoring and Evaluation of IFAD’s Rural Poverty-Alleviation Projects in Latin America and the Caribbean owner-preval@mail.rimisp.org 1,400+ members
  • AfrEA
    180+ Members, Archives: Public. Information and networking tool of the African Evaluation Association (AfrEA). In conjunction with the national associations and committed donors, AfrEA has helped develop the concept of an African evaluation community. This listserv aims to build on this concept, to broaden this community, by further promoting the sharing of information, resources and opportunities. The AfrEA Community listserv serves as a moderated forum for a wide range of stakeholders, from evaluators who are actively …
    • Late note: This is a new list recently started at Yahoo! Groups to replace the old list at Topica. Moving the members from the old to the new list is a slow process. However, the old list is still active and has 460 subscribers. (message from Lise Kriel, 30/06/6)
  • indo-monev 440+ Members, Archives: Membership required. This is a mailing list to build a network of Indonesian people anywhere in the world who are interested in, dedicated to, and professionally engaged in work on monitoring and evaluation and other related development issues, as well as development aid work, particularly in Indonesia. This network aims at more exchange of information, more knowledge building and more awareness of development monitoring and evaluation issues. Please join.
  • IAEVAL: 340+ members, Archives: Membership required. The purpose of this listserv is to enhance communication among members of the US-based International NGO (INGO) community about program design, monitoring, evaluation and effectiveness. The target participants of IAEVAL are those of us who are directly or indirectly responsible for INGO M&E. We hope that this will serve to enhance the communication, shared learning and collaboration among us as persons responsible for evaluation in the US-based INGO community.
  • Relac: 480+ members, Archives: Membership required. This is the discussion group of the evaluation network of Latin America and the Caribbean.
  • REMAPP 150+ members, Archives: Membership required. REMAPP is a [UK-based] group of networking professionals concerned with planning, appraisal, monitoring, evaluation, research and policy issues in aid agencies.
  • MandENigeria
    90+ Members, Archives: Moderators only. This listserv is for interested individuals and institutions to share knowledge, opportunities, experience and other resources in M&E. It is also an opportunity to access professional consultants in Monitoring and Evaluation in Nigeria and Africa. It is an informal medium to support capacity building, strengthening and dissemination of monitoring and evaluation information in Nigeria under a Network of Monitoring and Evaluation. Evaluators are advised and encouraged to join and participate …
  • IndiaEvalGroup
    30+ Members, Archives: Membership required. This discussion group consists of evaluators from India or evaluators working on Indian projects. The potential benefits of forming and participating in such a group are: 1. Fellowship with others working in a similar area 3. Encouraging sharing of learning across content and context areas
  • MONEV_NGO
    20+ Members, Archives: Membership required. Established in Jakarta, Indonesia, in 2004. It was started by a group of activists concerned about the Monitoring and Evaluation skills that need to be developed by NGOs in Indonesia in general. This is an open forum, so please participate in sharing and discussing lessons learnt and experiences in Monitoring and Evaluation.
  • MandE Ghana 30+ Members, Archives: Membership required This email list has been established for people who have an interest in monitoring and evaluation as applied in Ghana. It is open to people living in Ghana and those elsewhere who have a connection with Ghana. Its aim is to: (a) encourage mutual learning between members, through exchange of questions and answers; (b) make opportunities quickly available to members, concerning M&E related training events, conferences, workshops and employment vacancies; (c) enable members to make contacts with other members with related M&E interests.
  • MandEPilipinas 9 Members, Archives: Membership required. This discussion group is meant for Monitoring and Evaluation professionals in the Philippines. It is a venue to network, exchange ideas and discuss new developments about the field with M&E practitioners in the country to promote mutual learning and intellectual growth.
  • EgDEvalNet < 5 Members, Archives: Membership required. This discussion group was established to discuss the evaluation of development activities in Egypt. This includes: improving development evaluation activities; exchanging experience between evaluation practitioners; providing feedback for improving development planning; discussing the establishment of an Egyptian Development & Evaluation Association; defining standards and guidelines for evaluation practice applicable to the Egyptian environment; developing development evaluation criteria and tools …

How to set up an email list

  • Use Yahoo Groups, as used by many of the email lists shown above.
    • Go to http://groups.yahoo.com/
    • Sign up to get a Yahoo ID (you need to give yourself a username and password, once only).
    • Look for Create Your Own Group
      • Click on Start a Group Now, then follow the instructions
  • Or, use Dgroups
    • Go to http://www.dgroups.org/
      • Dgroups currently supports 1818 groups, containing 60690 members.
    • See if you can work out how to join and set up a group. It is not easy.