Value for money: A list

Hopefully, the start of a short but useful bibliography, listed in reverse chronological order.

Please suggest additional documents using the Comment facility below. If you have ideas on how Value for Money can be clearly defined and usefully measured, please share them there as well.

For the Editor’s own suggestion, go to the bottom of this page.

2015

2014

2013

2012

2011

  • ICAI’s Approach to Effectiveness and Value for Money, November 2011. See also Rick Davies’ comments on the same.
  • Value for Money and international development: Deconstructing some myths to promote more constructive discussion. OECD Consultation Draft. October 2011
  • What does ‘value for money’ really mean? CAFOD, October 2011
  • Value for Money: Guideline, NZAID, updated July 2011
  • DFID’s Approach to Value for Money (VfM), July 2011
  • DFID Briefing Note: Indicators and VFM in Governance Programming, July 2011. INTRODUCTION: This note provides advice to DFID staff on: i. governance indicator best practice, and ii. measuring the Value for Money of governance programmes. This note is for use primarily by DFID governance advisers, as well as other DFID staff designing programmes with governance elements. The note provides a framework for consideration in Business Case design that relates to governance activity. On Value for Money (VFM) in particular, this guidance is only intended as ‘interim’ whilst further research is undertaken. During 2011-2012, DFID will work to determine best practice and establish agreed approaches and mechanisms. This guidance will therefore be updated as research findings become available. The note was drawn up by DFID staff. It builds on two research reports by ITAD, submitted in December 2010 and January 2011 respectively, as well as DFID’s internal Business Case guidance. There are two main sections: Section 1: Governance Indicators, and Section 2: Value for Money in Governance Programming. The note ends with 10 Top Tips on Business Case preparation.
  • DFID is developing “Guidance for DFID country offices on maximising VfM in cash transfer programmes”, July 2011. Objective: “To provide guidance to DFID country offices on measuring value for money in cash transfer programmes through the rigorous analysis of costs and benefits, as far as possible, at the design stage and through programme implementation and completion. This project is driven by DFID’s expansion of support to cash transfer programmes, its strong emphasis on ensuring programmes are delivering value for money, and strong country office demand for specific advice and guidance” (ToRs).
  • Value for Money: Current Approaches and Evolving Debates. Antinoja Emmi, Eskiocak Ozlem, Kjennerud Maja, Rozenkopf Ilan, Schatz Florian, LSE, London, May 2011. 43 pages. “NGOs have increasingly been asked by donors to demonstrate their Value for Money (VfM). This report analyses this demand across a number of dimensions and intends to lay out the interpretation of different stakeholders. After contextualising the debate internationally and nationally, a conceptual discussion of possible ways of defining and measuring VfM is conducted, followed by a technical analysis of different approaches and measurement techniques adopted by stakeholders. Finally, opportunities and caveats of measuring VfM are discussed. The report draws heavily on information gained through a total of seventeen interviews with representatives of NGOs, consultancies, think tanks and academic institutions.”
  • Independent Commission for Aid Impact – Work Plan, May 2011: “We have not yet agreed our own definition of terms such as “value for money” and “aid effectiveness”. These are complex issues which are currently under much debate. In the case of value for money we believe that this should include long-term impact and effectiveness. We intend to commission our contractor to help us in our consideration of these matters.”
  • The Guardian, Madeleine Bunting, 11th April 2011: “Value for money is not compatible with increasing aid to ‘fragile states’. The two big ideas from the UK’s Department for International Development are destined for collision”
  • NAO report on DFID Financial Management, April 2011. See the concluding section of the Executive Summary, titled Conclusion on value for money:
    • “We recognise that the Department has been improving its core financial management and has also been strengthening its focus on value for money at all levels of the organisation, including through a step change in its approach to the strategic allocation of resources based on expected results. Important building blocks have been put in place, but key gaps in financial management maturity remain. The changes the Department has introduced to-date are positive, and provide a platform to address the challenges that will come with its increased spending.”
    • “At present, however, the Department’s financial management is not mature. The Department’s forecasting remains inaccurate and its risk management is not yet fully embedded. Weaknesses in the measurement of value for money at project level, variability in the quality and coverage of data, and lack of integration in core systems, mean that the Department cannot assess important aspects of value for money of the aid it has delivered, at an aggregated level. The Department now needs to develop a coherent single strategy to address the weaknesses identified and the key risks to meeting its objectives.”
  • DFID’s March 2011 Multilateral Aid Review “was commissioned to assess the value for money for UK aid of funding through multilateral organisations”. “All were assessed against the same set of criteria, interpreted flexibly to fit with their different circumstances, but always grounded in the best available evidence. Together the criteria capture the value for money for UK aid of the whole of each organisation. The methodology was independently validated and quality assured by two of the UK’s leading development experts. The assessment framework included criteria which relate directly to the focus and impact of an organisation on the UK’s development and humanitarian objectives – such as whether or not they are playing a critical role in line with their mandate, what this means in terms of results achieved on the ground, their focus on girls and women, their ability to work in fragile states, their attention to climate change and environmental sustainability, and their focus on poor countries. These criteria were grouped together into an index called “Contribution to UK development objectives”. The framework also included criteria which relate to the organisations’ behaviours and values that will drive the very best performance – such as transparency, whether or not cost and value consciousness and ambition for results are driving forces in the organisation, whether there are sound management and accountability systems, whether the organisations work well in partnership with others and whether or not financial resource management systems and instruments help to maximise impact. These were grouped together into an index called “Organisational strengths”. Value for money for UK aid was assessed on the basis of performance against both indices. So, for example, organisations with a strong overall performance against both indices were judged to offer very good value for money for UK aid, while those with a weak or unsatisfactory performance against both indices were deemed to offer poor value for money.”
    • [RD comment] In the methodology chapter the authors explain/claim that this approach is based on a 3E view that seeks to give attention to the whole “value for money chain” (née causal chain), from inputs to impacts (which is discussed below). Reading the rest of that chapter, I am not convinced; I think the connection is tenuous, and what exists here is a new interpretation of Value for Money that will not be widely used. That said, I don’t envy the task the authors of this report were faced with.
    • [RD comment] The Bilateral Aid Review makes copious references to Value for Money, but there is no substantive discussion of what it means anywhere in the review. Annex D includes a proposal format with a section for providing Value for Money information in 200 words. This includes the following fields, which are presumably explained elsewhere: qualitative judgement of VfM, VfM metrics (including cost-benefit measures), unit costs, scalability, comparators, and an overall VfM RAG rating: red/amber/green.
  • Aid effectiveness and value for money aid: complementary or divergent agendas as we head towards HLF-4 (March 2011). This ODI, ActionAid and UK Aid Network public event was called “to reflect on approaches dominating the debate in advance of the OECD’s 4th High Level Forum on Aid Effectiveness (HLF-4); explore the degree to which they represent complementary or divergent agendas; and discuss how they might combine to help ensure that HLF-4 is a turning point in the future impact of aid.” The presentations of three of the four speakers are available on this site. Unfortunately DFID’s presentation, by Liz Ditchburn, Director, Value for Money, DFID, is not available.
  • BOND Value for Money event (3 February 2011). “Bond hosted a half day workshop to explore this issue in more depth. This was an opportunity to take stock of the debates on Value for Money in the sector, to hear from organisations that have trialled approaches to Value for Money and to learn more about DFID’s interpretation of Value for Money from both technical and policy perspectives.” Presentations were made by (and are available from): Oxfam, VSO, WaterAid, HIV/AIDS Alliance, and DFID (Jo Abbot, Deputy Head, Civil Society Department). There was also a prior BOND event in January 2011 on Value for Money, and presentations are also available, including an undated National Audit Office Analytical framework for assessing Value for Money.
    • [RD Comment] The DFID presentation on “Value for Money and Civil Society” is notable in the ways it seeks to discourage NGOs from over-investing in efforts to measure Value for Money, and in its emphasis on the continuity of DFID’s approach to assessing CSO proposals. The explanation of Value for Money is brief, captured in two statements: “optimal use of resources to get desired outcomes” and “maximum benefit for the resources requested”. To me this reads as efficiency and cost-effectiveness.
  • The Independent Commission for Aid Impact (ICAI)’s January 2011 online consultation contrasts Value for Money reviews with Evaluations, Reviews and Investigations, as follows.
    • Value for money reviews: judgements on whether value for money has been secured in the area under examination. Value for money reviews will focus on the use of resources for development interventions.
    • Evaluations: the systematic and objective assessment of an on-going or complete development intervention, its design, implementation and results. Evaluations will focus on the outcome of development interventions.
    • Reviews: assessments of the performance of an intervention, periodically or on an ad hoc basis. Reviews tend to look at operational aspects and focus on the effectiveness of the processes used for development interventions.
    • Investigations: a formal inquiry focusing on issues around fraud and corruption.
      • [RD comment] The ICAI seems to take a narrower view than the National Audit Office, focusing on economy and efficiency and leaving out effectiveness – which within its perspective would be covered by evaluations.

 

2010

  • Measuring the Impact and Value for Money of Governance & Conflict Programmes, Final Report, December 2010, by Chris Barnett, Julian Barr, Angela Christie, Belinda Duff, and Shaun Hext. “The specific objective stated for our work on value for money (VFM) in the Terms of Reference was: “To set out how value for money can best be measured in governance and conflict programming, and whether the suggested indicators have a role in this or not”. This objective was taken to involve three core tasks: first, developing a value for money approach that applies both to the full spectrum of governance programmes and to those programmes undertaken in conflict-affected and failed or failing states; second, that the role of a set of suggested indicators should be explored and examined for their utility in this approach; and, further, that existing value for money frameworks (such as the National Audit Office’s use of the 3Es of ‘economy, efficiency and effectiveness’) should be incorporated, as outlined in the Terms of Reference.”
  • Value for Money: How are other donors approaching ‘value for money’ in their aid programming? Question and answer on the Governance and Social Development Resource Centre Help Desk, 17 September 2010.
  • Value for Money (VfM) in International Development NEF Consulting Discussion Paper, September 2010. Some selective quotes: “While the HM Treasury Guidance provides principles for VfM assessments, there is currently limited guidance on how to operationalise these in the international development sector or public sector more generally. This has led to confusion about how VfM assessments should be carried out and seen the proliferation of a number of different approaches.” …”The HM Treasury guidance should inform the VfM framework of any publicly-funded NGO in the development sector. The dark blue arrow in Figure 1 shows the key relationship that needs to be assessed to determine VfM. In short, this defines VfM as: VfM = value of positive + negative outcomes / investment (or cost)”
  • [RD Comment:] Well now, having that formula makes it so much easier (not): all we have to do is find the top values, add them up, then divide by the bottom value :-(
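    • A plausible reading of that formula (an assumption on my part, since the plain-text version has lost its parentheses) groups all the outcome values in the numerator:

      $$\mathrm{VfM} = \frac{\text{value of positive outcomes} + \text{value of negative outcomes}}{\text{investment (or cost)}}$$

      where the value of negative outcomes is entered as a negative quantity.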
  • What is Value for Money? (July 2010) by the Improvement Network (Audit Commission, Chartered Institute of Public Finance and Accountancy (CIPFA), Improvement and Development Agency (IDeA), Leadership Centre for Local Government, NHS Institute for Innovation and Improvement). “VfM is about achieving the right local balance between economy, efficiency and effectiveness, the 3Es – spending less, spending well and spending wisely”. These three attributes each relate to a different stage of aid delivery, from inputs to outcomes, as shown in this diagram.
  • [RD comment]: Reading this useful page raises two interesting questions. Firstly, how does this framework relate to the OECD/DAC evaluation criteria? Is it displacing them, as far as DFID is concerned? It appears so, given its appearance in the Terms of Reference for the contractors who will do the evaluation work for the new Independent Commission for Aid Impact. Ironically, the Improvement Network makes the following comments about the third E (effectiveness), which suggest that the DAC criteria may be re-emerging within this new framework: “Outcomes should be equitable across communities, so effectiveness measures should include aspects of equity, as well as quality. Sustainability is also an increasingly important aspect of effectiveness.” The second interesting question is how Value for Money is measured in aggregate, taking into account all three Es. Part of the challenge is with effectiveness, where it is noted that “effectiveness is a measure of the impact that has been achieved, which can be either quantitative or qualitative.” Then there is the notion that Value for Money is about a “balance” of the three Es: “VfM is high when there is an optimum balance between all three elements – when costs are relatively low, productivity is high and successful outcomes have been achieved.” On the route to that heaven there are multiple possible combinations of states of economy (+,-), efficiency (+,-) and effectiveness (+,-). There is no one desired route or ranking. Because of these difficulties Sod’s Law will probably apply, and attention will focus on what is easiest to measure, i.e. economy or, at the most, efficiency. This approach seems to be evident in earlier government statements about DFID: “International Development Minister Gareth Thomas yesterday called for a push on value for money in the UN system with a target of 25% efficiency savings.” … “The UK is holding to its aid commitments of 0.7% of GNI. But for the past five years we have been expected to cut 5% from our administration or staffing costs across Government. 5% – year on year”

 

2007

 

2003

 

The Editor’s suggestion

1. Don’t seek to create an absolute measure of the Value for Money of a single activity/project/program/intervention.

2. Instead, create a relative measure of the VfM found within a portfolio of activities, by using a rank correlation. [This measure can then be used to compare VfM across different types of portfolios.]

  • 1. Rank the entities (activities/projects…) by the cost of their inputs, and
    • Be transparent about which costs were included/excluded (e.g. partners’ own costs, other donor contributions, etc.)
  • 2. Rank the same set of entities by their perceived effectiveness or impact (depending on the time span of interest)
    • Ideally this ranking would be done through a participatory ranking process (see Refs below), and information would be available on the stakeholders who were involved
    • Where multiple stakeholder groups were consulted, any aggregation of their rankings would be done using transparent weighting values, and information would also be available on the standard deviation of the rankings given to the different entities. There is likely to be more agreement across stakeholders on some rankings than on others.
    • Supplementary information would be available detailing how stakeholders explained their rankings. This is best elicited through paired comparisons of adjacently ranked entities.
      • That explanation is likely to include a mix of:
        • some kinds of impacts being more valued by the stakeholders than others, and
        • for a given type of impact there being evidence of more rather than less of that kind of impact, and
        • where a given impact is on the same scale, there being better evidence of that impact
  • 3. Calculate the rank correlation between the two sets of rankings (a code sketch follows this list). The results will range between these two extremes:
    • A high positive correlation (e.g. +0.90): here the highest impact is associated with the highest cost ranking, and the lowest impact with the lowest cost ranking. Results are proportionate to investments. This would be the preferred finding, compared to
    • A high negative correlation (e.g. −0.90): here the highest impact is associated with the lowest cost ranking, and the lowest impact with the highest cost ranking. Here, the more you increase your investment, the less you gain. This is the worst possible outcome.
    • In between will be correlations closer to zero, where there is no evident relationship between cost and impact ranking.
  • 4. Opportunities for improvement would be found by doing case studies of “outliers”, identified when the two rankings are plotted against each other in a graph. Specifically:
    • Positive cases, whose rank position on cost is conspicuously lower than their rank position on impact.
    • Negative cases, whose rank position on impact is conspicuously lower than their rank position on cost.
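As a minimal sketch of steps 3 and 4, assuming the two rankings have already been elicited (the entity names, ranks and threshold below are illustrative only, not real data), the calculation could be done in a few lines of Python:

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative data only: rank 1 = highest cost, and rank 1 = highest perceived impact.
entities    = ["A", "B", "C", "D", "E", "F", "G"]  # seven rankable entities, per the PS below
cost_rank   = np.array([1, 2, 3, 4, 5, 6, 7])
impact_rank = np.array([2, 1, 6, 3, 7, 4, 5])

# Step 3: the rank correlation between the two sets of rankings.
rho, p_value = spearmanr(cost_rank, impact_rank)
print(f"Rank correlation: {rho:+.2f} (p = {p_value:.2f}, n = {len(entities)})")

# Step 4: flag candidate case-study "outliers" by the gap between an entity's two ranks.
THRESHOLD = 2  # arbitrary cut-off, for illustration only
for name, c, i in zip(entities, cost_rank, impact_rank):
    gap = c - i  # positive gap: ranks much better on impact than on cost
    if gap >= THRESHOLD:
        print(f"{name}: positive outlier (impact rank {i}, cost rank {c})")
    elif gap <= -THRESHOLD:
        print(f"{name}: negative outlier (impact rank {i}, cost rank {c})")
```

Spearman’s rho is one reasonable choice here; Kendall’s tau (scipy.stats.kendalltau) is an alternative that handles tied ranks somewhat more gracefully, which matters given the point about tied ranks in the PS below.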

PS: It would be important to disclose the number of entities that have been ranked. The more entities being ranked, the more precise the rank correlation will be. However, the more entities there are to rank, the harder the task will be for participants, and the more likely they are to use tied ranks. A minimum of seven rankable entities would seem desirable.

For more on participatory ranking methods see:

PS: There is a UNISTAT plugin for Excel that will produce rank correlations, plus much more.

Updated MSC bibliography

PLEASE NOTE: The bibliography below has now been superseded by a more comprehensive bibliography here. This now includes PDF copies of many of the papers, plus a search facility. It will continue to be updated.

This (now older) page is intended to provide an update of the bibliography in the 2005 Most Significant Change technique (MSC) Users Guide.

Please feel free to suggest additions to this list, through the Comment facility below, or by emailing the editor (Rick Davies).

Papers

 

Powerpoints

  • Seven sets of slides, used for a 2-day MSC training in Delhi, 2008, by Rick Davies. Available on request, on condition of willingness to share any adaptations made.

YouTube video

Other

 

Guidance on Terms of Reference for an Evaluation: A List

This is the beginning of a new page that will list various sources of guidance on the development of Terms of Reference for an evaluation.

If you have suggestions for any additions (or edits) to this list please use the Comment function below.

Please also see the hundreds of examples of actual ToRs (and related docs) in the MandE NEWS Jobs Forum

PS: Jim Rugh has advised me (5 June 2010) that “two colleagues at the Evaluation Center at Western Michigan University are undertaking an extensive review of RFPs / ToRs they’ve seen posted on various listservs; they intend to publish a synthesis, critique and recommendations for criteria to make them more realistic and appropriate.”

Card sorting methods: A List

Card / pile sorting is a simple and useful means of eliciting and aggregating qualitative data in a participatory manner. In anthropology it is described as pile sorting, and is used for domain analysis in the field of cognitive anthropology. In website design it is known as card sorting.

Anthropology
Website design
Software
  • OptimalSort: online card sorting software (free and paid versions).
  • SynCapsV2: for the analysis of the results of physical card sorts; can be downloaded and used on a desktop/laptop.
  • XSort: a free card sorting application for Mac, aimed at user experience professionals and social scientists.
  • KardSort: perform web-based card sorting studies for free.
  • Miro Card Sorting template

 

Identifying and documenting “Lessons Learned”: A list of references

Editor’s note:

This is a very provisional list of documents on the subject of Lessons Learned, what they are, and how to identify and document them. If you have other documents that you think should be included in this list, please make a comment below.

Note: This is not a list of references on the wider topic of learning, or on the contents of the Lessons Learned.

2014

  • EVALUATION LESSONS LEARNED AND EMERGING GOOD PRACTICES. ILO Guidance Note No. 3, April 2014. “The purpose of this guidance note is to provide background on definitions and usages of lessons learned applied by the ILO Evaluation Unit. Intended users of this guidance note are evaluation managers and any staff in project design or technically backstopping the evaluation process. There is separate guidance provided for consultants on how to identify, formulate and present these findings in reports.”

2012

2011

  • The NATO Lessons Learned Handbook. Second Edition, September 2011. “Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.” – Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy

2009

2007

  • Lessons Learned from Evaluation. M. J. Spilsbury, C. Perch, S. Norgbey, G. Rauniyar and C. Battaglino. Special Study Paper Number 2, A Platform for Sharing Knowledge, United Nations Environment Programme, January 2007. Lessons presented in evaluation reports are often of highly variable quality and limited utility. They are “often platitudes borne of a felt need to demonstrate engagement in the ‘knowledge society’ or simply to satisfy the specified evaluation requirements”. Even where high-quality lessons are developed, they are seldom communicated effectively to their intended audiences. In order to enhance the quality of lessons, improve their utilisation, and aid their dissemination and communication, a Framework of Lessons from evaluation is presented in this paper. The framework consists of common problems, issues and/or constraints to which evaluation lessons relate, developed using ‘mind-mapping’ software and ‘problem tree’ techniques. Evaluation lessons were systematically classified within the resulting Framework of Lessons. The proposed framework of evaluation lessons is best used within the context of interactive ‘face-to-face’ communication with project/programme managers, to ensure that evaluation lessons truly become ‘lessons learned’.

2005

2004

  • Criteria for Lessons Learned (LL). A presentation for the 4th Annual CMMI Technology Conference and User Group, by Thomas R. Cowles, Raytheon Space and Airborne Systems, November 16, 2004.

2001

  • M. Q. Patton (2001) Evaluation, Knowledge Management, Best Practices, and High Quality Lessons Learned. American Journal of Evaluation, 22(3), 2001. Abstract: Discusses lessons to be learned from evaluation and best practices in evaluation, and some ways to bring increased rigor to evaluators’ use of those terms. Suggests that “best” practices is a term to avoid, with “better” or “effective” being more realistic, and calls for more specificity when discussing lessons to be derived. (Full text not yet found online.)

1997

If you know of other relevant documents and web pages, please tell us by using the Comment facility below.

MandE NEWS email List

If you want to talk with others about monitoring and evaluation, then join the MandE NEWS email list (< click this link). It has more than 2,200 members worldwide, and is growing in size every day. You can access monthly summaries of the 2007 and 2008 postings here.

A list of M&E email lists

Please Note:

  • If you want to add a new M&E email list, please use the Comment facility at the bottom of this page.
  • If you want to join any of these email lists, click on the link for that list (don’t use the Comment facility).

General purpose lists

  • MandENEWS
    2600+ Members, Archives: Membership required. The MandE NEWS mailing list is part of the MandE NEWS website at www.mande.co.uk. Visitors to the website are invited to use the mailing list to exchange information with each other about monitoring and evaluation issues, especially as they relate to international development aid. The Editor of MandE NEWS will also use the mailing list to inform list members about significant updates to the MandE NEWS website. The MandE NEWS mailing list is managed by the Editor, Rick Davies (contact email …(more)
  • Eval-Net [link not working] 858 members as of 2006 (please help update this number) Knowledge sharing and learning continue to be a top corporate priority for UNDP. The purpose of the Evaluation Network is to strengthen UNDP’s evaluation knowledge base by disseminating good practices and lessons learned on monitoring and evaluation to a broad constituency and to foster results-based performance at both country and corporate levels. It will also help build UNDP staff capacity in measuring and assessing results. This network specifically aims to: Share and exchange experiences and knowledge and lessons distilled from evaluative work relating to programmes and projects; Mainstream results orientation within the work of UNDP’s six practice areas; Provide a forum for UNDP staff to share and deepen their knowledge of monitoring and evaluation practices and methodologies. This network is open to all UNDP staff members interested in and working on measuring and assessing results and who want to contribute and build their capacity in this area. (posted 16/12/06)
  • XCeval
    880+ Members, Archives: Membership required XCeval is a listserv for persons interested in issues associated with international and cross-cultural evaluation. Initially set up for the International and Cross-Cultural Topical Interest Group of the American Evaluation Association. Many of the postings (average 34/month) are announcements of short-term consultancies or full-time positions in international M&E-related jobs. Also exchanges of ideas of current interest to persons involved in the evaluation of international development. (updated 15/12/06)
  • American Evaluation Association Electronic Lists
    • EVALTALK Listserv 3100+ members An open, unmoderated list for general discussion of evaluation and associated issues sponsored by the American Evaluation Association. To subscribe, send mail to LISTSERV@BAMA.UA.EDU with the command (paste it!): SUBSCRIBE EVALTALK
    • EVALINFO Sponsored by American Evaluation Association (AEA) as the official electronic network for distribution of information to organization members and interested parties. Anyone can subscribe and receive mailings but non-AEA members cannot post to the list. To subscribe, send an e-mail to LISTSERVE@UA1VM.UA.EDU with the message: SUBSCRIBE EVALINFO <Firstname> <Lastname>

Email lists focusing on specific evaluation themes, issues, or methods

  • AIMEnet Listserv 1000+ members, Archives, Membership required In 2004, MEASURE Evaluation teamed with the U.S. President’s Emergency Plan for AIDS Relief, USAID, CDC, UNAIDS, the World Health Organization, The Global Fund to Fight AIDS, Tuberculosis and Malaria, the World Bank Group, and UNICEF to create the HIV/AIDS Monitoring and Evaluation Network (AIMEnet) listserv. AIMEnet was initially created so we could stay in touch with participants from several Expanded HIV/AIDS Response M&E workshops. Today, the AIMEnet listserv has been broadened to include anyone interested in sharing technical experiences, tools and information in monitoring and evaluation (M&E) of HIV/AIDS programs around the world.
  • “Most Significant Changes” technique. 1100+ members. Archives, Membership required. This is moderated by Rick Davies and Jessica Dart (Melbourne). This egroup was formed to promote discussion about the use of an innovative method of monitoring, called the “Most Significant Changes” approach. This is a non-indicator-based approach to monitoring, making use of a diversity of narrative accounts of change which are subject to an iterated, open and accountable selection process. It has already been applied in developed and less developed economies, in participatory rural development projects, agricultural extension projects, educational settings and mainstream human services delivery. Through discussion we hope to promote the wider use of the method, and further innovation and development in the method itself. Most Significant Changes monitoring is different from common monitoring practice in at least four respects: (a) the focus is on the unexpected, (b) information about those events is documented using text rather than numbers, (c) analysis of that information is through the use of explicit value judgements, (d) aggregation of information and analysis takes place through a structured social process. This egroup will act both as a repository of information about people’s experiences with the MSC method to date, and as a nursery for ideas on how to take the method further – into new environments, where there are new opportunities and constraints.
  • Outcome Mapping Learning Community 700+ members globally, as of 2008. Public. Outcome Mapping is an innovative approach to project and programme planning, monitoring and evaluation with a strong focus on participatory learning. The major innovation is the emphasis on the behaviour change of key actors with whom the programme has an influence, rather than focussing on changes in state that may or may not be attributed to the programme. The community was set up to support users of the methodology and those interested in the concepts behind it. Come and discuss the theory, get advice on applying OM in your case and meet others interested in this approach to P,M&E. See the community brochure for more information or contact Simon Hearn.

  • Systems in Evaluation Discussion List
    290+ members, Archives. EVAL-SYS@LISTS.EVALUATION.WMICH.EDU

  • Theory-Based_Evaluation
    390+ Members, Archives: Public. Welcome to the Theory-Based Evaluation discussion list! In a context where evaluation is challenged by attribution, complex systems and the need for evidence-based policies, theory-based evaluation is seen as an effective response to these challenges. The purpose of this list is to provide a forum where practitioners and scholars can exchange and share ideas, lessons and methods associated with theory-based evaluation. Hence, this discussion list is dedicated to the evaluation of Institutional …(more)
  • Pelican Initiative: Platform for Evidence-based Learning & Communications for Social Change 700+ Members, Archives. Membership required. This platform seeks to bring together development practitioners from different disciplines, specialists and policy makers to explore this question, share experiences, and to push the agenda further on three themes: * Evidence and learning for policy change; * Learning in organisations and among partners; * Society-wide learning among a multitude of stakeholders.
  • LEAP IMPACT 160+ members, Archives. Membership required Leap Impact aims to improve the institutional performance of monitoring and evaluation practice related to information services, information products and information projects. It is a community of practice open to all individuals/organisations interested in the evaluation of information. LEAP IMPACT is a joint initiative of CTA, IICD, Bellanet, and KIT. It is facilitated by Sarah Cummings (KIT ILS), Neil Pakenham-Walsh (HIF-net-at-WHO) and Shaddy Shadrach (Oneworld South Asia).
  • NetworkEvaluation
    280+ Members, Archives: Membership required. The Network Evaluation mailing list is an extension of the Networks section of Monitoring and Evaluation NEWS at www.mande.co.uk. The focus of the Network Evaluation mailing list is on the exchange of information about methodologies for, and experiences of, the evaluation of networks, including: networks of individuals, groups and organisations; both face-to-face and electronically mediated networks; and the use of social network analysis in international development aid projects. In planning, …(more)
  • PARTICIPATORY MONITORING AND LEARNING
    60+ Members, Archives: Membership required. This group on Participatory Monitoring and Learning (PM&L) has been created to facilitate interaction amongst a group of researchers, practitioners and others interested in the topic of participatory approaches to monitoring, evaluation and learning.
  • ODAfrica
    50+ Members, Archives: Public. Support group for OD Practitioners working for and in Africa. Initiative of OD Practitioners from Tanzania, Uganda, Ghana, South Africa, Angola, Zimbabwe and Zambia who attended a two-year OD Practitioners Formation Programme in 2004/2005.
  • Evaluation Feedback 30+ members, Archives. Membership required. This was moderated by Catherine Cameron, author of Evaluation Feedback for Effective Learning and Accountability.
  • EGAD List: Program evaluation, statistics and methodology list 170+ members. To send a message to all the people currently subscribed to the list, just send mail to EGAD@LISTSERV.ARIZONA.EDU. This is called “sending mail to the list”, because you send mail to a single address and LISTSERV makes copies for all the people who have subscribed. This address (egad@listserv.arizona.edu) is also called the “list address”. You must never try to send any command to that address, as it would be distributed to all the people who have subscribed. All commands must be sent to the “LISTSERV address”, listserv@listserv.arizona.edu.
  • Arlist: Action research mailing list. Arlist-L is a medium-volume, multidisciplinary electronic mailing list. It is a moderated forum for the discussion of the theory and practice of action research and related methods. Bibliography of over 50 references on meta-evaluation. References include discussions of the technical competence of individual evaluations, critical analyses of evaluations of the impact of evaluations on less powerful groups, and managerial meta-evaluations of the perceived credibility and utility of the evaluation. To subscribe, send an e-mail (no subject) to request@psy.uq.oz.au with the message: SUBSCRIBE ARLIST <Firstname> <Lastname> Or, to subscribe to arlist-L, point your browser at http://lists.scu.edu.au/mailman/listinfo/arlist-l

  • EDIAIS Forum (Enterprise Development Impact Assessment Information Service) 160+ members Joining the list: email info@enterprise-impact.org.uk You will then receive an e-mail asking you to confirm your subscription. Once you are a member: You will receive all messages sent to the list. To send a message to the list mail it to: ENT-IMP@enterprise-impact.org.uk – use either Reply to respond to the last contributor only or Reply All and your message will automatically be mailed to all list members.

Country specific M&E email lists

  • PREVAL – The Programme for Strengthening the Regional Capacity for Monitoring and Evaluation of IFAD’s Rural Poverty-Alleviation Projects in Latin America and the Caribbean owner-preval@mail.rimisp.org 1,400+ members
  • AfrEA
    180+ Members, Archives: Public Information and networking tool of the African Evaluation Association (AfrEA) In conjunction with the national associations and committed donors, AfrEA has helped develop the concept of an African evaluation community. This listserv aims to build on this concept, to broaden this community, by further promoting the sharing of information, resources and opportunities. The AfrEA Community listserv serves as a moderated forum for a wide range of stakeholders, from evaluators who are actively …(more)
    • Late note: This is a new list recently started at Yahoo! Groups to replace the old list at Topica. Moving the members from the old to the new list is a slow process. However, the old list is still active and has 460 subscribers. (message from Lise Kriel, 30/06/6)
  • indo-monev 440+ Members, Archives: Membership required. This is a mailing list to build a network of Indonesians, and of people anywhere in the world, who are interested in, dedicated to, and professionally engaged in work on monitoring and evaluation and other related development issues, as well as development aid work, particularly in Indonesia. This network aims at more exchange of information, more knowledge building, and more awareness of development monitoring and evaluation issues. Please join.
  • IAEVAL:  340+ members, Archives: Membership required The purpose of this listserv is to enhance communication among members of the US-based International NGO (INGO) community about program design, monitoring, evaluation and effectiveness. The target participants of IAEVAL are those of us who are directly or indirectly responsible for INGO M&E. We hope that this will serve to enhance the communication, shared learning and collaboration among us as persons responsible for evaluation in the US-based INGO community.
  • Relac: 480+ Members, Archives: Membership required. This is the discussion group of the Latin America and the Caribbean evaluation network.
  • REMAPP 150+ members, Archives: Membership required. REMAPP is a [UK-based] group of networking professionals concerned with planning, appraisal, monitoring, evaluation, research and policy issues in aid agencies.
  • MandENigeria
    90+ Members, Archives: Moderators only. This listserv is for interested individuals and institutions to share knowledge, opportunities, experience and other resources in M&E. It is also an opportunity to access professional consultants in monitoring and evaluation in Nigeria and Africa. It is an informal medium to support capacity building, strengthening and dissemination of monitoring and evaluation information in Nigeria, under a Network of Monitoring and Evaluation. Evaluators are advised and encouraged to join and participate …(more)
  • IndiaEvalGroup
    30+ Members, Archives: Membership required. This discussion group consists of evaluators from India or evaluators working on Indian projects. The potential benefits of forming and participating in such a group are: 1. Fellowship with others working in a similar area 3. Encouraging sharing of learning across content and context areas
  • MONEV_NGO
    20+ Members, Archives: Membership required. Established in Jakarta, Indonesia, in 2004. It was started by a group of activists concerned about the monitoring and evaluation skills that need to be developed by NGOs in Indonesia in general. This is an open forum, so please participate in sharing and discussing lessons learnt and experiences in monitoring and evaluation.
  • MandE Ghana 30+ Members, Archives: Membership required This email list has been established for people who have an interest in monitoring and evaluation as applied in Ghana. It is open to people living in Ghana and those elsewhere who have a connection with Ghana. Its aim is to: (a) encourage mutual learning between members, through exchange of questions and answers; (b) make opportunities quickly available to members, concerning M&E related training events, conferences, workshops and employment vacancies; (c) enable members to make contacts with other members with related M&E interests.
  • MandEPilipinas 9 Members, Archives: Membership required. This discussion group is meant for Monitoring and Evaluation professionals in the Philippines. It is a venue to network, exchange ideas and discuss new developments about the field with M&E practitioners in the country to promote mutual learning and intellectual growth.
  • EgDEvalNet < 5 Members, Archives: Membership required. This discussion group was established to discuss the evaluation of development activities in Egypt. This includes: improving development evaluation activities; exchange of experience between evaluation practitioners; providing feedback for improving development planning; discussing the establishment of an Egyptian Development & Evaluation Association; defining standards and guidelines for evaluation practice applicable to the Egyptian environment; developing development evaluation criteria and tools …(more)

How to set up an email list

  • Use Yahoo Groups, as used by many of the email lists shown above.
    • Go to http://groups.yahoo.com/
    • Sign up to get a Yahoo ID (you need to give yourself a username and password, once only).
    • Look for Create Your Own Group
      • Click on Start a Group Now, then follow the instructions
  • Or, use Dgroups
    • Go to http://www.dgroups.org/
      • Dgroups currently supports 1,818 groups, containing 60,690 members.
    • See if you can work out how to join and set up a group. It is not easy.

Invitation to join a dedicated discussion forum on reconstructing baseline data

From: XCeval@yahoogroups.com On Behalf Of Jim Rugh
Sent: April 28, 2008 5:47 PM
To: XCEval listserv; MandE NEWS
Subject: [XCeval] Invitation to join a dedicated discussion forum on reconstructing baseline data

We realize that any evaluation that purports to be an “impact evaluation” needs to compare “before-and-after” (pre-test + post-test data) and “with-and-without” (the counterfactual – what would have happened without the intervention being evaluated). Yet in our experience the majority of evaluations conducted of development projects and programs have neither comparable baseline data nor appropriate comparison (much less “control”) groups. Although the discussion of counterfactuals and pre-test + post-test comparisons frequently focuses on quantitative evaluation designs, the need to understand baseline conditions is equally important for qualitative evaluations. What can be done to strengthen evaluations in such cases? In other words, what can be done to reconstruct baseline and counterfactual data?

We (Jim Rugh and Michael Bamberger) are planning a follow-up volume to “RealWorld Evaluation: Working under budget, time, data and political constraints” (Sage Publications 2006). (More information can be found at www.RealWorldEvaluation.org.)

Impact Assessment: Training to be provided by INTRAC

Date: 07 May 2008 – 09 May 2008
Venue: London, UK

Training event organised by INTRAC

Course fee: £475.00
Number of days: 3
Description:
With increased pressures on delivery and accountability, the need has never been greater for civil society and other development organisations to assess the long-term impact of their work. In three fruitful days you will explore the current state of the debate about impact assessment, as well as reviewing current practice and methodologies. Learn to assess the effectiveness of your work.

Course objectives:
• Understand what is meant by impact assessment and how the concept has emerged
• Explore the relationship between impact assessment and other forms of evaluative activity
• Explore different approaches and alternative methodologies in conducting impact assessment
• Identify ways of getting a representative picture e.g. case studies, sampling methods, and triangulation between quantitative and qualitative data
• Consider impact assessment in different contexts e.g. in programmes and projects, organisationally, and in advocacy work

Link to application form

Regulatory Impact Analysis (RIA) Training Course

Date: 6-10 October
Venue: College of Europe, Bruges Campus, Belgium

Dear Colleague,

The College of Europe and Jacobs and Associates Europe invite you to participate in our 5-day Regulatory Impact Analysis (RIA) Training Course on the principles, procedures, and methods of RIA. This practical, hands-on course was given in March and, due to demand, will be offered two more times in 2008 – in June and October. The course, taught by some of the most experienced public policy and RIA trainers in Europe, is expressly designed for policy officials and executives who use RIA to improve policy results.

The course will benefit any official using RIA in environmental, social and economic fields as well as stakeholders such as business associations, NGOs and consultants who want to understand better how to use RIA constructively. The course is open for subscription worldwide and is presented in the historic city of Bruges, Belgium. A discount is offered for early registration.

Information on RIA Training Course

2008 DATES: 23-27 June and 6-10 October (each course is 5 full days)
LOCATION: College of Europe, Bruges Campus, Belgium
REGISTRATION: For more information and an application form, go to www.coleurope.eu/ria2008
COST:

  • €2,995 for early registration (includes housing and meals)
  • €3,495 for regular registration (includes housing and meals)

REGISTRATION DEADLINES:

Early registration for the June course runs until 11 May 2008.
Registration closes 1 June 2008.

Early registration for the October course runs until 10 August 2008.
Registration closes on 14 September 2008.

OPEN: Worldwide (only 40 seats available per session)
LANGUAGE OF INSTRUCTION: English
COURSE OFFERED BY: College of Europe and Jacobs and Associates Europe

The College of Europe provides a wide range of professional training courses, workshops and tailor-made seminars on the European Union in general or on targeted issues. For more information, please visit:
www.coleurope.eu/training or contact Mrs. Annelies Deckmyn by email: adeckmyn@coleurop.be

Jacobs and Associates continues to offer its tailored RIA training courses on-site around the world, adapted to the client’s needs. To discuss an on-site RIA course, contact ria@regulatoryreform.com. For information on the full range of regulatory reform work by Jacobs and Associates, see http://www.regulatoryreform.com/.

Best wishes,
Marc
Scott Jacobs
Managing Director, Jacobs and Associates Europe
