Conference about “The Future of Evaluation in Modern Societies”, Germany.

“The Center for Evaluation (CEval) of Saarland University, Germany, is a globally active research institute for applied social science in the field of evaluation and a member of DeGEval (the German Evaluation Society). We are organizing an international conference on “The Future of Evaluation in Modern Societies” on 14th and 15th June 2012 in Saarbruecken, Germany.

The objective of this event is to discuss the role of evaluation in societies comprehensively and in international comparison, bringing different strands of discussion together into a joint debate. For keynote speeches and lectures we have already secured numerous renowned scientists from the USA, Latin America, Africa and Europe.

Please find the detailed program and registration form on our homepage: http://futureofevaluation.ceval.de

You will also find a review of our recent book “A Practitioner Handbook on Evaluation”, which will appeal to evaluation practitioners, policy-makers who conduct evaluations in their daily work, students training in applied research, and organizations implementing projects and programs that could be the subject of an evaluation.

—————————————

Maria Albrecht,  M.A., Center for Evaluation (CEval), Saarland University, P.O. Box 15 11 50, 66041 Saarbrücken – Germany, Fon: +49 (0)681 302-3561, Fax: +49 (0)681 302-3899, www.ceval.de

UKES CONFERENCE 2012 Evaluation for results: What counts? Who stands to gain? How is it done?

16 March 2012
The Macdonald Hotel, Birmingham

[from UKES website] UKES conferences address leading issues of the day in programme and policy evaluation. The 2012 Annual Conference will address the current drive towards evaluation focused on results – frequently linked to ‘Payment by Results’ and what, in international development and elsewhere, is familiar as ‘Results-Based Management’.

Evaluators, and those who commission evaluation, who advocate a focus on results reflect a legitimate concern with the productivity and efficiency of programmes and with the capacity of interventions to secure gains and improvements in practice and provision. They point out that programmes should be held to account for accomplishing what they were designed to do and paid for, often out of public funds. A primary focus on results seeks to emphasise main effects and outcomes that have been valued and agreed. In times of austerity and unusually scarce resources, proponents of a strong focus on results argue that emphasising value for money is socially responsible.

Others argue that an over-emphasis on measuring a programme’s results neglects important questions of how results are generated in a context, whether results capture the real quality and accomplishments of a programme, and how those results may reflect the values and ambitions of all programme stakeholders. They remind us of secondary effects and ‘unintended beneficiaries’ of programmes that may not be readily captured by results. Some also raise questions about the source of criteria over what counts as a worthwhile result given that not all programme achievements can be measured, and stakeholders may differ over a programme’s objectives. 

Against this background conference participants are invited to contribute their own perspectives on the dominant issues they consider relevant to the theory and practice of evaluation in the public interest. We anticipate a lively and informative debate to stimulate professional learning and to contribute to the improvement of evaluation practice and commissioning.

Potential contributors are invited to propose discussions, seminar presentations, lectures or poster sessions which explore issues around this theme. Those issues may fall within one of the following categories – though you are invited to propose your own theme:

  • How do we define a valid ‘result’ and whose results get counted?
  • How do we best measure a result – including taking account of counterfactuals?
  • How do we understand where results came from, what significance they have and whether they can be replicated – i.e. what is the relation between a result and context?
  • Where do benchmarks come from to measure results achievement?
  • If a result is, say, a 4% improvement – how do we know whether that is a lot or a little under the circumstances?
  • How do we represent the circumstances and mechanisms that give rise to a result?
  • How do we account for programme accomplishments that are not represented in results?
  • Is results-measurement a robust foundation for replication/extension of a programme?

A formal call for papers and proposals for sessions will be circulated shortly.  The conference will be preceded on 15 March 2012 with a choice of training workshops on specialist topics.

3ie proposes a Commitment to Evaluation Indicator (c2e)

International Initiative for Impact Evaluation (3ie). Terms of Reference for a Research Consultancy: White Paper for the Commitment to Evaluation Indicator

“Background: Experience to date shows that the use of evidence by donors and governments when designing and adopting development programmes remains sporadic. There are many examples where a programme was shown to have no impact but was expanded, as well as examples of programmes with positive impact being terminated. To promote better use of evaluation evidence in policy making and programme design, 3ie is launching a Commitment to Evaluation (c2e) indicator. The indicator will provide a measurement of government and donor agency use of evaluation evidence allowing for recognition and reward for progress and good practice. The indicator will be developed and piloted in 2012 for donor agencies with the intent to recognize donors that make systematic use of evidence and thus motivate others to do the same.

3ie’s initiative follows the example of other successful efforts to use awards or indexes to focus the attention of policymakers. Indexes such as the UN Development Programme’s Human Development Index, Transparency International’s Corruption Perceptions Index, and the Centre for Global Development’s Quality of ODA (QuODA) index have raised awareness of key issues and influenced the practice of governments and development agencies. The Mexican National Council for the Evaluation of Social Development Policy (CONEVAL) annual award for good practices in social evaluation has strengthened political buy-in and commitment to evaluation in Mexico. In developing this c2e indicator, 3ie will draw on the lessons learned by similar initiatives about how best to motivate and reward evaluation practices and how to build and run an effective cross-agency and cross-country indicator. More detailed background information on the rationale and theory of change behind the project is available in the discussion note in the annex.” See the ToRs for the rest of the text, including the annex.

Diversity and Complexity

by Scott Page. Princeton University Press, 14 July 2011, 296 pages. Available on Google Books.

Abstract: This book provides an introduction to the role of diversity in complex adaptive systems. A complex system – such as an economy or a tropical ecosystem – consists of interacting adaptive entities that produce dynamic patterns and structures. Diversity plays a different role in a complex system than it does in an equilibrium system, where it often merely produces variation around the mean for performance measures. In complex adaptive systems, diversity makes fundamental contributions to system performance. Scott Page gives a concise primer on how diversity happens, how it is maintained, and how it affects complex systems. He explains how diversity underpins system-level robustness, allowing for multiple responses to external shocks and internal adaptations; how it provides the seeds for large events by creating outliers that fuel tipping points; and how it drives novelty and innovation. Page looks at the different kinds of diversity – variations within and across types, and distinct community compositions and interaction structures – and covers the evolution of diversity within complex systems and the factors that determine the amount of maintained diversity within a system. Provides a concise and accessible introduction. Shows how diversity underpins robustness and fuels tipping points. Covers all types of diversity. The essential primer on diversity in complex adaptive systems.

RD Comment: This book is very useful for thinking about the measurement of diversity. In 2000 I wrote a paper “Does Empowerment Start At Home? And If So, How Will We Recognise It?” in which I argued that…

“At the population level, diversity of behaviour can be seen as a gross indicator of agency (of the ability to make choices), relative to homogenous behaviour by the same set of people. Diversity of behaviour suggests there is a range of possibilities which individuals can pursue. At the other extreme is standardisation of behaviour, which we often associate with limited choice. The most notable example being perhaps that of an army. An army is a highly organised structure where individuality is not encouraged, and where standardised and predictable behaviour is very important. Like the term “NGO” or “non-profit”, diversity is defined by something that it is not – a condition where there is no common constraint, which would otherwise lead to a homogeneity of response. Homogeneity of behaviour may arise from various sources of constraint. A flood may force all farmers in a large area to move their animals to the high ground. Everybody’s responses are the same, when compared to what they would be doing on a normal day. At a certain time of the year all farmers may be planting the same crop. Here homogeneity of practice may reflect common constraints arising from a combination of sources: the nature of the physical environment, and the nature of particular local economies. Constraints on diversity can also arise within the assisting organisation. Credit programs can impose rules on loan use, specific repayment schedules and loan terms, as well as limiting when access to credit is available, or how quickly approval will be given.”
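As a rough illustration of how population-level diversity of behaviour might be quantified, the sketch below computes a Shannon diversity index over observed behaviour categories. This is not from the paper quoted above; the crop names and counts are invented purely for the example:

```python
from collections import Counter
from math import log

def shannon_diversity(observations):
    """Shannon index H = sum over categories of p_i * ln(1/p_i).
    H is 0 when everyone behaves identically; larger values mean
    behaviour is spread more evenly across more categories."""
    counts = Counter(observations)
    total = sum(counts.values())
    return sum((n / total) * log(total / n) for n in counts.values())

# Invented example: the crop each of ten farmers planted this season.
uniform = ["rice"] * 10                      # a common constraint: everyone plants one crop
diverse = ["rice", "maize", "beans", "rice", "cassava",
           "maize", "beans", "rice", "millet", "cassava"]

print(shannon_diversity(uniform))            # → 0.0
print(round(shannon_diversity(diverse), 2))  # → 1.56
```

An index of 0 indicates fully homogeneous behaviour, consistent with a strong common constraint; higher values indicate that choices are spread more evenly across more options.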

See also…

3ie and the Funding of Impact Evaluations

A DISCUSSION PAPER FOR 3IE’S MEMBERS, by Rick Davies, July 2011. Commissioned by the Office of Development Effectiveness, AusAID. Available as pdf.

The purpose of this discussion paper is to inform AusAID’s and other 3ie members’ engagement with 3ie (the International Initiative for Impact Evaluation). It precedes the forthcoming evaluation of 3ie, and is more limited in scope. It is expected to be complementary and useful to the larger Department for International Development (DFID) study now underway, Developing a broader range of rigorous designs and methods for impact evaluations, as the author of this report is also a member of that study team.

AusAID is a member of 3ie and provides core funding to 3ie to contribute to the global public good of policy-relevant evidence on what works in development. Direct benefit to AusAID is not the purpose of the membership. However, it is important to AusAID that 3ie’s work is relevant to AusAID’s partners, particularly partners with low income and/or in fragile countries. AusAID’s Office of Development Effectiveness (ODE) manages AusAID’s membership of 3ie and has commissioned this discussion paper.

The focus of this discussion paper is on 3ie’s methodological approach, used in both the funded impact evaluations and the systematic reviews, and how this has changed over time.

RealWorld Evaluation Working Under Budget, Time, Data, and Political Constraints

Second Edition, by Michael Bamberger, Jim Rugh, Linda Mabry. Sage Publications,  Nov 2011,

This book addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing. The book is organized around a seven-step model developed by the authors, which has been tested and refined in workshops and in practice. Vignettes and case studies—representing evaluations from a variety of geographic regions and sectors—demonstrate adaptive possibilities for small projects with budgets of a few thousand dollars to large-scale, long-term evaluations of complex programs. The text incorporates quantitative, qualitative, and mixed-method designs, and this Second Edition reflects important developments in the field since the publication of the First Edition.

See also the associated website: http://www.realworldevaluation.org/ Bamberger and Rugh have presented many workshops on RealWorld Evaluation in many countries. Various versions and translations of the PowerPoint presentations and other materials are accessible on that website.

First reports published by UK Independent Commission for Aid Impact

…on 22nd November, 2011.

Two cover general areas of the programme:

ICAI’s Approach to Effectiveness and Value for Money; and

The Department for International Development’s (DFID) Approach to Anti-Corruption;

Two cover specific programmes in DFID’s country offices:

DFID’s Climate Change Programme in Bangladesh; and

DFID’s Support to the Health Sector in Zimbabwe.

See the ICAI website for further details

RD Comment: re the “ICAI’s Approach to Effectiveness and Value for Money” paper, see my comments here. In summary:

  • This paper is confusingly titled. It is really about ICAI’s overall approach to evaluation, and covers more than “value for money and effectiveness”.
  • The 4Es analysis of the concepts of “value for money and effectiveness” has potential, but does not seem to be developed any further.
  • The proposed workings of the traffic light system are opaque. It is not clear how these judgements will be built up from subsidiary judgements, nor what they mean in the simplest terms of success and failure. Nor is there a “None of the above: there is not sufficient information to make a judgement” option.

M&E Software: A List

Well, the beginnings of a list…

PLEASE NOTE: No guarantee can be given about the accuracy of information provided on the linked websites about the M&E software concerned, and its providers. Please proceed with due caution when downloading any executable programs.

Contents on this page: Stand-alone systems | Online systems | Survey supporting software | Sector-specific tools | Qualitative data analysis | Data mining / Predictive modelling | Program Logic / Theory of Change modeling | Dynamic models | Mind-mapping software | Collaboration software | Excel-based tools | Uncategorised and misc other

If you have any advice or opinions on any of the applications below, please tell us more via this survey.

Stand-alone systems

  • AidProject M+E for Donor-funded aid projects
  • Flamingo and Monitoring Organiser: “In order to implement FLAMINGO, it is crucial to first define the inputs (or resources available), activities, outputs and outcomes”
  • HIV/AIDS Data Capturing and Reporting Platform [Monitoring and Evaluation System]
  • PacPlan: “Results-Based Planning, Monitoring and Evaluation Software and Process Solution”
  • Prome Web: A project management, monitoring and evaluation software. Adapted for aid projects in developing countries
  • Sigmah: “humanitarian project management open source software”

Online systems

  • Activity Info: “an online humanitarian project monitoring tool, which helps humanitarian organizations to collect, manage, map and analyze indicators. ActivityInfo has been developed to simplify reporting and allow for real-time monitoring”
  • AKVO: “a paid-for platform that covers data collection, analysis, visualisation and reporting”
  • Canva Mind Maps: “Create a mind map with Canva and bring your thoughts to life. Easy to use, completely online and completely free mind mapping software”
  • DevResults: “web-based project management tool specially designed for the international development community.” Including M&E, mapping, budgeting, checklists, forms, and collaboration facilities.
  • Granity: “Management and reporting software for Not-for-profits Making transparency easy”
  • IndiKit: Guidance on SMART indicators for relief and development programmes
  • Kashana: An open sourced, web-based Monitoring, Evaluation & Learning (MEL) product for development projects and organisations
  • Kinaki: “Kinaki is a unique and intuitive project design, data collection, analysis, reporting and sharing tool”
  • KI-Projects™: Monitoring and Evaluation Software
  • Kobo Toolbox: “a free, more user-friendly way to deploy Open Data Kit surveys. It was developed with humanitarian purposes in mind, but could be used in various contexts (and not just for surveys). There is an Android data collection app that works offline”
  • Logalto: “Collaborative Web-Based Software for Monitoring and Evaluation of International Development Projects”
  • M&E Online: “Web-based monitoring and evaluation software tool”
  • Monitoring and Evaluation Online: Online Monitoring and Evaluation Software Tool
  • SmartME: “SmartME is a tried and tested comprehensive Fund Management and M&E software platform to manage funds better”
  • Systmapp: “cloud-based software that uses a patent-pending methodology to connect monitoring, planning, and knowledge management for international development organisations”
  • TolaData “is a program management and M&E platform that helps organisations create data-driven impact through the adaptive and timely management of projects”
  • WebMo: Web-based project monitoring for development cooperation

Survey supporting software

  • CommCare: a mobile data collection platform.
  • EthnoCorder is mobile multimedia survey software for your iPhone
  • HarvestYourData: iPad & Android Survey App for Mobile Offline Data Collection
  • KoBoToolbox is a suite of tools for field data collection for use in challenging environments. Free and open source
  • Magpi (formerly EpiSurvey)  – provides tools for mobile data collection, messaging and visualisation, lets anyone create an account, design forms, download them to phones, and start collecting data in minutes, for free.
  • Open Data Kit (ODK) is a free and open-source set of tools which help organizations author, field, and manage mobile data collection solutions
  • REDCap, a secure web application for building and managing online surveys and databases… specifically geared to support online or offline data capture for research studies and operations
  • Sensemaker(c) “links micro-narratives with human sense-making to create advanced decision support, research and monitoring capability in both large and small organisations.”
  • Comparisons

Sector-specific tools

  • Mwater for WASH, which explicitly aims to make the data (in this case water quality). Free and open source
  • Miradi: Adaptive Management Software for Conservation projects (https://www.miradi.org/)

Qualitative data analysis

  • Dedoose, a cross-platform app for analyzing qualitative and mixed methods research with text, photos, audio, videos, spreadsheet data and more
  • NVivo, powerful software for qualitative data analysis.
  • HyperRESEARCH “…gives you complete access and control, with keyword coding, mind-mapping tools, theory building and much more”.
  • Impact Mapper: “A new online software tool to track trends in stories and data related to social change”

Data mining / predictive modeling

  • RapidMiner Studio. Free and paid-for versions. Data Access (Connect to any data source, any format, at any scale), Data Exploration (Quickly discover patterns or data quality issues), Data Blending (Create the optimal data set for predictive analysis), Data Cleansing (Expertly cleanse data for advanced algorithms), Modeling (Efficiently build and deliver better models faster), Validation (Confidently & accurately estimate model performance)
  • BigML. Free and paid for versions. Online service. “Machine learning made easy”
  • EvalC3: Tools for exploring and evaluating complex causal configurations, developed by Rick Davies (Editor of MandE NEWS). Free and available with Skype video support

Program Logic / Theory of Change modeling / Diagramming

  • Changeroo: “Changeroo assists organisations, programs and projects with a social mission to develop and manage high-quality Theories of Change”
  • Coggle: The clear way to share complex information
  • DAGitty: “a browser-based environment for creating, editing, and analyzing causal models (also known as directed acyclic graphs or causal Bayesian networks)”
  • Decision Explorer: a  tool for managing “soft” issues – the qualitative information that surrounds complex or uncertain situations.
  • DCED’s Evidence Framework – more a way of using a website than software as such, but definitely an approach that is replicable by others.
  • DoView – Visual outcomes and results planning
  • Draw.io:
  • Dylomo: “a free* web-based tool that you can use to build and present program logic models that you can interact with”
  • IdeaTree – Simultaneous Collaboration & Brainstorming Using Mind Maps
  • Insight Maker: “…express your thoughts using rich pictures and causal loop diagrams. … turn these diagrams into powerful simulation models.”
  • Kumu: a powerful data visualization platform that helps you organize complex information into interactive relationship maps.
  • Logframer 1.0 “a free project management application for projects based on the logical framework method”
  • LucidChart: Diagrams done right. Diagram and collaborate anytime on any device
  • Netway: a cyberinfrastructure designed to support collaboration on the development of program models and evaluation plans, provide connection to a virtual community of related programs, outcomes, measures and practitioners, and to provide quick access to resources on evaluation planning
  • Omnigraffle: for creating precise, beautiful graphics: website wireframes, electrical systems, family trees and maps of software classes
  • Theory maker: a free web app by Steve Powell for making any kind of causal diagram, i.e. a diagram which uses arrows to say what contributes to what.
  • TOCO – Theory of Change Online. A free version is available.
  • Visual Understanding Environment (VUE): open source ‘mind mapping’ freeware from Tufts Univ.
  • yEd – diagram editor that can be used to generate drawings of diagrams.  FREE. PS: There is now a web-based version of this excellent network drawing application

Dynamic models

  • CCTools: Map and steer complex systems, using Fuzzy Cognitive Maps and others [ This site is currently under reconstruction]
  • Loopy: A tool for thinking in systems
  • Mental Modeller: FCM modeling software that helps individuals and communities capture their knowledge in a standardized format that can be used for scenario analysis.
  • FCM Expert: Experimenting tools for Fuzzy Cognitive Maps
  • FCMapper: the first available FCM analysis tool based on MS Excel and FREE for non-commercial use.
  • FSDM: Fuzzy Systems Dynamics Model Implemented with a Graphical User Interface

Mind-Mapping software (tree diagrams)

  • MindView: “a professional mind mapping software that allows you to visually brainstorm, organize and present ideas.”
  • XMind: “mind mapping and brainstorming tool, designed to generate ideas, inspire creativity, brings you efficiency both in work and life.”
  • MindManager
  • Plectica: “Diagram your thinking in real time, together”

Collaboration software

  • Miro:  which can be used to make a collaborative ToC.

Excel-based tools

  • EvalC3: …tools for developing, exploring and evaluating predictive models of expected outcomes, developed by Rick Davies (Editor of MandE NEWS). Free and available with Skype video support

Uncategorised yet

  • OpenRefine (formerly Google Refine): a powerful tool for working with messy data: cleaning it, transforming it from one format into another, and extending it with web services and external data.
  • Overview is an open-source tool originally designed to help journalists find stories in large numbers of documents, by automatically sorting them according to topic and providing a fast visualization and reading interface. It’s also used for qualitative research, social media conversation analysis, legal document review, digital humanities, and more. Overview does at least three things really well.
Other lists
Other other

The Impact of Economics Blogs

David McKenzie (World Bank, BREAD, CEPR and IZA) and Berk Özler (World Bank). Policy Research Working Paper 5783. August 2011. Available as pdf. See also the authors’ blog about this paper.

Introduction: Practically nonexistent a decade ago, blogs by economic scholars have become commonplace. Economics blogs, such as Freakonomics, Marginal Revolution, Paul Krugman and Greg Mankiw, have built large followings – whether measured by subscriptions in Google Reader or by average daily page views (1). Cowen (2008) argues that blogs are the main way that the general public consumes economics in a given day and guesstimates that “…about 400,000 people are reading economics blogs and digesting them” on a daily basis.

These blogs not only give their creators an outlet to disseminate their ideas and work immediately in a format that is more accessible, but also enable instant feedback, are easy to share on the open web, and allow the bloggers a personal style rather than the inaccessible format of academic journals (Glenn, 2003; Dunleavy and Gilson 2011).

Our motivation in examining the impact of economics blogs stems from two observations about blogs and questions that arise from these. First, it seems fair to state that “…informing is the core business of blogging.” (McKenna and Pole 2008, p. 102) This leads to the question of whether blogs improve the dissemination of research findings and whether their readers are indeed more informed (2). On the one hand, coupling the large readership of blogs with the argument of Cowen (2008) that the best ones are written at a level far higher than that of any major newspapers offers the promise that economics blogs may have sizeable effects on the dissemination of economic research and on the knowledge and attitudes of their readers.

THE EVALUATION OF ORGANIZATION PERFORMANCE: NORMATIVE PRESCRIPTIONS VS. EMPIRICAL RESULTS

Vic Murray, University of Victoria, 2004. Available as pdf

Abstract: This paper reviews the underlying theoretical bases for the evaluation of organizational performance. It then examines representative samples of empirical research into actual evaluation practices in a variety of nonprofits in Canada, the U.S. and Britain. Some of the most popular tools and systems for evaluation currently recommended by consultants and others are then reviewed. Looking at this prescriptive literature, it is shown that, by and large, it takes little account of the findings of empirical research and, as a result, its approaches may often prove ineffective. An alternative that attempts to integrate the research findings with practical tools of value to practitioners is then suggested.

Introduction

It is a perplexing, but not uncommon, phenomenon in the world of nonprofit organization studies how little connection there is between the work of those who offer advice on how organizations in this sector might become more effective and that of those who carry out formally designed empirical research into how these organizations actually behave.  Nowhere is this gap between  “how to” and “what is” more apparent than in the field of performance assessment and evaluation.
