Launch of online database of research accountability tools

Announcement: 7 September: launch of online database of research accountability tools

The One World Trust, with support from the IDRC, has created an interactive online database of tools to help organisations conducting policy-relevant research become more accountable.

Processes of innovation and research are fundamental to improvements in quality of life and to creating a better society. But to realise these benefits, the quality of research alone is not enough. Organisations engaged in policy-relevant research and innovation must continually take into account and balance the needs of a diverse set of stakeholders: from the intended research users, to their clients and donors, to the research community and the research participants. Responsiveness to all of these is crucial if such organisations are to be legitimate and effective. In this, accountable processes are as important as high-quality research products.

The Trust has built the online accountability database to help researchers, campaigners and research managers think through how they use evidence to influence policy in an accountable way. The database takes into account that research organisations are increasingly diverse – they are no longer just universities, but also private companies, public institutes and non-profit think-tanks. No single framework can encompass this diversity.

Instead, the database provides an inventory of over two hundred tools, standards and processes within a broad, overarching accountability framework. With a dynamic interface and several search functions, it allows users to identify the aspects of accountability that interest them, and provides ideas for improving their accountability in that context. Each tool is supported by sources and further reading.

We also encourage engagement with and discussion of the database content by allowing users to comment on individual tools, or to submit their own tools, processes and standards for inclusion.

The database is an output of a three-year project, titled “Accountability Principles for Research Organisations.” Working with partners across the globe, the project has generated an accountability framework which is sufficiently flexible to apply to many contexts and different organisations.

The database will be available online from 7 September.

For more information about the project, please feel free to contact us at bwhitty@oneworldtrust.org. For the database, please visit www.oneworldtrust.org/apro

Conference: Systemic Approaches in Evaluation

Date: January 25-26, 2011
Venue: Eschborn, Germany

Call for Papers & Save the Date

Development programs promote complex reforms and change processes. Today, such processes are characterized more than ever by uncertainty and unpredictability, posing a major challenge to the evaluation of development projects. In order to understand which projects work, why, and under which conditions, evaluations need to embrace the interaction of various influencing factors and the multi-dimensionality of societal change. However, present evaluation approaches often presuppose predictability and linear chains of events. They reflect the natural human need for certainty but are often not suited to comprehending complex situations.

In order to fill this gap, systemic approaches to the evaluation of development programs are increasingly being discussed. A key concept is interdependency rather than linear cause-effect relations. Systemic evaluations look at interrelations instead of analyzing isolated facts and figures. They focus on the interaction between various stakeholders with different motivations, interests, perceptions, and perspectives.

On January 25th and 26th, the Evaluation Unit of GTZ will offer a forum to discuss systemic approaches to evaluation at an international conference with participants from politics, science and practice. Drawing on presentations, discussion rounds and case studies, we will tackle, among others, the following questions:

• What characterizes a “systemic evaluation”?
• What is new about systemic evaluations, and what makes them different from other (e.g. participatory) approaches?
• For which kinds of evaluations are systemic approaches (not) useful?
• Which concrete methods and tools from systemic consulting practice can be used in systemic evaluation?
• Which quality standards do systemic evaluations have to meet?

We welcome contributions on good practice examples and/or systemic tools for evaluation. Please submit your proposals (1000 words maximum) in English by October 31st to Sabine Dinges (sabine.dinges@gtz.de).

We look forward to receiving your abstracts. Further information on the registration process will soon be provided.

Martina Vahlhaus, Head of Evaluation Unit
Michael Gajo, Senior Evaluation Officer
GTZ German Technical Cooperation
P.O. Box 5180
65726 Eschborn

Theory Construction and Model-Building Skills: A Practical Guide for Social Scientists

By James Jaccard PhD and Jacob Jacoby PhD, Guilford Press, 2010. Available on Amazon. Found courtesy of a tweet by EvalCollective.

See also the book review by Brandy Pratt, Western Michigan University, in the Journal of MultiDisciplinary Evaluation, Volume 6, Number 14, August 2010 (ISSN 1556-8180).

Amazon book description: “Meeting a crucial need for graduate students and newly minted researchers, this innovative text provides hands-on tools for generating ideas and translating them into formal theories. It is illustrated with numerous practical examples drawn from multiple social science disciplines and research settings. The authors offer clear guidance for defining constructs, thinking through relationships and processes that link constructs, and deriving new theoretical models (or building on existing ones) based on those relationships. Step by step, they show readers how to use causal analysis, mathematical modeling, simulations, and grounded and emergent approaches to theory construction. A chapter on writing about theories contains invaluable advice on crafting effective papers and grant applications.

Useful pedagogical features in every chapter include:

* Application exercises and concept exercises.
* Lists of key terms and engaging topical boxes.
* Annotated suggestions for further reading.”

US Office of Management and Budget: Increased emphasis on Program Evaluations

Via Xceval: Not exactly breaking news (11 months later), but still likely to be of wide interest:

October 7, 2009
M-10-01
MEMORANDUM FOR THE HEADS OF EXECUTIVE DEPARTMENTS AND AGENCIES
FROM: Peter R. Orszag
Director
SUBJECT: Increased Emphasis on Program Evaluations

Rigorous, independent program evaluations can be a key resource in determining whether government programs are achieving their intended outcomes as well as possible and at the lowest possible cost. Evaluations can help policymakers and agency managers strengthen the design and operation of programs. Ultimately, evaluations can help the Administration determine how to spend taxpayer dollars effectively and efficiently — investing more in what works and less in what does not.
Although the Federal government has long invested in evaluations, many important programs have never been formally evaluated — and the evaluations that have been done have not sufficiently shaped Federal budget priorities or agency management practices. Many agencies lack an office of evaluation with the stature and staffing to support an ambitious, strategic, and relevant research agenda. As a consequence, some programs have persisted year after year without adequate evidence that they work. In some cases, evaluation dollars have flowed into studies of insufficient rigor or policy significance. And Federal programs have rarely evaluated multiple approaches to the same problem with the goal of identifying which ones are most effective.

To address these issues and strengthen program evaluation, OMB will launch the following government-wide efforts as part of the Fiscal Year 2011 Budget process: … (read the full text in this pdf)

Measuring Results for Dutch Development Aid, Approaches and Future Directions

Date: October 4-7, 2010
Venue: Royal Tropical Institute, Amsterdam

The International Institute of Social Studies and the Amsterdam Institute for International Development invite applications and submissions for a training and conference event on Measuring Results for Dutch Development Aid: Approaches and Future Directions, with financial support from the Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation.

Participation is free of charge, but places are limited.
Deadline for applications: September 10, 2010
Click here to apply

Objectives: Share results from and experiences with impact evaluation in developing countries, and discuss their relevance for Dutch development cooperation.

Target Audiences: Researchers, NGOs, consulting companies and policy makers in the Netherlands conducting or using impact evaluation to study the effectiveness of development assistance.

Confirmed speakers: Dr. Howard White, director of the International Initiative for Impact Evaluation (3ie).
Dr. Paul Gertler, Professor of Economics, University of California, Berkeley.
Dr. Sulley Gariba, Executive Director, Institute for Policy Alternatives, Ghana.
Prof. Ruerd Ruben, director of the Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation (starting Sept 1).

Submit a paper (optional): Contributed papers are sought in the areas of (1) completed impact evaluations, (2) reviews of impact evaluations in a particular sector, and (3) position papers on approaches to impact evaluation in relation to decision making.

Selection criteria: Quality of submission and/or professional link with result assessment for development assistance and/or participation in the impact evaluation training.

Maximum number of participants: 100

PROGRAM »

Measuring Up: HIV-related advocacy evaluation training pack (draft)

HIV-related advocacy evaluation training for civil society organisations.

Produced by the International HIV/AIDS Alliance (Secretariat) and the International Council of AIDS Service Organizations (ICASO), July 2010, 38 pages. Available as a pdf.

“This training pack is published by the Alliance and the International Council of AIDS Service Organizations (ICASO) and consists of two guides designed for advocacy, monitoring and evaluation staff of civil society organisations (including networks) who are involved in designing, implementing and assessing advocacy projects at different levels. The purpose of these guides is to increase users’ capacity to evaluate the progress and results of their advocacy work. The guides aim to:

1. help users to identify and confront the challenges faced by community-based organisations evaluating HIV-related advocacy
2. introduce new thinking for designing advocacy evaluations
3. give users the opportunity to apply some aspects of the evaluation design process to their specific contexts
4. make users aware that advocacy evaluation is a fast-growing and evolving field, with a large number of publications on advocacy evaluation design, approaches and methods available via the Internet and summarised in the resources section of the learner’s guide.”

Updated MSC bibliography

PLEASE NOTE: The bibliography below has now been superseded by a more comprehensive bibliography here. This now includes pdf copies of many of the papers plus a search facility. It will continue to be updated.

This (now older) page is intended to provide an update of the bibliography in the 2005 Most Significant Change (MSC) technique Users Guide.

Please feel free to suggest additions to this list, through the Comment facility below, or by emailing the editor (Rick Davies).

Papers

 

Powerpoints

  • Seven sets of slides, used for a 2-day MSC training in Delhi, 2008, by Rick Davies. Available on request, on condition of willingness to share any adaptations made.

YouTube video

Other

 

Designing Initiative Evaluation: A Systems-oriented Framework for Evaluating Social Change Efforts

W. K. Kellogg Foundation, 2007. 48 pages. Available as pdf.

Purpose:

“This document is designed for use by external evaluators who conduct initiative evaluations for the W.K. Kellogg Foundation (WKKF) – and, hopefully, other foundations and government agencies. It presents a systems-oriented framework and four general designs for initiative and cluster evaluation. The designs are based on systems concepts related to change and the dynamics of systems. The focus is not on considering all the ideas about systems that could be applied to initiative evaluation, but rather on how different dynamics within systems can serve as the basis for initiative evaluation designs.”

UK Evaluation Society 2010 Annual Evaluation Conference

Evaluation in a turbulent world: Challenges, opportunities and innovation in evaluation practice
Date: 22-23 November 2010
Venue: Macdonald Burlington Hotel, Birmingham

Abstracts are now invited for this year’s UKES Annual Evaluation Conference. The online submission form is available via the conference website www.profbriefings.co.uk/ukes2010. The closing date for receipt of submissions is 13 August 2010.

With the effects of the financial crisis still being felt, and with a new coalition government in Number 10, many evaluators find themselves operating in a very different policy environment. In particular, the rhetoric has changed from tackling the crisis (a central theme of last year’s conference, which looked at impact) to talk of austerity and cutting back the public sector, a major source of sponsorship for evaluation. While this environment offers tough challenges, it also presents opportunities – in particular in the development and promotion of new evaluation methodologies, relationships and approaches. More than ever there will be a need to assess what is of value, what has quality, and in what circumstances evaluation can contribute to informed policy-making and debate.

The American Evaluation Association annual conference: Evaluation Quality

Date: November 10-13, 2010
Venue: San Antonio, Texas

The American Evaluation Association invites evaluators from around the world to attend its annual conference to be held Wednesday, November 10, through Saturday, November 13, 2010 in San Antonio, Texas. We’ll be convening at the lovely Grand Hyatt San Antonio, right in the heart of the vibrant city and adjacent to the Riverwalk’s nightlife, restaurants, and strolling grounds. Discounted hotel reservations will be available in March.

AEA’s annual meeting is expected to bring together approximately 2500 evaluation practitioners, academics, and students, and represents a unique opportunity to gather with professional colleagues in a collaborative, thought-provoking, and fun atmosphere.

The conference is broken down into 44 Topical Strands that examine the field from the vantage point of a particular methodology, context, or issue of interest to the field as well as the Presidential Strand highlighting this year’s Presidential Theme of Evaluation Quality. Presentations may explore the conference theme or any aspect of the full breadth and depth of evaluation theory and practice.

Proposals are due by midnight in the Eastern time zone, on Friday, March 19, 2010.
For more information: http://www.eval.org/eval2010/10cfp.htm