Theory Construction and Model-Building Skills: A Practical Guide for Social Scientists

By James Jaccard, PhD, and Jacob Jacoby, PhD. Guilford Press, 2010. Available on Amazon. Found courtesy of a tweet by EvalCollective.

See also the book review by Brandy Pratt, Western Michigan University, in the Journal of MultiDisciplinary Evaluation, Volume 6, Number 14 (ISSN 1556-8180), August 2010.

Amazon book description: “Meeting a crucial need for graduate students and newly minted researchers, this innovative text provides hands-on tools for generating ideas and translating them into formal theories. It is illustrated with numerous practical examples drawn from multiple social science disciplines and research settings. The authors offer clear guidance for defining constructs, thinking through relationships and processes that link constructs, and deriving new theoretical models (or building on existing ones) based on those relationships. Step by step, they show readers how to use causal analysis, mathematical modeling, simulations, and grounded and emergent approaches to theory construction. A chapter on writing about theories contains invaluable advice on crafting effective papers and grant applications.

Useful pedagogical features in every chapter include:

  • Application exercises and concept exercises.
  • Lists of key terms and engaging topical boxes.
  • Annotated suggestions for further reading.”

US Office of Management and Budget: Increased Emphasis on Program Evaluations

Via Xceval: Not exactly breaking news (11 months later), but still likely to be of wide interest:

October 7, 2009
M-10-01
MEMORANDUM FOR THE HEADS OF EXECUTIVE DEPARTMENTS AND AGENCIES
FROM: Peter R. Orszag
Director
SUBJECT: Increased Emphasis on Program Evaluations

Rigorous, independent program evaluations can be a key resource in determining whether government programs are achieving their intended outcomes as well as possible and at the lowest possible cost. Evaluations can help policymakers and agency managers strengthen the design and operation of programs. Ultimately, evaluations can help the Administration determine how to spend taxpayer dollars effectively and efficiently — investing more in what works and less in what does not.
Although the Federal government has long invested in evaluations, many important programs have never been formally evaluated — and the evaluations that have been done have not sufficiently shaped Federal budget priorities or agency management practices. Many agencies lack an office of evaluation with the stature and staffing to support an ambitious, strategic, and relevant research agenda. As a consequence, some programs have persisted year after year without adequate evidence that they work. In some cases, evaluation dollars have flowed into studies of insufficient rigor or policy significance. And Federal programs have rarely evaluated multiple approaches to the same problem with the goal of identifying which ones are most effective.

To address these issues and strengthen program evaluation, OMB will launch the following government-wide efforts as part of the Fiscal Year 2011 Budget process… (read the full text in this pdf).

Measuring Results for Dutch Development Aid, Approaches and Future Directions

Date: October 4-7, 2010
Venue: Royal Tropical Institute, Amsterdam

The International Institute of Social Studies and the Amsterdam Institute for International Development invite applications and submissions for a training and conference event, Measuring Results for Dutch Development Aid: Approaches and Future Directions, with financial support from the Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation.

Participation is free of charge, but places are limited.
Deadline for applications: September 10, 2010
Click here to apply

Objectives: Share results from and experiences with impact evaluation in developing countries, and discuss their relevance for Dutch development cooperation.

Target Audiences: Researchers, NGOs, consulting companies and policy makers in the Netherlands conducting or using impact evaluation to study the effectiveness of development assistance.

Confirmed speakers:
  • Dr. Howard White, Director, International Initiative for Impact Evaluation (3ie)
  • Dr. Paul Gertler, Professor of Economics, University of California, Berkeley
  • Dr. Sulley Gariba, Executive Director, Institute for Policy Alternatives, Ghana
  • Prof. Ruerd Ruben, Director, Policy and Operations Evaluation Department of the Dutch Ministry of Development Cooperation (starting September 1)

Submit a paper (optional): Contributed papers are sought in three areas: (1) completed impact evaluations; (2) reviews of impact evaluations in a particular sector; (3) position papers on approaches to impact evaluation in relation to decision making.

Selection criteria: quality of the submission, a professional link with results assessment for development assistance, and/or participation in the impact evaluation training.

Maximum number of participants: 100

PROGRAM »

Measuring Up: HIV-related advocacy evaluation training pack (draft)

HIV-related advocacy evaluation training for civil society organisations.

Produced by the International HIV/AIDS Alliance (Secretariat), International Council of AIDS Service Organizations (ICASO), July 2010, 38 pages. Available as .pdf

“This training pack is published by the Alliance and the International Council of AIDS Service Organizations (ICASO) and consists of two guides designed for advocacy, monitoring and evaluation staff of civil society organisations (including networks) who are involved in designing, implementing and assessing advocacy projects at different levels. The purpose of these guides is to increase users’ capacity to evaluate the progress and results of their advocacy work. The guides aim to:

1. help users to identify and confront the challenges faced by community-based organisations evaluating HIV-related advocacy
2. introduce new thinking for designing advocacy evaluations
3. give users the opportunity to apply some aspects of the evaluation design process to their specific contexts
4. make users aware that advocacy evaluation is a fast-growing and evolving field, with a large number of publications on advocacy evaluation design, approaches and methods available via the Internet and summarised in the resources section of the learner’s guide.”

Updated MSC bibliography

PLEASE NOTE: the bibliography below has now been superseded by a more comprehensive bibliography here, which includes pdf copies of many of the papers plus a search facility. It will continue to be updated.

This (now older) page is intended to provide an update of the bibliography in the 2005 Most Significant Change (MSC) technique Users Guide.

Please feel free to suggest additions to this list through the Comment facility below, or by emailing the editor (Rick Davies).


Powerpoints

  • Seven sets of slides, used for a two-day MSC training in Delhi in 2008 by Rick Davies. Available on request, on condition of willingness to share any adaptations made.


Designing Initiative Evaluation: A Systems-oriented Framework for Evaluating Social Change Efforts

W. K. Kellogg Foundation, 2007. 48 pages. Available as pdf.

Purpose:

“This document is designed for use by external evaluators who conduct initiative evaluations for the W.K. Kellogg Foundation (WKKF) – and, hopefully, other foundations and government agencies. It presents a systems-oriented framework and four general designs for initiative and cluster evaluation. The designs are based on systems concepts related to change and the dynamics of systems. The focus is not on considering all ideas about systems that could be applied to initiative evaluation, but rather on how different dynamics within systems can serve as the basis for initiative evaluation designs.” Continue reading “Designing Initiative Evaluation: A Systems-oriented Framework for Evaluating Social Change Efforts”

UK Evaluation Society 2010 Annual Evaluation Conference

Evaluation in a turbulent world: Challenges, opportunities and innovation in evaluation practice
Date: 22-23 November 2010
Venue: Macdonald Burlington Hotel, Birmingham

Abstracts are now invited for this year’s UKES Annual Evaluation Conference. The online submission form is available via the conference website, www.profbriefings.co.uk/ukes2010. The closing date for receipt of submissions is 13 August 2010.

With the effects of the financial crisis still being felt, and with a new coalition government in Number 10, many evaluators find themselves operating in a very different policy environment. In particular, the rhetoric has changed from tackling the crisis (a central theme of last year’s conference on impact) to talk of austerity and cutting back the public sector, a major source of sponsorship for evaluation. While this environment offers tough challenges, it also presents opportunities, in particular in the development and promotion of new evaluation methodologies, relationships and approaches. More than ever, there will be a need to assess what is of value, what has quality, and in what circumstances evaluation can contribute to informed policy-making and debate. Continue reading “UK Evaluation Society 2010 Annual Evaluation Conference”

The American Evaluation Association annual conference: Evaluation Quality

Date: November 10-13, 2010
Venue: San Antonio, Texas

The American Evaluation Association invites evaluators from around the world to attend its annual conference to be held Wednesday, November 10, through Saturday, November 13, 2010 in San Antonio, Texas. We’ll be convening at the lovely Grand Hyatt San Antonio, right in the heart of the vibrant city and adjacent to the Riverwalk’s nightlife, restaurants, and strolling grounds. Discounted hotel reservations will be available in March.

AEA’s annual meeting is expected to bring together approximately 2500 evaluation practitioners, academics, and students, and represents a unique opportunity to gather with professional colleagues in a collaborative, thought-provoking, and fun atmosphere.

The conference is broken down into 44 Topical Strands that examine the field from the vantage point of a particular methodology, context, or issue of interest, as well as the Presidential Strand highlighting this year’s Presidential Theme of Evaluation Quality. Presentations may explore the conference theme or any aspect of the full breadth and depth of evaluation theory and practice.

Proposals are due by midnight Eastern time on Friday, March 19, 2010.
For more information: http://www.eval.org/eval2010/10cfp.htm

Resource Pack on Systematization of Experiences

ActionAid International, 2009, 104 pages. Available as pdf (3.39Mb)

See also the associated AAI website on systematization

Systematization is a methodology that allows us to:

  • Organise and document what we have learnt through our work
  • Better understand the impact of our work and the ways in which change happens
  • Develop deeper understanding about our work and the challenges we face to inform new ways of working
  • Capture and communicate the complexity and richness of our work
Systematization “helps people involved in different kinds of practice to organize and communicate what they have learned. We are talking about … so-called … lessons learned, about which everybody talks nowadays, but are not so easy to produce.” (AAI systematization resource pack, 2009, p. 1)

Critique of Governance Assessment Applications

GSDRC Helpdesk Research Report by Sumedh Rao, Governance and Social Development Resource Centre, July 2010. 16 pages. Available as pdf

Query: Identify the key literature that critiques the use and application of governance assessments. Enquirer: DFID

Contents
1. Overview
2. General critiques
3. Critiques of measurement
4. Worldwide Governance Indicators (WGI)
5. African Peer Review Mechanism (APRM)
6. Other assessments
7. Donor Guidance
8. Initiatives for improving assessments

Includes a bibliography of 39 annotated references. Continue reading “Critique of Governance Assessment Applications”
