Workshop: Case Studies in Development Evaluation: Validity, Generalisation and Learning

Venue: University of Copenhagen
Date: May 21-23, 2012
Invitation and Call for Papers to International Workshop
Centre for Social Science Development Research

The Evaluation Department of the Danish Ministry of Foreign Affairs and the Centre for Social Science Development Research at the Institute of Food and Resource Economics, University of Copenhagen, are pleased to invite you to submit an abstract (preparatory to a full paper) to an International Workshop focusing on methodological and practical considerations in the use of case studies in evaluations of development. The workshop will be organised in collaboration with the journal Evaluation: the international journal of theory, research and practice.

A large number of development evaluations are broader, learning-oriented evaluations based on cases at country, sector or project level. A key challenge in these evaluations is how to deal with the question of external validity. Within the field of development evaluation the methodological and practical debate on how to address this issue has been relatively limited.

The Organising Committee of the International Workshop is seeking abstracts that address theoretical and methodological challenges, as well as more practical experiences, in the use of case studies in learning-oriented development evaluations.

Evaluating the Complex: Attribution, Contribution and Beyond.

Kim Forss, Mita Marra and Robert Schwartz, editors. Transaction Publishers, New Brunswick. May 2011. Available via Amazon

“Problem-solving by policy initiative has come to stay. Overarching policy initiatives are now the standard modus operandi for governmental and non-governmental organisations. Complex policy initiatives are not reserved for the big challenges of our times; they are also used for matters such as school achievement, regional development, urban planning, public health and safety. As policy and the ensuing implementation tend to be more complex than simple project and programme management, the task of evaluation has also become more complex.”

“The book begins with a theoretical and conceptual explanation of complexity and how it affects evaluation. The authors distinguish between, on the one hand, the common-sense understanding of complexity as something that is generally messy, involves many actors and has unclear boundaries and overlapping roles; and, on the other hand, complexity as a specific term from the systems sciences, implying non-linear relationships between phenomena. It is particularly in the latter sense that an understanding of complexity has a bearing on evaluation design, in respect of how evaluators approach the question of impact.”

“The book presents nine case studies covering a wide variety of policy initiatives, in public health (smoking prevention), homelessness, child labour, regional development, the HIV/AIDS pandemic, and international development cooperation. The use of case studies sheds light on the conceptual ideas at work in organisations addressing some of the world’s largest and most varied problems.”

“The evaluation processes described here commonly seek a balance between order and chaos. The interaction of four elements – simplicity, inventiveness, flexibility, and specificity – allows complex patterns to emerge. The case studies illustrate this framework and provide a number of examples of the practical management of complexity in light of contingency theories of the evaluation process itself. These theories in turn match the complexity of the evaluated policies, strategies and programmes. The case studies do not pretend to illustrate perfect evaluation processes; the focus is on learning and on seeking patterns that have proved satisfactory and where the evaluation findings have been robust and trustworthy.”

“The contingency theory approach of the book underscores a point also made in the Foreword by Professor Elliot Stern: “In a world characterised by interdependence, emergent properties, unpredictable change, and indeterminate outcomes, how could evaluation be immune?” The answer lies in the choice of methods as much as in the overall strategy and approach of evaluation.”

DFID & UKES Workshop on Development and Evaluation: Practical Ways Forward
Date:  WEDNESDAY 12 OCTOBER 2011
Venue: BIS Conference Centre, Victoria, London

Objectives:

  • To examine the key contributions of evaluation to international development
  • To provide an update on the accountability framework for evaluation in the UK
  • To explore the role of professional development in building evaluation capacity

THIS ONE-DAY EVENT will raise important issues in the world of development and evaluation. The workshop will offer the chance to hear from senior practitioners and will cover both the theory and the reality as experienced in many contexts. It will provide an update on the accountability framework, with particular reference to HM Treasury’s guidance for evaluation (the Magenta Book).

A major challenge for organisations is to develop their own staff as evaluation professionals. UKES will offer international insights as well as an update on its own guidance. DFID will report on how it is going about building its own community of evaluators. These perspectives will be presented alongside those from the NGO and voluntary sector. The day is relevant to all individuals and organisations with an interest and experience in development and evaluation, including donors, consultants, public and private sector representatives, academics and a wide range of other professionals.

Programme
The workshop will commence at 09.00 and close at 17.30.
Highlights will include:

  • Updates on the Independent Commission for Aid Impact (ICAI),  HM Treasury’s Magenta Book and the Cross Government Evaluation Group (CGEG)
  • How to evaluate in fragile states, conflict environments and other challenging situations
  • Case studies of evaluation at different levels: national, local and sector-specific
  • How to build professional capacity: the use of accreditation and adapting it to a range of organisations at government and civil society level

Registration
The workshop will be held at the BIS Conference Centre, 1 Victoria Street, London SW1H 0ET.
The registration fees are as follows:
UKES members  £75.00 + VAT
Non-members  £100.00 + VAT
Registration and the full programme for the workshop are available from the website  www.profbriefings.co.uk/depwf
For any further information, contact the workshop administrators:
Professional Briefings
37 Star Street
Ware
Hertfordshire SG12 7AA
Telephone:
01920 487672
Email: london@profbriefings.co.uk

Sound expectations: from impact evaluations to policy change

3ie Working paper # 12, 2011, by Center for the Implementation of Public Policies Promoting Equity and Growth (CIPPEC) Emails: vweyrauch@cippec.org, gdiazlangou@cippec.org

Abstract

“This paper outlines a comprehensive and flexible analytical conceptual framework to be used in the production of a case study series. The cases are expected to identify factors that help or hinder rigorous impact evaluations (IEs) from influencing policy and improving policy effectiveness. The framework has been developed to be adaptable to the realities of developing countries. It is intended as an analytical-methodological tool that should enable researchers to produce case studies identifying the factors that affect and explain the policy influence potential of impact evaluations. The approach should also enable comparison between cases and regions, to draw lessons that are relevant beyond the cases themselves.

There are two different, though interconnected, issues that must be dealt with when discussing the policy influence of impact evaluations. The first concerns the type of policy influence pursued and, aligned with this, the determination of whether the intended influence was accomplished. In this paper, we first introduce the discussion of the different types of policy influence objectives that impact evaluations usually pursue, which will ultimately help determine whether policy influence was indeed achieved. This discussion is mainly centered on whether an impact evaluation has had an impact on policy. The second issue relates to the identification of the factors and forces that mediate policy influence efforts, and is focused on why the influence was or was not achieved. We have identified and systematized these mediating factors and forces, and we approach them in this paper from the demand and supply perspectives, considering as well the intersection between the two.

The paper concludes that, ultimately, the fulfillment of policy change based on the results of impact evaluations is determined by the interplay of the policy influence objectives with the factors that affect the supply of, and demand for, research in the policymaking process.

The paper is divided into four sections. A brief introduction is followed by an analysis of policy influence as an objective of research, specifically of impact evaluations. The third section identifies factors and forces that enhance or undermine influence in public policy decision making. The paper concludes by pointing out the importance of measuring policy influence and enumerating a series of challenges that have to be further assessed.”
