Evaluating the Complex: Attribution, Contribution and Beyond.

Kim Forss, Mita Marra and Robert Schwartz, editors. Transaction Publishers, New Brunswick. May 2011. Available via Amazon

“Problem-solving by policy initiative has come to stay. Overarching policy initiatives are now standard modus operandi for governmental and non-governmental organisations. But complex policy initiatives are not reserved only for the big challenges of our times; they are also used for matters such as school achievement, regional development, urban planning, and public health and safety. As policy and the ensuing implementation tend to be more complex than simple project and programme management, the task of evaluation has also become more complex.”

“The book begins with a theoretical and conceptual explanation of complexity and how that affects evaluation. The authors make the distinction between, on the one hand, the common-sense understanding of complexity as something that is generally messy, involves many actors and has unclear boundaries and overlapping roles; and, on the other hand, complexity as a specific term from the systems sciences, which implies non-linear relationships between phenomena. It is particularly in the latter sense that an understanding of complexity has a bearing on evaluation design, in respect of how evaluators approach the question of impact.”
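To make the systems-science sense of complexity concrete, here is a minimal sketch (my illustration, not the book’s) using the logistic map, a standard textbook non-linear system. It shows how a non-linear relationship makes outcomes wildly disproportionate to small differences in inputs, which is precisely what complicates any evaluation design that assumes impact scales smoothly with an intervention.

```python
# Purely illustrative sketch (not from the book): the logistic map, a textbook
# non-linear system, shows why non-linear relationships frustrate simple
# before/after attribution: two near-identical starting points diverge.

def logistic_map(x0: float, r: float = 3.9, steps: int = 30) -> list:
    """Iterate x -> r * x * (1 - x), a standard non-linear recurrence."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.200000)  # baseline trajectory
b = logistic_map(0.200001)  # starting condition changed by 0.0005%

for t in (0, 10, 20, 30):
    print(f"step {t:>2}: {a[t]:.4f} vs {b[t]:.4f}  (gap {abs(a[t] - b[t]):.4f})")

# By step 30 the two trajectories bear no resemblance to each other: output is
# not proportional to input, so 'more of X yields proportionally more of Y'
# -- the assumption behind much impact measurement -- no longer holds.
```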

“The book presents nine case studies that cover a wide variety of policy initiatives, in public health (smoking prevention), homelessness, child labour, regional development, international development cooperation, and the HIV/AIDS pandemic. The use of case studies sheds light on the conceptual ideas at work in organisations addressing some of the world’s largest and most varied problems.”

“The evaluation processes described here commonly seek a balance between order and chaos. The interaction of four elements – simplicity, inventiveness, flexibility, and specificity – allows complex patterns to emerge. The case studies illustrate this framework and provide a number of examples of practical management of complexity in light of contingency theories of the evaluation process itself. These theories in turn match the complexity of the evaluated policies, strategies and programmes. The case studies do not pretend to illustrate perfect evaluation processes; the focus is on learning and on seeking patterns that have proved satisfactory and where the evaluation findings have been robust and trustworthy.”

“The contingency theory approach of the book underscores a point also made in the Foreword by Professor Elliot Stern: “In a world characterised by interdependence, emergent properties, unpredictable change, and indeterminate outcomes, how could evaluation be immune?” The answer lies in the choice of methods as much as in the overall strategy and approach of evaluation.”

Assessing the impact of blogs: Some evidence and analysis

See:

The Impact of Economic Blogs – Part I: Dissemination by David McKenzie, Berk Özler, 2011-08-05.

  • Question 1: “Do blogs lead to increased dissemination of research papers?”
  • Answer: “Blogging about a paper causes a large increase in the number of abstract views and downloads in the same month. These increases are massive compared to the typical abstract views and downloads these papers get. However, only a minority of readers click through the blog to the download.” [view paper by McKenzie for more details]

The Impact of Blogs Part II: Blogging enhances the blogger’s reputation. But, does it influence policy? by David McKenzie, Berk Özler, 2011-08-10

  • Question 2: Does blogging improve reputation?
  • Answer: “Regular blogging is strongly and significantly associated with being more likely to be viewed as a favorite economist.”
  • Question 3: Does blogging influence policy?
  • Answer 1: This is where we haven’t been able to find much evidence to date [see blog for details of some case examples]
  • Answer 2: In response to a case example provided by a reader: “my sense is that:
    i) very few posts actually influence policy;
    ii) there are very few readers of blogs who are actually in a position to influence policy; but
    iii) it only takes one post read by the right reader to potentially make a big difference. This poses enormous problems for statistical inference, since these are likely rare events, but I think it is still useful to see whether there are in fact any plausible candidates.”
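To see why such rare events pose problems for statistical inference, here is a minimal back-of-the-envelope sketch (mine, not the authors’); the per-post influence rates and sample sizes below are hypothetical, chosen only to illustrate the point.

```python
# Back-of-the-envelope sketch (not from the paper): if a blog post influences
# policy with small probability p, how often would a study of n posts observe
# no influenced policies at all? All rates and sample sizes are hypothetical.

def prob_zero_hits(n: int, p: float) -> float:
    """P(none of n posts shows any policy influence) = (1 - p)^n."""
    return (1.0 - p) ** n

for p in (0.01, 0.001, 0.0001):      # hypothetical per-post influence rate
    for n in (100, 1000):            # hypothetical number of posts studied
        print(f"p={p:<7} n={n:<5} P(zero events) = {prob_zero_hits(n, p):.2f}")

# With p = 0.0001, even 1,000 posts show zero events about 90% of the time,
# and by the 'rule of three' observing zero events in n posts only bounds the
# true rate below roughly 3/n -- far too loose to separate 'no influence'
# from 'rare but decisive influence'.
```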

The Impact of Blogs Part III: Results from a new survey and an experiment! by David McKenzie, Berk Özler, COMING ON 2011-08-15

  • Including these headings: Survey evidence – why don’t you just ask blog readers?; The Experiment; Impacts on institutional reputation; Impacts on knowledge and attitudes.
  • The Summary: “Using a variety of data sources and empirical techniques, we feel we have provided quantitative evidence that economic blogs are doing more than just providing a new source of procrastination for writers and readers. To our knowledge, these findings are the first quantitative evidence to show that blogs are having some impacts. There are large impacts on dissemination of research; significant benefits in terms of the bloggers becoming better known and more respected within the profession; positive spillover effects for the bloggers’ institutions; and some evidence from our experiment that they may influence attitudes and knowledge among their readers. Blogs potentially have many impacts, and we are only measuring some of them, but the evidence we have suggests economics blogs are playing an important role in the profession.”

RD Comment: Two comments of note towards the end of the paper:

  • “…Table 6 shows that blog readership has not changed many of these attitudes towards methodology, with no significant experimental changes in the full sample. Amongst the subsamples, the most significant change occurs in the male sample, where there is an increase in the proportion that believe that it is difficult to succeed as a development economist on the job market without having a randomized experiment.”
  • “There is also some evidence among the research-focused subsample that more agree with the statement that external validity is no more of a concern in experiments than in most non-experimental studies (something discussed in David’s favorite rant).”
  • RD comment: This may be true, but experimental studies are often held up as being of more value than non-experimental studies. So the lack of difference is a problem, not a non-issue.

 

DPC Policy Discussion Paper: Evaluating Influencing Strategies and Interventions

A paper to the DFID Development Policy Committee. Available as pdf, June 2011

Introduction
“1. The Strategy Unit brief of April 2008 envisaged that DFID should become more systematic in planning and implementing influencing efforts. Since then, procedures and guidance have been developed and there is an increasingly explicit use of influencing objectives in project log frames and more projectisation of influencing efforts. Evaluation studies and reports have illustrated the wide variety of DFID influencing efforts and the range of ambition and resources involved in trying to generate positive changes in the aid system or in partner countries. These suggest that being clear and realistic about DFID’s influencing objectives, the stakeholders involved and the specific changes being sought is the fundamental requirement for an effective intervention. It is also the basis for sound monitoring and evaluation.
2. To support this initiative, the Evaluation Department organised a series of workshops in 2009 and 2010 to further develop the measurement and evaluation of influencing interventions, producing a draft How to Note with reference to multilateral organisations in September 2010. However, with the changes to DFID’s corporate landscape in 2010 and early 2011, this work was put on hold pending the conclusion of some key corporate pieces of work.
3. An increase in demand for guidance is also noted, given the changing external environment. DFID is now positioning itself to address the demands of the changing global aid landscape with new initiatives, such as the Global Development Partnerships programme. This has a relatively small spend; however, its success will be measured largely by the depth and reach of its influence.
4. The Evaluation Department is now seeking guidance on how important the Development Policy Committee considers the evaluation of influencing interventions, and the direction in which it would like this developed.
5. This Paper sets out why evaluation of influencing interventions is important, why now, key theories of change and an influencing typology, value for money of an influencing intervention and metrics, and finally, the challenges of measuring influence.”

See also the associated “Proposed Influencing Typology”

The paper also refers to “Appraising, Measuring and Monitoring Influencing: How Can DFID Improve?” by the DFID Strategy Unit April 2008, which does not seem to be available on the web.

RD Comment: I understand that this is considered a draft document and that comments on it would be welcomed. Please feel free to make your comments below.

The Elusive Craft of Evaluating Advocacy

Original paper by Steven Teles, Department of Political Science, Johns Hopkins University, and Mark Schmitt, Roosevelt Institute. Published with support provided by The William and Flora Hewlett Foundation. Found courtesy of @alb202

A version of this paper was published in the Stanford Social Innovation Review in May 2011 and is available as a pdf.

“The political process is chaotic and often takes years to unfold, making it difficult to use traditional measures to evaluate the effectiveness of advocacy organizations. There are, however, unconventional methods one can use to evaluate advocacy organizations and make strategic investments in that arena.”

Learning how to learn: eight lessons for impact evaluations that make a difference

ODI Background Notes, April 2011. Author: Ben Ramalingam

“This Background Note outlines key lessons on impact evaluations, utilisation-focused evaluations and evidence-based policy. While methodological pluralism is seen as the key to effective impact evaluation in development, the emphasis here is not on methods per se. Instead, the focus is on the range of factors and issues that need to be considered for impact evaluations to be used in policy and practice – regardless of the method employed. This Note synthesises research by ODI, ALNAP, 3ie and others to outline eight key lessons for consideration by all of those with an interest in impact evaluation and aid effectiveness.” 8 pages

The 8 lessons:
Lesson 1:  Understand the key stakeholders
Lesson 2:  Adapt the incentives
Lesson 3:  Invest in capacities and skills
Lesson 4:  Define impact in ways that relate to the specific context
Lesson 5:  Develop the right blend of methodologies
Lesson 6:  Involve those who matter in the decisions that matter
Lesson 7:  Communicate effectively
Lesson 8:  Be persistent and flexible

See also Ben’s Thursday, April 14, 2011 blog posting: When will we learn how to learn?

[RD comments on this paper]

1. The case for equal respect for different methodologies can be overstated. I feel this is the case when Ben argues that “First, it has been shown that the knowledge that results from any type of particular impact evaluation methodology is no more rigorous or widely applicable than the results from any other kind of methodology.” While it is important that evaluation results affect subsequent policy and practice, their adoption and use is not the only outcome measure for evaluations. We also want evaluation results to have some reliability and validity, to stand the test of time, and to be generalisable to other settings with some confidence. An evaluation could affect policy and practice without necessarily being of good quality, defined in terms of reliability and validity.

  • Nevertheless, I like Ben’s caution about focusing too much on evaluations as outputs and the need to focus more on outcomes, the use and uptake of evaluations.

2. The section of Ben’s paper that most attracted my interest was the story about the Joint Evaluation of Emergency Assistance to Rwanda, and how the evaluation team managed to ensure it became “one of the most influential evaluations in the aid sector”. We need more case studies of these kinds of events and then a systematic review of those case studies.

3. When I read various statements like this: “As well as a supply of credible evidence, effort needs to be made to understand the demand for evidence” I have an image in my mind of evaluators as humble supplicants at the doorsteps of the high and mighty. Isn’t it about time that evaluators turned around and started demanding that policy makers disclose the evidence base of their existing policies? As I am sure has been said by others before, when you look around there does not seem to be much evidence of evidence-based policy making. Norms and expectations need to be built up, and then there may be more interest in what evaluations have to say. A more assertive and questioning posture is needed.

Sound expectations: from impact evaluations to policy change

3ie Working Paper #12, 2011, by the Center for the Implementation of Public Policies Promoting Equity and Growth (CIPPEC). Emails: vweyrauch@cippec.org, gdiazlangou@cippec.org

Abstract

“This paper outlines a comprehensive and flexible analytical conceptual framework to be used in the production of a case study series. The cases are expected to identify factors that help or hinder rigorous impact evaluations (IEs) from influencing policy and improving policy effectiveness. This framework has been developed to be adaptable to the reality of developing countries. It is intended as an analytical-methodological tool which should enable researchers to produce case studies which identify factors that affect and explain impact evaluations’ policy influence potential. The approach should also enable comparison between cases and regions to draw lessons that are relevant beyond the cases themselves.

There are two different, though interconnected, issues that must be dealt with while discussing the policy influence of impact evaluations. The first issue has to do with the type of policy influence pursued and, aligned with this, the determination of the accomplishment (or not) of the intended influence. In this paper, we first introduce the discussion regarding the different types of policy influence objectives that impact evaluations usually pursue, which will ultimately help determine whether policy influence was indeed achieved. This discussion is mainly centered around whether an impact evaluation has had impact on policy. The second issue is related to the identification of the factors and forces that mediate the policy influence efforts and is focused on why the influence was achieved or not. We have identified and systematized the mediating factors and forces, and we approach them in this paper from the demand and supply perspective, considering as well the intersection between these two.

The paper concludes that, ultimately, the fulfillment of policy change based on the results of impact evaluations is determined by the interplay of the policy influence objectives with the factors that affect the supply and demand of research in the policymaking process.

The paper is divided into four sections. A brief introduction is followed by an analysis of policy influence as an objective of research, specifically impact evaluations. The third section identifies factors and forces that enhance or undermine influence in public policy decision making. The research ends by pointing out the importance of measuring policy influence and enumerating a series of challenges that have to be further assessed.”

USAID Evaluation Policy

14 pages. Available as pdf. Bureau for Policy, Planning, and Learning, January 19th, 2011

Contents: 1. Context; 2. Purposes of Evaluation; 3. Basic Organizational Roles and Responsibilities; 4. Evaluation Practices; 5. Evaluation Requirements; 6. Conclusion. Annex: Criteria to Ensure the Quality of the Evaluation Report

Five challenges facing impact evaluation

PS 2018-02-23: The original NONIE Meeting 2011 website is no longer in existence. Use this reference, if needed: White, H. (2011) ‘Five challenges facing impact evaluation’, NONIE (http://nonie2011.org/?q=content/post-2).

“There has been enormous progress in impact evaluation of development interventions in the last five years. The 2006 CGD report When Will We Ever Learn? claimed that there was little rigorous evidence of what works in development. But there has been a huge surge in studies since then. By our count, there are over 800 completed and on-going impact evaluations of socio-economic development interventions in low- and middle-income countries.

But this increase in numbers is just the start of the process of ‘improving lives through impact evaluation’, which was the sub-title of the CGD report and has become 3ie’s vision statement. Here are five major challenges facing the impact evaluation community:

1. Identify and strengthen processes to ensure that evidence is used in policy: studies are not an end in themselves, but a means to the end of better policy, programs and projects, and so better lives. At 3ie we are starting to document cases in which impact evaluations have, and have not, influenced policy to better understand how to go about this. DFID now requires evidence to be provided to justify providing support to new programs, an example which could be followed by other agencies.

2. Institutionalize impact evaluation: the development community is very prone to faddism. Impact evaluation could go the way of other fads and fall into disfavour. We need to demonstrate the usefulness of impact evaluation to help prevent this happening, hence my first point. But we also need to take steps to institutionalize the use of evidence in governments and development agencies. This step includes ensuring that ‘results’ are measured by impact, not outcome monitoring.

3. Improve evaluation designs to answer policy-relevant questions: quality impact evaluations embed the counterfactual analysis of attribution in a broader analysis of the causal chain, allowing an understanding of why interventions work, or not, and yielding policy-relevant messages for better design and implementation. There have been steps in this direction, but researchers need better understanding of the approach and to genuinely embrace mixed methods in a meaningful way.

4. Make progress with small n impact evaluations: we all accept that we should be issues-led, not methods-led, and use the most appropriate method for the evaluation questions at hand. But the fact is that there is far more consensus for the evaluation of large n interventions, in which experimental and quasi-experimental approaches can be used, than there is about the approach to be used for small n interventions. If the call to base development spending on evidence of what works is to be heeded, then the development evaluation community needs to move to consensus on this point.

5. Expand knowledge and use of systematic reviews: single impact studies will also be subject to criticisms of weak external validity. Systematic reviews, which draw together evidence from all quality impact studies of a particular intervention in a rigorous manner, give stronger, more reliable, messages. There has been an escalation in the production of systematic reviews in development in the last year. The challenge is to ensure that these studies are policy relevant and used by policy makers.”
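To make the final challenge concrete, here is a minimal sketch (mine, not Howard White’s) of inverse-variance pooling, the fixed-effect calculation at the statistical core of many systematic reviews; the effect sizes and standard errors below are invented for illustration.

```python
# Minimal fixed-effect meta-analysis sketch (not from the post): pooling
# several impact estimates by inverse-variance weighting yields a combined
# estimate more precise than any single study. All numbers are invented.
import math

# (estimated effect, standard error) from hypothetical single impact studies
studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10), (0.18, 0.12)]

weights = [1.0 / se ** 2 for _, se in studies]      # inverse-variance weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))           # SE of the pooled estimate

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")

# The pooled standard error (about 0.065 here) is well below that of any
# single study (0.10-0.20), which is the statistical sense in which a
# systematic review gives a 'stronger, more reliable' message than one
# impact evaluation alone.
```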

Learners, practitioners and teachers: Handbook on monitoring, evaluating and managing knowledge for policy influence

Authors: Vanesa Weyrauch, Julia D’Agostino, Clara Richards
Date Published: 11 February 2011, by CIPPEC. Available as pdf

Description: Evidence-based policy influence is a topic of growing interest to researchers, social organizations, experts, government officials, policy research institutes and universities. However, they all admit that the path from the production of a piece or body of research to a public policy is sinuous, fuzzy and forked. In this context, it is not surprising that the practice of monitoring and evaluation (M&E) of policy influence in Latin America is limited, and that knowledge management (KM) of the experiences of advocacy organizations in the region is also under-developed. Incorporating monitoring, evaluation and knowledge management into the daily practices of policy research institutes is well worth it. On the one hand, the use of these tools can be a smart strategy to enhance the impact of their research on public policy. On the other hand, they can help them strengthen their reputation and visibility, attracting more and better support from donors. In turn, the design of an M&E system and the beginnings of a KM culture, if approached with a genuine interest in learning, can become a valuable source of knowledge and motivation for members of the organization. In short, these practices can improve the targeting of activities, inform decisions about where and how to invest resources, and support more realistic and accurate strategic plans. With the publication of this handbook, CIPPEC aims to support organizations so that they can monitor and evaluate their interventions and develop systematic strategies for knowledge management. It includes accounts of previous experiences in these fields in Latin America, reflections on the most common challenges and opportunities, and concrete working tools. These contributions aim to pave the way for the influence of public policy research in the region.

A guide to monitoring and evaluating policy influence

ODI Background Notes, February 2011. 12 pages
Author: Harry Jones
“This paper provides an overview of approaches to monitoring and evaluating policy influence and is intended as a guide, outlining challenges and approaches, with suggested further reading.”

“Summary: Influencing policy is a central part of much international development work. Donor agencies, for example, must engage in policy dialogue if they channel funds through budget support, to try to ensure that their money is well-spent. Civil society organisations are moving from service delivery to advocacy in order to secure more sustainable, widespread change. And there is an increasing recognition that researchers need to engage with policy-makers if their work is to have wider public value.

Monitoring and evaluation (M&E), a central tool to manage interventions, improve practice and ensure accountability, is highly challenging in these contexts. Policy change is a highly complex process shaped by a multitude of interacting forces and actors. ‘Outright success’, in terms of achieving specific, hoped-for changes, is rare, and the work that does influence policy is often unique and rarely repeated or replicated, with many incentives working against the sharing of ‘good practice’.

This paper provides an overview of approaches to monitoring and evaluating policy influence, based on an exploratory review of the literature and selected interviews with expert informants, as well as ongoing discussions and advisory projects for policy-makers and practitioners who also face the challenges of monitoring and evaluation. There are a number of lessons that can be learned, and tools that can be used, that provide workable solutions to these challenges. While there is a vast breadth of activities that aim to influence policy, and a great deal of variety in theory and practice according to each different area or type of organisation, there are also some clear similarities and common lessons.

Rather than providing a systematic review of practice, this paper is intended as a guide to the topic, outlining different challenges and approaches, with some suggestions for further reading.”
