Developing a Monitoring and Evaluation Framework: A list

[Apologies: this page is still at the draft stage; there are some formatting and other problems]

A suggested definition of an M&E Framework:

A document that tells you who is expected to know what, as well as when and how they are expected to know.

The list (under development, suggestions welcomed):

Evaluating the Complex: Attribution, Contribution and Beyond.

Kim Forss, Mita Marra and Robert Schwartz, editors. Transaction Publishers, New Brunswick. May 2011. Available via Amazon

“Problem-solving by policy initiative has come to stay. Overarching policy initiatives are now standard modus operandi for governmental and non-governmental organisations. But complex policy initiatives are not reserved only for the big challenges of our times; they are also used for matters such as school achievement, regional development, urban planning, and public health and safety. As policy and the ensuing implementation tend to be more complex than simple project and programme management, the task of evaluation has also become more complex.”

“The book begins with a theoretical and conceptual explanation of complexity and how that affects evaluation. The authors make the distinction between, on the one hand, the common-sense understanding of complexity as something that is generally messy, involves many actors and has unclear boundaries and overlapping roles; and, on the other hand, complexity as a specific term from the systems sciences, which implies non-linear relationships between phenomena. It is particularly in the latter sense that an understanding of complexity has a bearing on evaluation design, in respect of how evaluators approach the question of impact.”

“The book presents nine case studies that cover a wide variety of policy initiatives: public health (smoking prevention), homelessness, child labour, regional development, international development cooperation, and the HIV/AIDS pandemic. The use of case studies sheds light on the conceptual ideas at work in organisations addressing some of the world’s largest and most varied problems.”

“The evaluation processes described here commonly seek a balance between order and chaos. The interaction of four elements – simplicity, inventiveness, flexibility, and specificity – allows complex patterns to emerge. The case studies illustrate this framework and provide a number of examples of the practical management of complexity, in light of contingency theories of the evaluation process itself. These theories in turn match the complexity of the evaluated policies, strategies and programmes. The case studies do not pretend to illustrate perfect evaluation processes; the focus is on learning and on seeking patterns that have proved satisfactory and where the evaluation findings have been robust and trustworthy.”

“The contingency theory approach of the book underscores a point also made in the Foreword by Professor Elliot Stern: “In a world characterised by interdependence, emergent properties, unpredictable change, and indeterminate outcomes, how could evaluation be immune?” The answer lies in the choice of methods as much as in the overall strategy and approach of evaluation.”

Can we obtain the required rigour without randomisation? Oxfam GB’s non-experimental Global Performance Framework

Karl Hughes, Claire Hutchings, August 2011. 3ie Working Paper 13. Available as pdf.

[found courtesy of @3ieNews]

Abstract

“Non-governmental organisations (NGOs) operating in the international development sector need credible, reliable feedback on whether their interventions are making a meaningful difference but they struggle with how they can practically access it. Impact evaluation is research and, like all credible research, it takes time, resources, and expertise to do well, and – despite being under increasing pressure – most NGOs are not set up to rigorously evaluate the bulk of their work. Moreover, many in the sector continue to believe that capturing and tracking data on impact/outcome indicators from only the intervention group is sufficient to understand and demonstrate impact. A number of NGOs have even turned to global outcome indicator tracking as a way of responding to the effectiveness challenge. Unfortunately, this strategy is doomed from the start, given that there are typically a myriad of factors that affect outcome level change. Oxfam GB, however, is pursuing an alternative way of operationalising global indicators. Closing and sufficiently mature projects are being randomly selected each year among six indicator categories and then evaluated, including the extent each has promoted change in relation to a particular global outcome indicator. The approach taken differs depending on the nature of the project. Community-based interventions, for instance, are being evaluated by comparing data collected from both intervention and comparison populations, coupled with the application of statistical methods to control for observable differences between them. A qualitative causal inference method known as process tracing, on the other hand, is being used to assess the effectiveness of the organisation’s advocacy and popular mobilisation interventions. However, recognising that such an approach may not be feasible for all organisations, in addition to Oxfam GB’s desire to pursue complementary strategies, this paper also sets out several other realistic options available to NGOs to step up their game in understanding and demonstrating their impact. These include: 1) partnering with research institutions to rigorously evaluate “strategic” interventions; 2) pursuing more evidence informed programming; 3) using what evaluation resources they do have more effectively; and 4) making modest investments in additional impact evaluation capacity.”
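The core design idea described above — comparing intervention and comparison populations while adjusting for observable differences between them — can be sketched in a few lines. The following is a minimal, hypothetical illustration using simple regression adjustment; it is not Oxfam’s actual procedure, and all variables and numbers are invented:

```python
# Minimal sketch of regression adjustment on invented data: compare an
# intervention group with a comparison group while controlling for
# observable differences between them. Not Oxfam's actual method.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical observables: baseline wealth index and household size
wealth = rng.normal(0.0, 1.0, n)
hh_size = rng.integers(1, 9, n).astype(float)

# Intervention status correlated with wealth (selection on observables)
treated = (rng.normal(0.0, 1.0, n) + 0.5 * wealth > 0).astype(float)

# Hypothetical outcome with a true intervention effect of 0.3
outcome = 0.3 * treated + 0.4 * wealth + 0.05 * hh_size + rng.normal(0.0, 1.0, n)

# Naive comparison: difference in mean outcomes, ignoring observables
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Regression adjustment: regress outcome on treatment plus covariates
X = np.column_stack([np.ones(n), treated, wealth, hh_size])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference in means: {naive:.2f}")      # biased upward
print(f"regression-adjusted effect: {beta[1]:.2f}")   # closer to 0.3
```

In this toy setup the naive difference in means is biased because better-off households were more likely to be in the intervention group; adding the observed covariates to the regression recovers something closer to the true effect. Non-experimental designs of this kind still face the harder problem of unobservable differences, which covariate adjustment alone cannot fix.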

Evaluation and Assessment of Poverty and Conflict Interventions (EAPC)

[from the MercyCorps website]

“A significant body of knowledge exists on the relationship between poverty and conflict. Research has shown that low per capita income and slow economic growth drastically increase the chances that a country will experience violence. Driven in part by these findings, donors and their partners are implementing increasing numbers of economic development programs in conflict and post-conflict environments, based on the assumption that these will contribute to both poverty reduction and conflict management.”

“To test this assumption, Mercy Corps implemented the USAID-funded Evaluation and Assessment of Poverty and Conflict Interventions (EAPC) research project. Over the 18-month life of the project, Mercy Corps worked with its field teams in Ethiopia, Indonesia, and Uganda to 1) develop indicators and data collection tools, 2) field-test these indicators and tools, and 3) begin to assess several theories of change that inform Mercy Corps’ programs.”

“Findings from the research project are shared in three key documents:

  • Conflict & Economics: Lessons Learned on Measuring Impact, a summary of learning about M&E in conflict-affected environments, including indicator menus and data collection tools.
  • A case study highlighting findings from Uganda.
  • A case study highlighting findings from Indonesia.

Please contact Jenny Vaughan at jvaughan@bos.mercycorps.org for further information.”

Systematic review: What is the evidence of the impact of microfinance on the well-being of poor people?

by Maren Duvendack, Richard Palmer-Jones, James G. Copestake, Lee Hooper, Yoon Loke, Nitya Rao, August 2011. Available as pdf

[found via @poverty_action]

Executive summary
Background

“The concept of microcredit was first introduced in Bangladesh by Nobel Peace Prize winner Muhammad Yunus. Professor Yunus started Grameen Bank (GB) more than 30 years ago with the aim of reducing poverty by providing small loans to the country’s rural poor (Yunus 1999). Microcredit has evolved over the years and does not only provide credit to the poor, but now also spans a myriad of other services, including savings, insurance, remittances and non-financial services such as financial literacy training and skills development programmes; microcredit is now referred to as microfinance (Armendáriz de Aghion and Morduch 2005, 2010). A key feature of microfinance has been the targeting of women on the grounds that, compared to men, they perform better as clients of microfinance institutions and that their participation has more desirable development outcomes (Pitt and Khandker 1998).”

“Despite the apparent success and popularity of microfinance, no clear evidence yet exists that microfinance programmes have positive impacts (Armendáriz de Aghion and Morduch 2005, 2010; and many others). There have been four major reviews examining impacts of microfinance (Sebstad and Chen 1996; Gaile and Foster 1996; Goldberg 2005; Odell 2010; see also Orso 2011). These reviews concluded that, while anecdotes and other inspiring stories (such as Todd 1996) purported to show that microfinance can make a real difference in the lives of those served, rigorous quantitative evidence on the nature, magnitude and balance of microfinance impact is still scarce and inconclusive (Armendáriz de Aghion and Morduch 2005, 2010). Overall, it is widely acknowledged that no well-known study robustly shows any strong impacts of microfinance (Armendáriz de Aghion and Morduch 2005, pp. 199–230).”

“Because of the growth of the microfinance industry and the attention the sector has received from policy makers, donors and private investors in recent years, existing microfinance impact evaluations need to be re-investigated; the robustness of claims that microfinance successfully alleviates poverty and empowers women must be scrutinised more carefully. Hence, this review revisits the evidence of microfinance evaluations focusing on the technical challenges of conducting rigorous microfinance impact evaluations.”

See also the blog commentary on this paper, “Disproving and Confusing” by Jonathan Morduch, August 17, 2011.

RD comment: After a quick scan I am not sure which has been damned the most by this paper’s findings: the micro-finance industry or the evaluation business. :-(

24 Aug 2011: I just noticed this relevant quote from Chris Blattman in his 2008 presentation to DFID

“Fast forward, if you will, to 2015, when there will be dozens upon dozens of education and health impact evaluations giving us average ROI figures for interventions from textbooks to scholarships to vocational training. It is extremely possible that in some contexts we will see a particular intervention yield 100 percent improvements, in some 30 or 40 or 50 percent improvements, and in some much less than that.

If so, we may find ourselves in an uncomfortable position: the average ROI of textbook provision may be statistically indistinguishable from the ROI of another program, like school meals, simply because of the variability of the impacts.

I fear the current ream of impact evaluations will yield one overwhelming result: how and where we implement is more important than what we implement. Performance is essentially conditional on context and processes.”
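Blattman’s worry about variability is easy to demonstrate with a toy calculation (all numbers below are invented): even when one intervention’s average ROI is 20 percentage points higher than another’s, context-driven noise of a plausible size leaves the difference statistically indistinguishable given a realistic number of evaluations.

```python
# Toy illustration of Blattman's point, using invented numbers: high
# variability across contexts can make average ROIs indistinguishable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical ROI estimates (%) from 15 evaluations of each intervention,
# with large context-to-context variation (s.d. of 50 points)
textbooks = rng.normal(loc=60, scale=50, size=15)
school_meals = rng.normal(loc=40, scale=50, size=15)

t_stat, p_value = stats.ttest_ind(textbooks, school_meals)
print(f"mean ROI: textbooks {textbooks.mean():.0f}%, meals {school_meals.mean():.0f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # typically p > 0.05
```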

See online comments by others here:

The evaluation of peacebuilding: Four new reports

[found  courtesy of Find What Works and @poverty_action]

Dave Algoso comments: “The first two reports below resulted from a series of meetings held by the United States Institute of Peace and the Alliance for Peacebuilding; meeting participants came from a range of NGOs, government funders, private foundations, and other agencies. The second two reports deal more with methodological issues.”

PS: See also

Related:

Comic: The problem with averaging star ratings

Courtesy of http://xkcd.com/937/

RD Comment: This problem is not unique to websites and smart phones. More than one donor agency uses “traffic lights” (red, amber, green ratings) to summarise complex performance information into a simple-enough-to-digest form for their hard-pressed senior managers.
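The underlying arithmetic is simple enough to show in a few lines (invented ratings): two projects with identical average scores can describe completely different realities, which is exactly the information a single star average or traffic light throws away.

```python
# Invented ratings: same average, very different distributions.
from statistics import mean

steady_project = [3, 3, 3, 3, 3, 3]      # consistently middling
volatile_project = [5, 5, 5, 1, 1, 1]    # half triumphs, half failures

print(mean(steady_project), mean(volatile_project))  # both equal 3
# A single amber light (or three-star average) reports these two
# projects identically, hiding the 50% failure rate of the second.
```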

Assessing the impact of blogs: Some evidence and analysis

See

The Impact of Economic Blogs – Part I: Dissemination by David McKenzie, Berk Özler, 2011-08-05.

  • Question 1: “Do blogs lead to increased dissemination of research papers?”
  • Answer:  “Blogging about a paper causes a large increase in the number of abstract views and downloads in the same month. These increases are massive compared to the typical abstract views and downloads these papers get. However, only a minority of readers click through the blog to the download.” [view paper by McKenzie for more details]

The Impact of Blogs Part II: Blogging enhances the blogger’s reputation. But, does it influence policy? by David McKenzie, Berk Özler, 2011-08-10

  • Question 2: Does blogging improve reputation?
  • Answer: “Regular blogging is strongly and significantly associated with being more likely to be viewed as a favorite economist.”
  • Question 3: Does blogging influence policy?
  • Answer 1: This is where we haven’t been able to find much evidence to date [see blog for details of some case examples]
  • Answer 2: In response to a case example provided by a reader: “my sense is that:
    i) very few posts actually influence policy;
    ii) there are very few readers of blogs who are actually in a position to influence policy; but
    iii) it only takes one post read by the right reader to potentially make a big difference. This poses enormous problems for statistical inference, since these are likely rare events, but I think it is still useful to see whether there are in fact any plausible candidates.”
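The rare-events problem flagged in that answer can be quantified with a back-of-envelope calculation (the rate below is hypothetical): if only one post in a thousand actually influences policy, the sampling error on any estimate of that rate is as large as the rate itself at realistic sample sizes.

```python
# Back-of-envelope sketch (hypothetical rate): why rare policy-influence
# events are so hard to measure statistically.
from math import sqrt

p = 0.001  # suppose 1 post in 1,000 actually influences policy
for n in (100, 1_000, 10_000):
    se = sqrt(p * (1 - p) / n)  # standard error of a sample proportion
    print(f"n={n:>6} posts: expected events={p * n:5.1f}, s.e. of rate={se:.4f}")
# At n=1,000 the standard error (~0.0010) equals the assumed rate, so the
# estimated rate cannot be distinguished from zero -- the inference
# problem the authors describe.
```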

The Impact of Blogs Part III: Results from a new survey and an experiment! by David McKenzie, Berk Özler, COMING ON 2011-08-15

  • Including these headings: Survey evidence – why don’t you just ask blog readers?; The Experiment; Impacts on institutional reputation; Impacts on knowledge and attitudes.
  • The Summary: “Using a variety of data sources and empirical techniques, we feel we have provided quantitative evidence that economic blogs are doing more than just providing a new source of procrastination for writers and readers. To our knowledge, these findings are the first quantitative evidence to show that blogs are having some impacts. There are large impacts on dissemination of research; significant benefits in terms of the bloggers becoming better known and more respected within the profession; positive spillover effects for the bloggers’ institutions; and some evidence from our experiment that they may influence attitudes and knowledge among their readers. Blogs potentially have many impacts, and we are only measuring some of them, but the evidence we have suggests economics blogs are playing an important role in the profession.”

RD Comment: Two comments of note towards the end of the paper:

  • “…Table 6 shows that blog readership has not changed many of these attitudes towards methodology, with no significant experimental changes in the full sample. Amongst the subsamples, the most significant change occurs in the male sample, where there is an increase in the proportion that believe that it is difficult to succeed as a development economist on the job market without having a randomized experiment.”
  • “There is also some evidence among the research-focused subsample that more agree with the statement that external validity is no more of a concern in experiments than in most non-experimental studies (something discussed in David’s favorite rant).”
  • RD comment: This may be true, but experimental studies are often held up as being of more value than non-experimental studies. So the lack of difference is a problem, not a non-issue.

Measuring Impact: Lessons from the MCC for the Broader Impact Evaluation Community

William Savedoff and Christina Droggitis, Center for Global Development, Aug 2011. Available as pdf (2 pages)

Excerpt:

“One organization that has taken the need for impact evaluation seriously is the Millennium Challenge Corporation. The first of the MCC programs came to a close this fiscal year, and in the next year the impact evaluations associated with them will begin to be published.

Politicians’ responses to the new wave of evaluations will set a precedent, either one that values transparency and encourages aid agencies to be public about what they are learning or one that punishes transparency and encourages agencies to hide findings or simply cease commissioning evaluations.”

The Canadian M&E System: Lessons Learned from 30 Years of Development

by Robert Lahey, November 2010, ECD Working Paper Series, Independent Evaluation Group, World Bank. Available as pdf

Foreword

As part of its activities, the World Bank Group’s Independent Evaluation Group (IEG) provides technical assistance to member developing countries for designing and implementing effective monitoring and evaluation (M&E) systems and for strengthening government evaluation capacities as an important part of sound governance. IEG prepares resource materials, with case studies demonstrating good or promising practices, which other countries can refer to or adapt to suit their own particular circumstances (http://www.worldbank.org/ieg/ecd).

World Bank support to strengthen M&E systems in different countries has grown substantially in the past decade. There is intense activity on M&E issues in most regions, and IEG has provided support to governments and World Bank units, particularly since 1997, on ways to further strengthen M&E systems, with the objective of fully institutionalizing countries’ efforts.

While several World Bank assessments have been done on the strengths and weaknesses of developing countries’ M&E systems, fewer analyses have looked at OECD country experiences with a view to help identify and document approaches, methods, and “good practices,” and to promote knowledge sharing of those cases as key references for developing country systems in the process of design and implementation.

This Evaluation Capacity Development (ECD) paper seeks to provide an overview of the Canadian model for monitoring and evaluation developed over the past three decades. The Canadian M&E system is one that has invested heavily in both evaluation and performance monitoring as key tools to support accountability and results-based management in government.

The paper tracks the evolution of Canada’s M&E system to its current state, identifying key lessons learned from public sector experience. It offers insights from officials’ own perspectives, highlights key initiatives introduced to help drive the M&E system, and discusses the demands for public sector reforms and the emphasis they have placed on M&E in public sector management.

It is hoped that the lessons and practices identified here will benefit officials undertaking similar tasks in other countries.

This paper was peer reviewed by Anne Routhier, head of the Center of Excellence for Evaluation (CEE) at the Treasury Board Secretariat of Canada; Keith Mackay and Manuel Fernando Castro, M&E experts and former World Bank Senior Evaluation Officers; and Nidhi Khattri and Ximena Fernandez Ordonez, IEG. The paper was edited for publication by Helen Chin, IEG. Their comments and feedback are gratefully acknowledged. The views expressed in this document are solely those of the author, and do not necessarily represent the views of the World Bank or of the government of Canada.
