DFID & UKES Workshop on Development and Evaluation: Practical Ways Forward

 

Date:  WEDNESDAY 12 OCTOBER 2011
Venue: BIS Conference Centre, Victoria, London

Objectives:

  • To examine the key contributions of evaluation to international development
  • To provide an update on the accountability framework for evaluation in the UK
  • To explore the role of professional development in building evaluation capacity

THIS ONE-DAY EVENT will raise important issues in the world of development and evaluation. The workshop will offer the chance to hear from senior practitioners and will cover both the theory and the reality as experienced in many contexts. It will also provide an update on the accountability framework, with particular reference to HM Treasury's guidance on evaluation (the Magenta Book).

A major challenge for organisations is developing their own staff as evaluation professionals. UKES will offer international insights as well as an update on its own guidance, and DFID will report on how it is building its own community of evaluators. These will be presented alongside perspectives from the NGO and voluntary sector. The day is relevant to all individuals and organisations with an interest in, and experience of, development and evaluation, including donors, consultants, public and private sector representatives, academics and a wide range of other professionals.

Programme
The workshop will commence at 09.00 and close at 17.30.
Highlights will include:

  • Updates on the Independent Commission for Aid Impact (ICAI), HM Treasury’s Magenta Book and the Cross Government Evaluation Group (CGEG)
  • How to evaluate in fragile states, conflict environments and other challenging situations
  • Case studies of evaluation at different levels: national, local and sector-specific
  • How to build professional capacity: the use of accreditation, and adapting it to a range of organisations at government and civil society level

Registration
The workshop will be held at the BIS Conference Centre, 1 Victoria Street, London SW1H 0ET.
The registration fees are as follows:
UKES members  £75.00 + VAT
Non-members  £100.00 + VAT
Registration and the full programme for the workshop are available from the website  www.profbriefings.co.uk/depwf
For any further information, contact the workshop administrators:
Professional Briefings
37 Star Street
Ware
Hertfordshire SG12 7AA
Telephone: 01920 487672
Email: london@profbriefings.co.uk

Can we obtain the required rigour without randomisation? Oxfam GB’s non-experimental Global Performance Framework

Karl Hughes, Claire Hutchings, August 2011. 3ie Working Paper 13. Available as pdf.

[found courtesy of @3ieNews]

Abstract

“Non-governmental organisations (NGOs) operating in the international development sector need credible, reliable feedback on whether their interventions are making a meaningful difference but they struggle with how they can practically access it. Impact evaluation is research and, like all credible research, it takes time, resources, and expertise to do well, and – despite being under increasing pressure – most NGOs are not set up to rigorously evaluate the bulk of their work. Moreover, many in the sector continue to believe that capturing and tracking data on impact/outcome indicators from only the intervention group is sufficient to understand and demonstrate impact. A number of NGOs have even turned to global outcome indicator tracking as a way of responding to the effectiveness challenge. Unfortunately, this strategy is doomed from the start, given that there are typically a myriad of factors that affect outcome level change. Oxfam GB, however, is pursuing an alternative way of operationalising global indicators. Closing and sufficiently mature projects are being randomly selected each year among six indicator categories and then evaluated, including the extent each has promoted change in relation to a particular global outcome indicator. The approach taken differs depending on the nature of the project. Community-based interventions, for instance, are being evaluated by comparing data collected from both intervention and comparison populations, coupled with the application of statistical methods to control for observable differences between them. A qualitative causal inference method known as process tracing, on the other hand, is being used to assess the effectiveness of the organisation’s advocacy and popular mobilisation interventions. 
However, recognising that such an approach may not be feasible for all organisations, in addition to Oxfam GB’s desire to pursue complementary strategies, this paper also sets out several other realistic options available to NGOs to step up their game in understanding and demonstrating their impact. These include: 1) partnering with research institutions to rigorously evaluate “strategic” interventions; 2) pursuing more evidence informed programming; 3) using what evaluation resources they do have more effectively; and 4) making modest investments in additional impact evaluation capacity.”
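The quasi-experimental strategy the abstract describes for community-based interventions — comparing intervention and comparison populations while statistically controlling for observable differences between them — can be sketched in a few lines. This is a minimal illustration on synthetic data, not Oxfam GB's actual analysis; the covariate name and effect sizes are invented, and regression adjustment stands in for whatever statistical controls an evaluation team would actually use.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic data: one observable covariate (say, baseline household assets)
# influences both programme participation and the outcome.
assets = rng.normal(0, 1, n)
treated = (rng.random(n) < 1 / (1 + np.exp(-assets))).astype(float)
outcome = 2.0 * treated + 1.5 * assets + rng.normal(0, 1, n)  # true effect = 2.0

# A naive comparison of group means is biased upward, because treated
# households started with more assets than the comparison group.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Regression adjustment: include the observable covariate in an OLS model,
# so the treatment coefficient controls for the baseline difference.
X = np.column_stack([np.ones(n), treated, assets])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = beta[1]

print(f"naive difference:  {naive:.2f}")    # inflated by selection
print(f"adjusted estimate: {adjusted:.2f}") # close to the true effect of 2.0
```

The key limitation, which the paper's title acknowledges, is that this only controls for *observable* differences; unobserved selection is exactly what randomisation would have removed.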

Evaluation and Assessment of Poverty and Conflict Interventions (EAPC)

[from the MercyCorps website]

“A significant body of knowledge exists on the relationship between poverty and conflict. Research has shown that low per capita income and slow economic growth drastically increase the chances that a country will experience violence. Driven in part by these findings, donors and their partners are implementing increasing numbers of economic development programs in conflict and post-conflict environments, based on the assumption that these will contribute to both poverty reduction and conflict management.”

“To test this assumption, Mercy Corps implemented the USAID-funded Evaluation and Assessment of Poverty and Conflict Interventions (EAPC) research project. Over the 18 month life of the project, Mercy Corps worked with its field teams in Ethiopia, Indonesia, and Uganda to 1) develop indicators and data collection tools, 2) field test these indicators and tools, and 3) begin to assess several theories of change that inform Mercy Corps’ programs.”

“Findings from the research project are shared in three key documents:
Conflict & Economics: Lessons Learned on Measuring Impact, a summary of learning about M&E in conflict-affected environments, including indicator menus and data collection tools.
A case study highlighting findings from Uganda.
A case study highlighting findings from Indonesia.

Please contact Jenny Vaughan at jvaughan@bos.mercycorps.org for further information.”

Systematic review: What is the evidence of the impact of microfinance on the well-being of poor people?

 

by Maren Duvendack, Richard Palmer-Jones, James G Copestake, Lee Hooper, Yoon Loke, Nitya Rao, August 2011. Available as pdf

[found via @poverty_action]

Executive summary
Background

“The concept of microcredit was first introduced in Bangladesh by Nobel Peace Prize winner Muhammad Yunus. Professor Yunus started Grameen Bank (GB) more than 30 years ago with the aim of reducing poverty by providing small loans to the country’s rural poor (Yunus 1999). Microcredit has evolved over the years and does not only provide credit to the poor, but also now spans a myriad of other services including savings, insurance, remittances and non-financial services such as financial literacy  training and skills development programmes; microcredit is now referred to as microfinance (Armendáriz de Aghion and Morduch 2005, 2010). A key feature of microfinance has been the targeting of women on the grounds that, compared to men, they perform better as clients of microfinance institutions and that their participation has more desirable development outcomes (Pitt and Khandker 1998).”

“Despite the apparent success and popularity of microfinance, no clear evidence yet exists that microfinance programmes have positive impacts (Armendáriz de Aghion and Morduch 2005, 2010; and many others). There have been four major reviews examining impacts of microfinance (Sebstad and Chen, 1996; Gaile and Foster 1996, Goldberg 2005, Odell 2010, see also Orso 2011). These reviews concluded that, while anecdotes and other inspiring stories (such as Todd 1996) purported to show that microfinance can make a real difference in the lives of those served, rigorous quantitative evidence on the nature, magnitude and balance of microfinance impact is still scarce and inconclusive (Armendáriz de Aghion and Morduch 2005, 2010). Overall, it is widely acknowledged that no well-known study robustly shows any strong impacts of microfinance (Armendáriz de Aghion and Morduch 2005, p199-230).”

“Because of the growth of the microfinance industry and the attention the sector has received from policy makers, donors and private investors in recent years, existing microfinance impact evaluations need to be re-investigated; the robustness of claims that microfinance successfully alleviates poverty and empowers women must be scrutinised more carefully. Hence, this review revisits the evidence of microfinance evaluations focusing on the technical challenges of conducting rigorous microfinance impact evaluations.”

See also the blog commentary on this paper, “Disproving and Confusing” by Jonathan Morduch, August 17, 2011.

RD comment: After a quick scan I am not sure which has been damned the most by this paper’s findings: the microfinance industry or the evaluation business. :-(

24 Aug 2011: I just noticed this relevant quote from Chris Blattman in his 2008 presentation to DFID:

“Fast forward, if you will, to 2015, when there will be dozens upon dozens of education and health impact evaluations giving us average ROI figures for interventions from textbooks to scholarships to vocational training. It is extremely possible that in some contexts we will see a particular intervention yield 100 percent improvements, in some 30 or 40 or 50 percent improvements, and in some much less than that.

If so, we may find ourselves in an uncomfortable position: the average ROI of textbook provision may be statistically indistinguishable from the ROI of another program, like school meals, simply because of the variability of the impacts.

I fear the current ream of impact evaluations will yield one overwhelming result: how and where we implement is more important than what we implement. Performance is essentially conditional on context and processes.”
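Blattman's point about indistinguishable average returns can be illustrated with a small simulation (synthetic numbers chosen purely for illustration, not drawn from any actual evaluations): when site-level impacts vary enormously with context, the 95% confidence intervals around two interventions' average ROIs tend to overlap even when their true means differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 20 impact evaluations each of two interventions, where
# the site-level return varies widely with context (Blattman's point).
textbooks = rng.normal(40, 35, 20)   # assumed mean ROI 40%, large spread
meals     = rng.normal(55, 35, 20)   # assumed mean ROI 55%, same spread

def ci95(x):
    """Approximate 95% confidence interval for the mean."""
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    return m - 1.96 * se, m + 1.96 * se

lo_t, hi_t = ci95(textbooks)
lo_m, hi_m = ci95(meals)
print(f"textbooks 95% CI: ({lo_t:.0f}%, {hi_t:.0f}%)")
print(f"meals     95% CI: ({lo_m:.0f}%, {hi_m:.0f}%)")
# With this much between-site variability the two intervals overlap,
# so the average returns are statistically indistinguishable.
```

The uncomfortable implication is exactly the one in the quote: averaging over heterogeneous contexts tells us little about what to do in any one of them.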

See online comments by others here:

The evaluation of peacebuilding: Four new reports

[found courtesy of Find What Works and @poverty_action]

Dave Algoso comments: “The first two reports below resulted from a series of meetings held by the United States Institute of Peace and the Alliance for Peacebuilding; meeting participants came from a range of NGOs, government funders, private foundations, and other agencies. The second two reports deal more with methodological issues”


Comic: The problem with averaging star ratings

Courtesy of http://xkcd.com/937/

RD Comment: This problem is not unique to websites and smartphones. More than one donor agency uses “traffic lights” (red, amber, green ratings) to summarise complex performance information into a simple-enough-to-digest form for their hard-pressed senior managers.
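The comic's complaint — that a raw average hides how many ratings it rests on — is easy to demonstrate. The sketch below (illustrative only; the product data and the 3.5-star prior are invented) shows one common remedy: a Bayesian-style average that shrinks sparse ratings towards a prior, so a single 5-star review no longer outranks hundreds of strong ones.

```python
# Two hypothetical products: one has a single 5-star review, the other
# 200 reviews averaging 4.65 stars. The raw mean ranks the single-review
# product higher, which is the comic's complaint.
few  = [5]
many = [5] * 140 + [4] * 50 + [3] * 10   # 200 reviews, mean 4.65

def raw_mean(ratings):
    return sum(ratings) / len(ratings)

def bayesian_mean(ratings, prior_mean=3.5, prior_weight=10):
    # Shrink towards a prior of 3.5 stars, weighted as 10 pseudo-reviews:
    # products with few ratings move little from the prior.
    return (sum(ratings) + prior_mean * prior_weight) / (len(ratings) + prior_weight)

print(raw_mean(few), raw_mean(many))            # raw mean: 5.0 vs 4.65
print(bayesian_mean(few), bayesian_mean(many))  # ordering flips
```

The same logic applies to traffic-light summaries: a rating scheme that does not carry the weight of evidence behind each score invites exactly this kind of misreading.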

Assessing the impact of blogs: Some evidence and analysis

See

The Impact of Economic Blogs – Part I: Dissemination by David McKenzie, Berk Özler, 2011-08-05.

  • Question 1: “Do blogs lead to increased dissemination of research papers?”
  • Answer:  “Blogging about a paper causes a large increase in the number of abstract views and downloads in the same month. These increases are massive compared to the typical abstract views and downloads these papers get. However, only a minority of readers click through the blog to the download.” [view paper by McKenzie for more details]

The Impact of Blogs Part II: Blogging enhances the blogger’s reputation. But, does it influence policy? by David McKenzie, Berk Özler, 2011-08-10

  • Question 2: Does blogging improve reputation?
  • Answer: “Regular blogging is strongly and significantly associated with being more likely to be viewed as a favorite economist.”
  • Question 3: Does blogging influence policy?
  • Answer 1: This is where we haven’t been able to find much evidence to date [see blog for details of some case examples]
  • Answer 2: In response to a case example provided by a reader: “my sense is that:
    i) very few posts actually influence policy;
    ii) there are very few readers of blogs who are actually in a position to influence policy; but
    iii) it only takes one post read by the right reader to potentially make a big difference. This poses enormous problems for statistical inference, since these are likely rare events, but I think it is still useful to see whether there are in fact any plausible candidates.”

The Impact of Blogs Part III: Results from a new survey and an experiment! by David McKenzie, Berk Özler, COMING ON 2011-08-15

  • Including these headings: Survey evidence – why don’t you just ask blog readers?; The Experiment; Impacts on institutional reputation; Impacts on knowledge and attitudes.
  • The Summary: “Using a variety of data sources and empirical techniques, we feel we have provided quantitative evidence that economic blogs are doing more than just providing a new source of procrastination for writers and readers. To our knowledge, these findings are the first quantitative evidence to show that blogs are having some impacts. There are large impacts on dissemination of research; significant benefits in terms of the bloggers becoming better known and more respected within the profession; positive spillover effects for the bloggers’ institutions; and some evidence from our experiment that they may influence attitudes and knowledge among their readers. Blogs potentially have many impacts, and we are only measuring some of them, but the evidence we have suggests economics blogs are playing an important role in the profession.”

RD Comment: Two comments of note towards the end of the paper:

  • “…Table 6 shows that blog readership has not changed many of these attitudes towards methodology, with no significant experimental changes in the full sample. Amongst the subsamples, the most significant change occurs in the male sample, where there is an increase in the proportion that believe that it is difficult to succeed as a development economist on the job market without having a randomized experiment.”
  • “There is also some evidence among the research-focused subsample that more agree with the statement that external validity is no more of a concern in experiments than in most non-experimental studies (something discussed in David’s favorite rant).”
  • RD comment: This may be true, but experimental studies are often held up as being of more value than non-experimental studies. So the lack of difference is a problem, not a non-issue.

 

Measuring Impact: Lessons from the MCC for the Broader Impact Evaluation Community

William Savedoff and Christina Droggitis, Centre for Global Development, Aug 2011. Available as pdf (2 pages)

Excerpt:

“One organization that has taken the need for impact evaluation seriously is the Millennium Challenge Corporation. The first of the MCC programs came to a close this fiscal year, and in the next year the impact evaluations associated with them will begin to be published.

Politicians’ responses to the new wave of evaluations will set a precedent, either one that values transparency and encourages aid agencies to be public about what they are learning or one that punishes transparency and encourages agencies to hide findings or simply cease commissioning evaluations.”

The Canadian M&E System: Lessons Learned from 30 Years of Development

by Robert Lahey, November 2010, ECD Working Paper Series, Independent Evaluation Group, World Bank. Available as pdf

Foreword

As part of its activities, the World Bank Group’s Independent Evaluation Group (IEG) provides technical assistance to member developing countries for designing and implementing effective monitoring and evaluation (M&E) systems and for strengthening government evaluation capacities as an important part of sound governance. IEG prepares resource materials, with case studies demonstrating good or promising practices, which other countries can refer to or adapt to suit their own particular circumstances (http://www.worldbank.org/ieg/ecd).

World Bank support to strengthen M&E systems in different countries has grown substantially in the past decade. There is intense activity on M&E issues in most regions, and IEG has provided support to governments and World Bank units, particularly since 1997, on ways to further strengthen M&E systems, with the objective of fully institutionalizing countries’ efforts.

While several World Bank assessments have been done on the strengths and weaknesses of developing countries’ M&E systems, fewer analyses have looked at OECD country experiences with a view to help identify and document approaches, methods, and “good practices,” and to promote knowledge sharing of those cases as key references for developing country systems in the process of design and implementation.

This Evaluation Capacity Development (ECD) paper seeks to provide an overview of the Canadian model for monitoring and evaluation developed over the past three decades. The Canadian M&E system is one that has invested heavily in both evaluation and performance monitoring as key tools to support accountability and results-based management in government.

The paper tracks the evolution of Canada’s M&E system to its current state, identifying key lessons learned from public sector experience. It offers insights from officials’ own perspectives, highlights key initiatives introduced to help drive the M&E system, and discusses the demands for public sector reforms and the emphasis they have placed on M&E in public sector management.

It is hoped that the lessons and practices identified here will benefit officials undertaking similar tasks in other countries.

This paper was peer reviewed by Anne Routhier, head of the Center of Excellence for Evaluation (CEE) at the Treasury Board Secretary of Canada; Keith Mackay and Manuel Fernando Castro, M&E experts and former World Bank Senior Evaluation Officers; and Nidhi Khattri and Ximena Fernandez Ordonez, IEG. The paper was edited for publication by Helen Chin, IEG. Their comments and feedback are gratefully acknowledged. The views expressed in this document are solely those of the author, and do not necessarily represent the views of the World Bank or of the government of Canada.

Australasian Evaluation Society 2011 International Conference: Evaluation and Influence

 

Date: 29 August – 2 September (workshops on 29-30th)
Venue: Hilton, Sydney, NSW, Australia

View the Conference website here

View the detailed outline of the program, and click here to view the detailed Pre-Conference Workshop Program. Please note that the Conference Program is subject to change.

Read more about the Keynote Speakers, their presentations and Conference main streams and ‘hot topics’.

Evaluation and influence

Evaluation claims to influence public policy, professional practice and the management of organisations. What is the nature and extent of this influence? How can evaluations be made more influential?  And conversely, in a rapidly changing world, what are the main influences on evaluation? To what extent is evaluation responding by taking on new approaches and technologies?

With the focus on influence, the conference builds upon the two previous conferences with their themes of evidence (Canberra 2009) and reflections on evaluation (Wellington 2010).

The conference will focus on three sub-themes:

The influence of evaluation on society

How much and in what ways does evaluation impact upon policy, practice and organisations? Where and in what circumstances does it have the most impact, and why? What are other important sources of influence, and how do they compare with evaluation?

Making an evaluation more influential

How can an evaluation be designed and conducted to increase its use and influence? What are the most persuasive ways of communicating the results of an evaluation? What role can evaluators play in implementing evaluation results? What are the lessons for evaluation from theories of influence and diffusion?

Influences shaping evaluation

How is evaluation changing in response to emerging social, economic and political issues, to increasing complexity and uncertainty, and to new approaches and technologies? What are the important influences on evaluation, and how are they shaping evaluation?

The conference can explore its theme in streams around fields such as education and research, health, human services, justice, international development, Indigenous peoples, natural resource management and the economy. We also expect a stream on design and methodology.
