GSDRC Helpdesk Research Report: Monitoring and Evaluation of Participation in Governance

M&E of Participation in Governance: Please identify toolkits, methodologies and indicators for monitoring and evaluating the effectiveness of programmes aimed at improving governance (particularly of urban infrastructure/services). Please highlight methods of relevance to NGOs for monitoring and evaluating poor people’s participation in decision-making processes.

Helpdesk response
Key findings: There is generally very little information available on evaluating the effectiveness of the inclusive/participatory aspects of governance programmes. A particular difficulty is that there is a limited understanding of what improvements in governance actually look like. Nevertheless, some common principles identified in the literature include the need for both quantitative and qualitative indicators and the importance of focusing on purpose, processes, context and perception as well as outputs and outcomes.

Some common indicators for assessing the effectiveness of participatory programmes include:

  • the level of participation of different types of stakeholders
  • institutional arrangements to facilitate engagement
  • active engagement of stakeholders in the programme, and confidence and willingness to get involved in future
  • the extent to which participants are mobilising their own resources
  • transparent access to and use of resources
  • equality of access to decision-making
  • transformation of power through e.g. new relationships and access to new networks
  • level of trust and ownership of the process
  • behavioural changes of stakeholders (values, priorities, aims)
  • level of self-reliance, self-management, capacity and understanding of the issues
  • sustainability and ability to resolve conflict.
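
To make indicators like these operational, an NGO needs to decide for each one whether it is quantitative or qualitative, how it will be scored, and what evidence supports the score. The sketch below (Python, purely illustrative; the indicator names, scoring scale and example data are not taken from the helpdesk report) shows one minimal way of tracking such indicators across monitoring rounds:

```python
# Minimal, illustrative indicator tracking for a participatory governance programme.
# All names, scales and example data are invented for illustration.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Indicator:
    name: str
    kind: str                                          # "quantitative" or "qualitative"
    scores: list[float] = field(default_factory=list)  # e.g. 1 (weak) to 5 (strong)
    evidence: list[str] = field(default_factory=list)  # notes, quotes, observations

    def record(self, score: float, note: str = "") -> None:
        """Record one monitoring round: a score plus any supporting evidence."""
        self.scores.append(score)
        if note:
            self.evidence.append(note)

    def summary(self) -> str:
        avg = mean(self.scores) if self.scores else float("nan")
        return f"{self.name} [{self.kind}]: mean {avg:.1f} over {len(self.scores)} round(s)"


# A few indicators drawn from the list above.
framework = [
    Indicator("Level of participation by stakeholder type", "quantitative"),
    Indicator("Equality of access to decision-making", "qualitative"),
    Indicator("Trust in and ownership of the process", "qualitative"),
]

framework[0].record(3, "Women attended ward meetings but rarely spoke")
for indicator in framework:
    print(indicator.summary())
```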

Full response: http://www.gsdrc.org/docs/open/HD549.pdf

Produced by the Governance and Social Development Resource Centre

Online Training: Introduction to Social Audit

Date: Tuesday 28th October 2008, 2.00pm to 4.00pm
Venue: Online

This two-hour Social Audit introduction will include:

  • Why Social Audit
  • What type of organisation uses Social Audit
  • What are the benefits of Social Audit
  • The 4 stages of Social Audit
  • Social Audit Governance
  • Social Audit External View
  • Social Audit Internal View
  • Social Planning and Accounting
  • Presenting the Social Audit

This course is particularly suited to groups and individuals engaged in social enterprises, NGOs, development agencies and social economy organisations.

Freer Spreckley
Email: f.spreckley@locallivelihoods.com
Website: http://www.locallivelihoods.com
Organisation: Local Livelihoods

Michael Scriven teaches “An Introduction to Evaluation” online

Date: Oct. 11-12 and Nov. 1-2, 2008
Venue: Claremont Graduate University, California

Four Days of Evaluation Training with a Master in the Field: Michael Scriven teaches “An Introduction to Evaluation”

Dr. Michael Scriven, one of the founders of evaluation science as a modern discipline, will be offering two 2-day courses on the foundations, power, and practice of evaluation. The courses will be taught in a live virtual classroom environment and are open to the public. Because they are delivered online, they are available to anyone, anywhere in the world.

Save your seat today: the total cost is $350 for all four sessions.

Full course descriptions are available on our website. Don’t miss your chance to learn firsthand from this giant in the field!

Meeting Schedule: 11–12 October 2008 (Part I); 1–2 November 2008 (Part II); 10am to 1pm PST

Technology Requirements: Computer microphone and speakers required. (Webcam not required, but can be a plus.)

Register Online Today! http://www.cgu.edu/pages/5164.asp

India Swearingen
SBOS Outreach Assistant
School of Behavioral and Organizational Sciences
Claremont Graduate University
outreach@cgu.edu

Upcoming training opportunities with Jess Dart of Clear Horizon

Date: 13-17th October 2008
Venue: Melbourne, Australia

Build a People-Centred Monitoring, Evaluation, Reporting & Improvement (MERI) Framework – AU$800
Two days: 13th & 14th October

Course Description: Increasingly, project teams are involved in designing monitoring and evaluation frameworks. This people-centred approach has a strong program logic element and uses targeted stakeholders as the organising construct, first to clarify project design and then to develop an effective and meaningful monitoring and evaluation framework. The workshop follows a series of steps that provide a pathway for planning: clarifying program goals, articulating how we expect these to be achieved, and working out how best to collect evidence on whether we are on track. This includes consideration of evaluation purpose, key evaluation questions and appropriate methods for collecting information.
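
As a rough sketch of what the resulting framework might look like (an illustration only, not Clear Horizon's own template; the goal, outcomes, questions and methods below are invented), each expected outcome can be linked to its key evaluation questions and the methods used to gather evidence:

```python
# Illustrative structure for a people-centred MERI framework (invented example).
meri_framework = {
    "program_goal": "Poor households influence decisions about local water services",
    "outcomes": [
        {
            "outcome": "Residents participate in ward-level planning meetings",
            "key_evaluation_questions": [
                "Who is participating, and who is being left out?",
                "Are residents' priorities reflected in ward plans?",
            ],
            "methods": ["attendance records", "meeting observation", "focus groups"],
        },
        {
            "outcome": "Service providers respond to community feedback",
            "key_evaluation_questions": ["Has responsiveness to complaints improved?"],
            "methods": ["complaint-log analysis", "key informant interviews"],
        },
    ],
}

# Reporting can then walk the structure, outcome by outcome.
for item in meri_framework["outcomes"]:
    print(item["outcome"], "->", ", ".join(item["methods"]))
```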

Most Significant Change (MSC) Technique – AU$800
Two-day workshop: 15th & 16th October

Course Description: MSC is a powerful tool for monitoring, evaluation and organisational learning. MSC goes beyond merely capturing and documenting participants’ stories of impact to offering a means of engaging in effective dialogue. Each story represents the storyteller’s interpretation of impact, which is then reviewed and discussed. The process offers an opportunity for a diverse range of stakeholders to enter into a dialogue about program intention, impact and, ultimately, future direction. This two-day workshop provides an introduction to MSC, including designing your own MSC process.
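
By way of illustration only (the field names and example story below are invented, not drawn from the MSC guide), the record-keeping behind an MSC process might look like this: stories of change are collected in a common format, and a review panel documents which story it selected for each domain and why:

```python
# Illustrative MSC record-keeping: collected stories plus the panel's documented choice.
from dataclasses import dataclass


@dataclass
class ChangeStory:
    storyteller: str
    domain: str          # e.g. "changes in people's participation"
    story: str
    significance: str    # why the storyteller thinks the change matters


def record_selection(stories, domain, chosen_index, panel_reason):
    """Record the panel's choice for one domain together with its reasons.

    The choice itself is made through discussion, not by any formula;
    this function only documents the outcome of that dialogue.
    """
    candidates = [s for s in stories if s.domain == domain]
    return {"domain": domain, "selected": candidates[chosen_index], "panel_reason": panel_reason}


stories = [
    ChangeStory(
        storyteller="ward resident",
        domain="changes in people's participation",
        story="After the committee formed, tenants raised the drainage problem directly with the utility.",
        significance="The first time renters' concerns reached the provider unmediated.",
    ),
]
selection = record_selection(
    stories, "changes in people's participation", 0,
    "Shows a shift in who gets to raise issues, not just better services.",
)
print(selection["panel_reason"])
```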

Evaluator as Facilitator – AU$500
One-day workshop: 17th October

Course Description: This workshop explores the role of evaluator as facilitator when undertaking participatory evaluations. The course includes a reflection on the two roles and explores their overlap when conducting participatory evaluation processes. In particular, we share our experiences and explain how to facilitate some key participatory evaluation methods, such as program logic, the Most Significant Change (MSC) technique and the evaluation summit. We also discuss the philosophies underlying different approaches to evaluation and facilitation, and explore practical applications in a range of contexts.

AusAID first: Annual Review of Development Effectiveness 2007

The inaugural Annual Review of Development Effectiveness, produced by the Office of Development Effectiveness, was tabled in Parliament on 20 March 2008.

The review is a key element in efforts to strengthen the effectiveness of the aid program as the aid budget is scaled up to reach 0.5% of Gross National Income by 2015. It provides an annual health check of the program and identifies areas where effectiveness could be improved.

The review found that Australia manages its aid activities well and is achieving good results: more than three quarters of activities were on track to meet their objectives in 2006-07, with objectives ranging from better budgeting to stronger service delivery. Continue reading “AusAID first: Annual Review of Development Effectiveness 2007”

International Aid Transparency Initiative

[From the Development Gateway Foundation] In support of the Accra Agenda for Action, 14 donors committed to increasing transparency. Participants from developed countries were joined by heads of multilateral and bilateral funding institutions and representatives of foundations in an agreement to make information on development aid more accessible. They promised to establish a common format for the publication of information on aid by 2010. The signatories to the International Aid Transparency Initiative are Australia, Denmark, Finland, Germany, Ireland, the Netherlands, Norway, Sweden, the United Kingdom, the European Commission, the World Bank, the United Nations Development Programme, the GAVI Alliance and the Hewlett Foundation.

See the International Aid Transparency Initiative Accra Statement

Continue reading “International Aid Transparency Initiative”

Monitoring and Evaluation for Results – Vienna, Austria

URL: www.worldbank.org/wbi/evaluation/training/vienna2008
Date: December 2008

Join us in December 2008 in Vienna for a one-week course covering the fundamental issues of monitoring and evaluation (M&E) for programs and projects. The course is organized by the World Bank Institute Evaluation Group – which has trained over 4,000 participants across 60 countries in the last decade – and is taught by senior evaluation experts. Combining attention to M&E theory with practical applications, the course will give its 30 participants knowledge and experience of the M&E tools, techniques and resources needed for planning, organizing and/or managing programs and projects.

1st offering: December 8 (Mon) – 12 (Fri), 2008 (available in English or Russian)
2nd offering: December 15 (Mon) – 19 (Fri), 2008 (available in English only)

[Video conference] From Rhetoric to Action: E-Government and Aid Effectiveness

A videoconferenced workshop between Washington DC, Berlin and Paris
Date: 17th September 2008

Program Description

Increasing the impact of development aid is the core objective of the Paris Declaration, a document endorsed by more than 100 developing and donor countries and multilateral agencies in 2005. The 2008 “Evaluation of the implementation of the Paris Declaration” calls for faster progress from rhetoric to action by both partner governments and donors. In this context, this workshop, organized jointly by GTZ and the GICT/e-Development Thematic Group, explores the connection between e-Government and aid effectiveness. Drawing on input from practitioners in partner country governments and development organisations, the workshop will address two questions:

  • How can e-Government contribute to aid effectiveness?
  • How does the concern for aid effectiveness inform the way we invest in e-government?

Continue reading “[Video conference] From Rhetoric to Action: E-Government and Aid Effectiveness”

Survey on Evaluation of US Government Foreign Assistance

From: “ccwincek”
To: MandENEWS@yahoogroups.com
Date: Mon, 08 Sep 2008 21:23:45 -0000
Subject: Survey on Evaluation of US Government Foreign Assistance

Dear Fellow Evaluators (and colleagues interested in evaluation),

In 2001, we completed a study for USAID (CDIE) called the Evaluation of USAID’s Evaluation Experience. With all of the recent discussion of Foreign Assistance Reform, we thought that this would be an ideal time to update the study and broaden beyond USAID to all US Government funded foreign assistance. Many of the analyses, recommendations, and reports on U.S. foreign assistance reform emphasize how important it will be to improve monitoring and evaluation efforts. None of them say how. This is the question we seek to explore with your input.

We have developed an anonymous online survey and would be very appreciative if you would complete it; the link is below. You can complete Sections 1-3 of the questionnaire, which are quick-response questions, and/or Section 4, which has open-ended questions, as you prefer.

This is the link:
http://www.surveymonkey.com/s.aspx?sm=CXzl4PwpWqU7TTE704ZSaQ_3d_3d

This link will be open until September 22, 2008.
Continue reading “Survey on Evaluation of US Government Foreign Assistance”

Aristotle and Plato at it again? Philosophical divergence within international aid project design and monitoring & evaluation

Available from the AID-IT website

Abstract: “International aid projects are broadly concerned with fostering change. Frequently, the ‘theory of change’ within an aid project is communicated using Logical Framework Analysis, or the ‘logframe’. The logframe may be viewed from at least two philosophical perspectives: functionalist and interpretivist. Functionalism is found to be useful for problem analysis and project design since it enables a deconstruction of the goal into functional components. Interpretivism is found to assist project monitoring and evaluation since it draws attention to the role of human actors within the social change process, thereby clarifying the social research plan. A bilateral aid program in the Philippines is described to illustrate the practical differences arising from the divergent philosophies.”

Editor’s comments: This paper argues for the use of more actor-oriented versions of the Logical Framework, to make it easier to identify who should monitor and evaluate what in a development project. There is substantial overlap between the arguments presented here and those in my paper on this website on the need for a Social Framework (“a Logical Framework re-designed as if people and their relationships mattered”). Continue reading “Aristotle and Plato at it again? Philosophical divergence within international aid project design and monitoring & evaluation”
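
To illustrate the contrast discussed above (a toy sketch only; the levels, indicators and actors below are invented, not taken from the paper or from the Social Framework), a conventional logframe decomposes the goal into functional levels, while an actor-oriented variant also records who is expected to monitor each level and to whom they report:

```python
# Toy contrast: a functional logframe versus an actor-oriented variant (invented example).
functional_logframe = [
    {"level": "Goal", "statement": "Improved urban water services for poor households",
     "indicator": "% of households with reliable supply"},
    {"level": "Purpose", "statement": "Utility responds to community-identified priorities",
     "indicator": "share of complaints resolved within 30 days"},
    {"level": "Output", "statement": "Ward committees submit quarterly service reports",
     "indicator": "number of reports submitted"},
]

# The actor-oriented version adds who observes each level and who uses the evidence.
responsibilities = [
    ("independent survey team", "donor and municipality"),
    ("ward committees", "utility management"),
    ("committee secretaries", "ward assemblies"),
]

actor_oriented_logframe = [
    {**row, "monitored_by": actor, "reported_to": audience}
    for row, (actor, audience) in zip(functional_logframe, responsibilities)
]

for row in actor_oriented_logframe:
    print(f"{row['level']}: monitored by {row['monitored_by']}, reported to {row['reported_to']}")
```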