Policy Practice Brief 6 – What Makes A Good Governance Indicator?

January 2011, Gareth Williams

http://www.thepolicypractice.com/papersdetails.asp?code=17

The rise to prominence of good governance as a key development concern has been marked by an increasing interest in measurement and the production of a huge range of governance indicators. When used carefully such indicators provide a valuable source of information on governance conditions and trends. However, when used carelessly they can misinform and mislead. The purpose of this brief is to make sense of the different types of governance indicator and how they are used and misused. It warns against the commission of ‘seven deadly sins’ representing the most common pitfalls. The paper puts forward guidelines to ensure a more careful use and interpretation of governance indicators, and highlights the need for providers of indicators to be subject to greater transparency, scrutiny, evaluation and peer review. From the perspective of political economy analysis the challenge is to make the indicators more relevant to understanding the underlying political processes that are the key drivers of better governance.

‘Realist evaluation – understanding how programs work in their context’; An expert seminar with Dr. Gill Westhorp; Wageningen, the Netherlands; 29-03-2011

Date: 29-03-2011
Venue: Wageningen, the Netherlands

Dear colleague,

With pleasure we would like to announce an expert seminar with Dr. Gill Westhorp on 29th March 2011: ‘Realist evaluation – understanding how programs work in their context’.

Realist evaluation (Pawson and Tilley, 1997) is one type of theory-based evaluation. It aims to explore “what works, for whom, in what contexts, to what extent and how”. It adopts a particular understanding of how programs work, and uses a particular format for program theories to help guide evaluation design, data collection and analysis.

Realist evaluation has a particular focus on understanding the interactions between programs and their contexts, and the ways that these influence how programs work. Evaluation expert Dr. Gill Westhorp will discuss the concepts and assumptions that underpin this theory-based evaluation approach. What does realist evaluation bring to the table when evaluating development programs? How does it differ from existing approaches to evaluation in development? How does it understand, and deal with, complexity? What new insights can help strengthen the utility of evaluation for development?

During the morning, Gill will introduce the basic assumptions and key concepts in realist evaluation.  She will also briefly demonstrate how these ideas can be built into other evaluation models using two examples.  These models – realist action research and realist program logic – are participatory models which were designed for use in settings where limited resources, lack of capacity to collect outcomes data, complex programs, and (sometimes) small participant numbers make evaluation difficult.  In the afternoon, the practical implications for evaluation design, data collection and analysis will be discussed. Examples and practical exercises will be included throughout the day.

If you are interested and not too far away, please do come and join this interesting event!

Please find attached the factsheet flyer and registration form. We also suggest making an early hotel booking (http://www.hofvanwageningen.nl/?language=en) as the hotel is already quite full. Please indicate to the hotel that you are booking a room for the ‘expert seminar realist evaluation’.

Note: the expert seminar with Dr. Michael Quinn Patton on ‘developmental evaluation’ unfortunately had to be cancelled due to personal reasons. We hope to organise another opportunity with him early next year.

Looking forward to meeting you here at the expert seminar on realist evaluation!

Cecile Kusters (CDI), Irene Guijt (Learning by Design), Jan Brouwers (Context, international cooperation) and Paula Bilinsky (CDI)

Kind regards / Hartelijke groeten,

Cecile Kusters
Participatory Planning, Monitoring & Evaluation – Managing for Impact
Multi-Stakeholder Processes and Social Learning
Centre for Development Innovation
Wageningen UR
P.O. Box 88, 6700 AB Wageningen, The Netherlands
Tel. +31 (0)317 481407 (direct), +31 (0)317 486800 (reception)
Fax +31 (0)317 486801
e-mail cecile.kusters@wur.nl
www.cdi.wur.nl
PPME resource portal: http://portals.wi.wur.nl/ppme/
MSP resource portal: http://portals.wi.wur.nl/msp/
www.disclaimer-uk.wur.nl

2010 Annual Report on Results and Impact of IFAD Operations

The IFAD Office of Evaluation (IOE) has released its eighth Annual Report on Results and Impact of IFAD Operations (ARRI), based on evaluations carried out in 2009. The main objectives of the report are to highlight the results and impact of IFAD-funded operations, and to draw attention to systemic issues and lessons learned with a view to further enhancing the Fund’s development effectiveness. The report synthesizes results from 17 projects evaluated in 2009 and draws upon four country programme evaluations in Argentina, India, Mozambique and Niger, as well as two corporate-level evaluations, namely on innovation and gender.

This year’s ARRI found that the performance of past IFAD-supported operations is, on the whole, moderately satisfactory. However, performance of operations has improved over time in a number of areas (e.g. sustainability, IFAD’s own performance as a partner, and innovation), even though additional improvements can be achieved in the future. The ARRI also found that projects designed more recently tend to perform better than older-generation operations, as overall objectives and design are more realistic and they devote greater attention to results management.

Some areas remain a challenge, such as efficiency, and natural resources and the environment. The ARRI also notes that IFAD can do more to strengthen government capacity, which is one of the most important factors in achieving results on rural poverty. With regard to efficiency, the ARRI notes that the efficiency of IFAD-funded projects has improved from a low base in 2004/5, even though there is room for further enhancement. IOE will study efficiency in greater depth in 2011, within the framework of the planned corporate-level evaluation on the topic.

The report is available on the IFAD website, at the following address: http://www.ifad.org/evaluation/arri/2010/arri.pdf

Hard copies of the report are available with IOE and can be requested via e-mail to evaluation@ifad.org.

Cordial regards,

IOE Evaluation Communication Unit

For further information, please contact:
Mark Keating
Evaluation Information Officer
Office of Evaluation
IFAD
Tel: +39-06-54592048
e-mail: m.keating@ifad.org

Call for Abstracts – Forthcoming Sri Lankan Evaluation Association Conference June 6 -10, 2011

Date: June 6 -10, 2011
Venue: Colombo, Sri Lanka

Dear colleagues
The Sri Lanka Evaluation Association (SLEvA) will convene its fourth evaluation conference in Colombo this coming June. The conference will start with two days of professional development workshops and will provide an opportunity for sharing knowledge and ideas with development practitioners, evaluators, users of evaluation and policy makers. Please find the call for abstracts attached.

The overall theme of the conference will be ‘Evaluation for Policy and Action’, with the following subthemes: evaluation for influencing policy and policy evaluation; evaluation for supporting development programmes; evaluation in disaster reduction and management; evaluating networks and partnerships; building the evaluation field; evaluation methodologies and approaches; and other evaluation issues.

SLEvA invites abstracts of papers and proposals for panels, posters and exhibits. Abstracts should be no more than 250 words and should be sent by 15 April to sleva@sltnet.lk, copying somadesilva@gmail.com.

Full information on the conference will be available in the coming days on the SLEvA website: http://www.sleva.lk/

With warm regards,

Ada Ocampo
Regional Advisor – Evaluation
Asia-Pacific Shared Services Centre (APSSC)
UNICEF
19 Phra Atit Road, Bangkok 10200, Thailand
Tel: +66 2 356 9272 Fax: +66 2 280 5941
Email: aocampo@unicef.org
http://www.unicef.org/

“The Truth Wears Off” & “More Thoughts on the Decline Effect”

These two articles by Jonah Lehrer in the New Yorker (13/12/2010 and 03/01/2011) provide a salutary reminder of the limitations of experimental methods as a means of research when undertaken by mere mortals. They do not, in my reading, dismiss or invalidate the use of experimental methods, but they do highlight the need for a longer-term view of their efficacy, one which situates their use within a social context.

Many thanks to Irene Guijt for bringing these articles to my attention, and others, via the Pelican email list.

In “The Truth Wears Off”, Lehrer says he “wanted to explore the human side of the scientific enterprise. My focus was on a troubling phenomenon often referred to as the ‘decline effect’, which is the tendency of many exciting scientific results to fade over time. This empirical hiccup afflicts fields from pharmacology to evolutionary biology to social psychology. There is no simple explanation for the decline effect, but the article explores several possibilities, from the publication biases of peer-reviewed journals to the ‘selective reporting’ of scientists who sift through data.”

In “More Thoughts on the Decline Effect”, Lehrer responds to the various emails, tweets and comments posted in reply to his first article. Amongst other things, he says: “I think the decline effect is an important reminder that we shouldn’t simply reassure ourselves with platitudes about the rigors of replication or the inevitable corrections of peer review. Although we often pretend that experiments settle the truth for us—that we are mere passive observers, dutifully recording the facts—the reality of science is a lot messier. It is an intensely human process, shaped by all of our usual talents, tendencies, and flaws.”

There are also many interesting comments following both of Lehrer’s posts.

The CES Learning and Innovation Prize is open for entries.

Closing date 17 January 2011

Charities Evaluation Services is celebrating the ways in which charities use monitoring information or evaluation findings to improve their work and influence others with the new Learning and Innovation Prize.

The Prize aims to highlight the contribution that monitoring and evaluation makes to improving service delivery, not just accountability, and to reward organisations that make the best use of the information they have.

Please note: the deadline for entries is 5pm, 17th January 2011.

Prize categories

This inspiring new award is split into four categories:

  • small charities (annual turnover under £500,000)
  • large charities (annual turnover over £500,000)
  • funders
  • organisations that support other charities.

Who can enter

Organisations that fit one of the above categories and that have used monitoring information or evaluation findings to improve their work and influence others can enter. For further information and specific criteria, please see the entry guidelines below.

We are looking for situations where monitoring or evaluation was done by the organisation itself or where an external evaluator was involved. Winners will be expected to demonstrate evidence that the findings changed something about project or service delivery or use of their resources, or influenced others to do so.

For more information and to download an entry form visit: http://www.ces-vol.org.uk/prize

Charities Evaluation Services (CES) is the UK’s leading provider of training, consultancy and information on evaluation and quality systems in the third sector. We also publish PQASSO, the most widely used quality system in the sector.

CES is an independent charity. We work with third sector organisations and their funders.

Impact Evaluation Conference: “Mind the Gap”: From Evidence to Impact

Date: June 15-17 2011
Venue: Cuernavaca, Mexico

Each year billions of dollars are spent on tackling global poverty. Development programs and policies are designed to build sustainable livelihoods and improve lives. But is there real evidence to show which programs work and why? Are government and donor policies based on concrete and credible evidence?

The Mind the Gap conference on impact evaluation will address these questions and offer possible solutions. With a focus on Latin American countries, the conference will take place in Cuernavaca, Mexico, on June 15-17, 2011. It is co-hosted by the International Initiative for Impact Evaluation (3ie), the National Institute of Public Health of Mexico (INSP), the Inter-American Development Bank (IADB) and the Center for Labor and Social Distributive Studies, in coordination with the Impact Evaluation Network and the Poverty and Economic Policy Network (CEDLAS-IEN-PEP).

This conference will provide a platform to share and discuss experiences on how to best achieve evidence-based policy in sectors that are highly relevant for Latin America. To this end, the conference will mainstream a policy-focus into all its activities. The plenary sessions will address the challenges and progress made in building evidence into policy-making processes. The sector-focused sessions will be asked to address the engagement of stakeholders and policy-makers in the various studies presented. The conference will be preceded by a range of pre-conference clinics tailored to the interests and needs of both researchers and program managers.

The conference will accommodate only 400 attendees. The official languages of the conference are Spanish and English, and simultaneous translation will be provided for all conference sessions. Please register early to secure your attendance. Registration will open 1 March 2011, and early bird rates will be offered.

Check the conference website often for up to date conference information.  http://www.impactevaluation2011.org/

Bursaries are being made available to developing country participants with a proven interest in impact evaluation.

Bursary applications will also open 1 March, with preference given to authors of accepted abstracts.

Expert seminar with Dr MQ Patton ‘Developmental evaluation’

Date: 29th March 2011
Venue: the Netherlands

NOW CANCELLED. Further information will be provided when available.

With pleasure we would like to announce an expert seminar with Dr. Michael Quinn Patton on ‘Developmental evaluation – new kid on the evaluation block’.

Developmental evaluation is based on insights from complex dynamic systems, uncertainty, nonlinearity, and emergence. World renowned, award-winning evaluation expert Dr. Michael Quinn Patton will discuss the developmental evaluation framework as detailed in his book `Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use’. Patton will illustrate how developmental evaluation can be used for a range of purposes: ongoing program development; adapting effective principles of practice to local contexts; generating innovations and taking them to scale; and facilitating rapid response in crisis situations.

Participant discussions will focus on developmental evaluation’s value for the development sector. How is developmental evaluation different from existing practice, such as ongoing reflective monitoring, in development? What new insights can help strengthen the utility of evaluation for development?

During the morning, Dr. Patton will explain developmental evaluation and illustrate it with many examples from his own experience. In the afternoon, participants will debate the practical application of developmental evaluation in development, based on participants’ existing evaluation questions.

If you are interested and not too far away, please do come and join this interesting event!

For more info and registration: http://www.cdi.wur.nl/UK/newsagenda/agenda/DevelopmentalEvaluation_MichaelPatton
We suggest making an early hotel booking (http://www.hofvanwageningen.nl/?language=en) as the hotel is already quite full. Please also indicate to the hotel that you are booking a room for the ‘expert seminar with Patton’.

Looking forward to meeting you here!

Cecile Kusters (CDI), Irene Guijt (Learning by Design), Jan Brouwers (Context, international cooperation) and Karel Chambille (Hivos)
Kind regards / Hartelijke groeten,

Cecile Kusters
Centre for Development Innovation, Wageningen UR

What Accountability Pressures do MPs in Africa Face and How Do They Respond? Evidence from Ghana

Source: Lindberg, S., 2010, Journal of Modern African Studies, Vol. 48, No. 1, pp. 117-142 [via the Governance and Social Development Resource Centre]

Summary: What is the role of clientelism in African politics? How are MPs held accountable in Ghana? This article examines the daily accountability pressures and responses of Ghanaian Members of Parliament, the strength of the institution, and the formal and informal aspects of their role. It finds that these MPs devote a significant proportion of their time to producing and distributing private goods to constituents, and to constituent service. Marginal attention is devoted to legislating and executive oversight. Some MPs have been able to counter political clientelism, however, through civic education and by reformulating constituent expectations toward the production of collective, public goods.

Despite the rapid expansion in research on African politics, little is known about the daily behaviour of legislators, their accountability pressures and responses. This case study on Ghana finds that groups that hold MPs accountable include constituents, the local party, extended family, chiefs, religious leaders, civil society organisations (CSOs) and businesses (although these last two appear to exert little pressure). They require MPs to perform five core duties – the provision of private goods, constituency service, constituency representation, legislation and executive oversight:

  • Personal benefits and clientelistic goods: This type of accountability is the most common in MPs’ relationships with their constituents and is the one that puts the most pressure on MPs. Different groups have varied expectations of the form that such benefits should take, ranging from monetary assistance (such as school fees or small business start-up costs) to the provision of jobs. There is a clear division between rural and urban constituencies; urban MPs show much greater resistance to constituent demands.
  • Constituency service as community development: This is an area of heavy emphasis for constituents and chiefs, causing MPs to spend a lot of their time lobbying ministers for development projects for their area.
  • Constituency representation: There is a strong expectation of MPs to be heard in debates and to have a media presence. This is anchored in the traditional notion of family heads ‘speaking up’ for their people.
  • Legislation and executive oversight: It is primarily the executive which exerts pressure on MPs regarding legislation, particularly regarding voting conformity (for example, by withholding seats on lucrative tender boards). Active public debate and scrutiny are compromised by the strength of the executive over the legislature.

The clientelistic relationship between the MP and constituents stems from traditional notions of ‘head of the family’, one who has a moral obligation to solve problems for followers in need. The hybrid role of MP as family head places enormous pressures on officeholders to be responsive to constituents’ needs and priorities. MPs face the dual sanctions of losing office at election time and the informal shame, harassment and loss of status within the context of family and community. However, some MPs have been successful in translating the informal family head role into pressure for the production of collective goods by engaging in civic education and raising political awareness:

  • MPs that have held regular community meetings to explain legislative business and policy have been successful in developing a strong voice for collective goods.
  • Focusing expectations on collective, public, and national-level goods has significantly reduced pressure on MPs to personally provide private goods.
  • It has also increased constituent perception of the importance of legislative behaviour for chances of re-election. This in turn has reduced clientelistic behaviour and promoted democratic responsiveness.

Access full text: available online

Monitoring and Evaluating Civil Service Performance

[from the Research Helpdesk of the Governance and Social Development Resource Centre]

Request: Summarise recent research findings and intellectual debate on how to best monitor and evaluate civil service performance, including international best practice and issues around standardised indicators (along the lines of the PEFA framework).

Key findings: There continues to be debate as to how best to monitor and evaluate civil service performance. This debate relates to what to measure, the best indicators to use, whether such a framework is appropriate and how best to implement a chosen framework.

When creating evaluation procedures for civil service performance it is important to clarify the level of evaluation. Is it at an individual level, a team level, an institutional level, or a system level? There is currently no performance appraisal system that is widely considered objective and effective for assessing performance at an individual level.

UNDP (2009) currently provides the most comprehensive guide to measuring public administration performance. The first part of the guide consists of guidance based on feedback from users of assessments tools and a distillation of good practices. The second part provides detailed information on public administration assessment tools, with nine assessment tools provided for assessing Public Human Resource Management. Many of these tools derive their indicators from private sector practice. The World Bank’s Actionable Governance Indicators Instrument is arguably the most comprehensive in terms of breadth of indicators.

Full response: http://www.gsdrc.org/docs/open/HD722.pdf
