Vic Murray, University of Victoria, 2004. Available as PDF.

Abstract: This paper reviews the underlying theoretical bases for the evaluation of organizational performance. It then examines representative samples of empirical research into actual evaluation practices in a variety of nonprofits in Canada, the U.S. and Britain. Some of the most popular tools and systems for evaluation currently recommended by consultants and others are then reviewed. This prescriptive literature, it is shown, takes little account of the findings of empirical research, and as a result its approaches may often prove ineffective. Finally, an alternative approach is suggested that attempts to integrate the research findings with practical tools of value to practitioners.


It is a perplexing, but not uncommon, phenomenon in nonprofit organization studies that there is so little connection between the work of those who offer advice on how organizations in this sector might become more effective and the work of those who carry out formally designed empirical research into how these organizations actually behave. Nowhere is this gap between "how to" and "what is" more apparent than in the field of performance assessment and evaluation.

Can we obtain the required rigour without randomisation? Oxfam GB’s non-experimental Global Performance Framework

Karl Hughes, Claire Hutchings, August 2011. 3ie Working Paper 13. Available as PDF.

[found courtesy of @3ieNews]


“Non-governmental organisations (NGOs) operating in the international development sector need credible, reliable feedback on whether their interventions are making a meaningful difference but they struggle with how they can practically access it. Impact evaluation is research and, like all credible research, it takes time, resources, and expertise to do well, and – despite being under increasing pressure – most NGOs are not set up to rigorously evaluate the bulk of their work. Moreover, many in the sector continue to believe that capturing and tracking data on impact/outcome indicators from only the intervention group is sufficient to understand and demonstrate impact. A number of NGOs have even turned to global outcome indicator tracking as a way of responding to the effectiveness challenge. Unfortunately, this strategy is doomed from the start, given that there are typically a myriad of factors that affect outcome level change. Oxfam GB, however, is pursuing an alternative way of operationalising global indicators. Closing and sufficiently mature projects are being randomly selected each year among six indicator categories and then evaluated, including the extent each has promoted change in relation to a particular global outcome indicator. The approach taken differs depending on the nature of the project. Community-based interventions, for instance, are being evaluated by comparing data collected from both intervention and comparison populations, coupled with the application of statistical methods to control for observable differences between them. A qualitative causal inference method known as process tracing, on the other hand, is being used to assess the effectiveness of the organisation’s advocacy and popular mobilisation interventions. 
However, recognising that such an approach may not be feasible for all organisations, in addition to Oxfam GB’s desire to pursue complementary strategies, this paper also sets out several other realistic options available to NGOs to step up their game in understanding and demonstrating their impact. These include: 1) partnering with research institutions to rigorously evaluate “strategic” interventions; 2) pursuing more evidence informed programming; 3) using what evaluation resources they do have more effectively; and 4) making modest investments in additional impact evaluation capacity.”
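The community-based evaluation approach described in the quote above, comparing data from intervention and comparison populations while statistically controlling for observable differences between them, can be illustrated with a deliberately simplified sketch. The example below uses exact stratification on a single covariate, a basic relative of the matching and regression methods such evaluations typically employ; the data, variable names, and function here are hypothetical illustrations, not Oxfam GB's actual implementation.

```python
# Illustrative sketch only: controlling for one observable difference
# (an "education" stratum) between intervention and comparison groups
# via exact stratification. All data and names are hypothetical.
from collections import defaultdict

def stratified_effect(records):
    """Average within-stratum difference in mean outcome
    (intervention minus comparison), weighted by stratum size."""
    strata = defaultdict(lambda: {"t": [], "c": []})
    for group, stratum, outcome in records:
        key = "t" if group == "intervention" else "c"
        strata[stratum][key].append(outcome)

    total_n, weighted_sum = 0, 0.0
    for cell in strata.values():
        # Only strata containing both groups are comparable.
        if cell["t"] and cell["c"]:
            n = len(cell["t"]) + len(cell["c"])
            diff = (sum(cell["t"]) / len(cell["t"])
                    - sum(cell["c"]) / len(cell["c"]))
            weighted_sum += n * diff
            total_n += n
    return weighted_sum / total_n if total_n else None

# Hypothetical household rows: (group, education stratum, income outcome)
data = [
    ("intervention", "primary", 120), ("comparison", "primary", 100),
    ("intervention", "primary", 110), ("comparison", "primary", 105),
    ("intervention", "secondary", 200), ("comparison", "secondary", 180),
]
print(stratified_effect(data))  # → 15.0
```

Stratifying before differencing means the estimate is not distorted when, say, the intervention group happens to be better educated than the comparison group; real evaluations extend the same idea to many covariates at once with propensity-score or regression methods.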

Monitoring and Evaluating Civil Service Performance

[from the Research Helpdesk of the Governance and Social Development Resource Centre]

Request: Summarise recent research findings and intellectual debate on how to best monitor and evaluate civil service performance, including international best practice and issues around standardised indicators (along the lines of the PEFA framework).

Key findings: There continues to be debate as to how best to monitor and evaluate civil service performance. This debate relates to what to measure, the best indicators to use, whether such a framework is appropriate and how best to implement a chosen framework.

When creating evaluation procedures for civil service performance, it is important to clarify the level of evaluation: is it at the individual, team, institutional, or system level? There is currently no performance appraisal system that is widely considered objective and effective for assessing performance at the individual level.

UNDP (2009) currently provides the most comprehensive guide to measuring public administration performance. The first part of the guide offers guidance based on feedback from users of assessment tools and a distillation of good practices. The second part provides detailed information on public administration assessment tools, including nine tools for assessing public human resource management. Many of these tools derive their indicators from private sector practice. The World Bank's Actionable Governance Indicators instrument is arguably the most comprehensive in terms of breadth of indicators.

Full response: