Results Based Management (RBM): A list of resources


CIDA website: Results-based Management

Results-based Management (RBM) is a comprehensive, life-cycle approach to management that integrates business strategy, people, processes, and measurements to improve decision-making and to drive change.

The approach focuses on getting the right design early in a process, implementing performance measurement, learning and changing, and reporting on performance.

  • RBM Guides
  • RBM Reports
  • Related Performance Sites

  • ADB website: Results Based Management Explained

    Results Based Management (RBM) can mean different things to different people. A simple explanation is that RBM is the way an organization is motivated and applies processes and resources to achieve targeted results.

    Results refer to outcomes that convey benefits to the community (e.g. the Education for All (EFA) targets set in both Mongolia and Cambodia). Results also encompass the service outputs that make those outcomes possible (such as trained students and trained teachers). The term ‘results’ can also refer to internal outputs, such as services provided by one part of the organization for use by another. The key point is that results differ from ‘activities’ or ‘functions’: many people, when asked what they produce (services), describe what they do (activities).

    RBM encompasses four dimensions, namely:

    • specified results that are measurable, monitorable and relevant
    • resources that are adequate for achieving the targeted results
    • organizational arrangements that ensure authority and responsibilities are aligned with results and resources
    • processes for planning, monitoring, communicating and resource release that enable the organization to convert resources into the desired results.

    RBM may use some new words, or apply specific meanings to words already in general usage. See the introduction to RBM presentation [PDF | 56 pages].

    RBM references that provide more background


    UNFPA website: Results-Based Management at UNFPA

    There is a broad trend among public sector institutions towards Results-Based Management (RBM). Development agencies, both bilateral (such as Canada, the Netherlands, the UK and the US) and multilateral (such as UNDP, UNICEF and the World Bank), are adopting RBM with the aim of improving programme and management effectiveness and accountability, and of achieving results.

    RBM is fundamental to the Fund’s approach and practice in fulfilling its mandate and effectively providing assistance to developing countries. At UNFPA, RBM means:

    • Establishing clear organizational vision, mission and priorities, which are translated into a four-year framework of goals, outputs, indicators, strategies and resources (MYFF);
    • Encouraging an organizational and management culture that promotes innovation, learning, accountability, and transparency;
    • Delegating authority and empowering managers and holding them accountable for results;
    • Focusing on achieving results, through strategic planning, regular monitoring of progress, evaluation of performance, and reporting on performance;
    • Creating supportive mechanisms, policies and procedures, building and improving on what is in place, including the operationalization of the logframe;
    • Sharing information and knowledge, learning lessons, and feeding these back into improving decision-making and performance;
    • Optimizing human resources and building capacity among UNFPA staff and national partners to manage for results;
    • Making the best use of scarce financial resources in an efficient manner to achieve results;
    • Strengthening and diversifying partnerships at all levels towards achieving results;
    • Responding to the realities of country situations and needs, within the organizational mandate.

    OECD report: RESULTS BASED MANAGEMENT IN THE DEVELOPMENT CO-OPERATION AGENCIES: A REVIEW OF EXPERIENCE BACKGROUND REPORT

    In order to respond to the need for an overview of the rapid evolution of RBM, the DAC Working Party on Aid Evaluation (WP-EV) initiated a study of performance management systems. The ensuing draft report was presented to the February 2000 meeting of the WP-EV and subsequently revised. It was written by Ms. Annette Binnendijk, consultant to the DAC WP-EV.

    This review constitutes the first phase of the project; a second phase involving key informant interviews in a number of agencies is due for completion by November 2001.

    158 pages, including a 12-page conclusion


    This list has a long way to go…!

    Training providers

    A list is available on the old site, at https://www.mande.co.uk/training.htm

    That list will be moved here in the near future.

    The road to nowhere? Results based management in international cooperation

    Howard White provides a critique of this approach:


    Results-based management has become a fact of life for development agencies. They might hope to learn from the experience of the US Agency for International Development (USAID), which has already gone down this road. It is indeed instructive that USAID has come back up the road again saying ‘there’s nothing down there’. Development agencies have, however, been rightly criticised in the past for paying too little attention to the final impact of their activities, so we would like to support a results-based approach. But we should not do so blindly when it suffers from the severe limitations outlined below. Serious attempts to link agency performance to developmental outcomes must rely upon the log-frame. The log-frame is not a universal panacea but, used properly, it can force agencies into a critical examination of the nature of their programmes and projects, and the results they achieve.

    This posting is available in full at the Euforic website

    M&E blogs: A List

    • EvalThoughts, by Amy Germuth, President of EvalWorks, LLC, a woman-owned small evaluation and survey research consulting business in Durham, NC, United States.
    • Evaluation and Benchmarking. “This weblog is an on-line workspace for the whole of Victorian government Benchmarking Community of Practice.”
    • M&E Blog, by…?
    • Aid on the Edge of Chaos, by Ben Ramalingam
    • Design, Monitoring and Evaluation, by Larry Dershem, Tbilisi, Georgia
    • Managing for Impact: About “Strengthening Management for Impact” for MFIs
    • Genuine Evaluation: “Patricia J Rogers and E Jane Davidson blog about real, genuine, authentic, practical evaluation”
    • Practical Evaluation, by Samuel Norgah
    • AID/IT M&E Blog: “…is written by Paul Crawford, and is part of a wider AID/IT website”
    • Evaluateca: a Spanish-language evaluation blog maintained by Rafael Monterde Diaz. Information, news, views and critical comments on evaluation.
    • Empowerment Evaluation Blog: “This is a place for exchanges and discussions about empowerment evaluation practice, theory, and current debates in the literature.” Run by Dr. David Fetterman.
    • E-valuation: “constructing a good life through the exploration of value and valuing”, by Sandra Mathison, Professor, Faculty of Education, University of British Columbia
    • Intelligent Measurement. This blog is created by Richard Gaunt in London and Glenn O’Neil in Geneva and focuses on evaluation and measurement in communications, training, management and other fields.
    • Managing for Impact: Let’s talk about MandE! “Welcome to the dedicated SMIP ERIL blog on M&E for managing for impact! An IFAD-funded regional programme, SMIP (Strengthening Management for Impact) is working with pro-poor initiatives in eastern & southern Africa to build capacities to better manage towards impact. It does so through training courses for individuals, technical support to projects & programmes, generating knowledge, providing opportunities for on-the-job training, and policy dialogue.”
    • MCA Monitor Blog “…is a part of CGD’s MCA Monitor Initiative, which tracks the effectiveness of the US Millennium Challenge Account. Sheila Herrling, Steve Radelet and Amy Crone, key members of CGD’s MCA Monitor team, contribute regularly to the blog. We encourage you to join the discussion by commenting on any post”
    • OutcomesBlog.Org: “Dr Paul Duignan on real world strategy, outcomes, evaluation & monitoring.” Dr Paul Duignan is a specialist in outcomes, performance management, strategic decision making, evaluation, and assessing research and evidence as the basis for decision making. He has developed the area of outcomes theory and its application in Systematic Outcomes Analysis, the outcomes software DoView, and the simplified approach to his work, Easy Outcomes. He works at an individual, organizational and societal level to develop ways of identifying and measuring outcomes which facilitate effective action.
    • Rick on the Road: “Reflections on the monitoring and evaluation of development aid projects, programmes and policies, and development of organisations’ capacity to do the same.” This blog also functions as the Editorial section of the MandE NEWS website.
    • The Usable Blog: “Thoughts, ideas and resources for non-profit organizations and funders about the independent sector in general and program evaluation in particular”, by Eric Graig
    • The MSC Translations blog is maintained by Rick Davies, and is part of the MandE NEWS website. The purpose of the blog is: (1) to make available translations of the MSC Guide in languages other than English; and (2) to solicit and share comments on the quality of these translations, so that they can be improved. The original English version can be found here: The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use.
    • Zen and the art of monitoring & evaluation “This blog is some of the rambling thoughts of Paul Crawford, a monitoring & evaluation (M&E) consultant for international aid organisations” Paul is based in Australia.

    And other lists of M&E blogs

    Improving health services through community score cards. A case study from Andhra Pradesh, India

    Case study 1, Andhra Pradesh, India: improving health services through community score cards
    Misra, Vivek, et al., August 2007

    This eight-page note summarises the findings, processes, concerns, and lessons learned from a project in Andhra Pradesh, one of six pilot projects aimed at the application of specific social accountability tools in different contexts of service delivery.

    SYSTEMS CONCEPTS IN EVALUATION: AN EXPERT ANTHOLOGY

    Bob Williams and Iraj Imam (eds.)
    EdgePress/American Evaluation Association (2007)

    Systems Concepts in Evaluation: An Expert Anthology brings together a wide range of systems concepts, methodologies and methods, and applies them to evaluation settings. The book addresses the questions:

    • What is a systems approach?
    • What makes it different from other approaches?
    • Why is it relevant to evaluation?

    The 14 chapters cover a wide range of systems concepts and methods. Most chapters are case study based and describe the use of systems concepts in real-life evaluations. The approaches and methods covered include:

    • System Dynamics (both quantitative and qualitative)
    • Cybernetics and the Viable System Model
    • Soft Systems Methodology
    • Critical Systems Thinking
    • Complex Adaptive Systems

    There are also overview chapters that explore the history and diversity of systems approaches and their potential within the evaluation field. A substantial introduction by Gerald Midgley reviews the key developments in systems concepts and methods over the past 50 years, and explores the implications of each of those developments for evaluation.

    Although focused on evaluation, the book is a valuable source for anyone interested in systems concepts,
    action research and reflective inquiry. It is useful for both teaching and practice.

    Chapters:

    • Introduction, Iraj Imam, Amy LaGoy, Bob Williams and authors
    • Systems Thinking for Evaluation, Gerald Midgley
    • A Systemic Evaluation of an Agricultural Development: A Focus on the Worldview Challenge, Richard Bawden
    • System Dynamics-based Computer Simulations and Evaluation, Daniel D Burke
    • A Cybernetic Evaluation of Organizational Information Systems, Dale Fitch
    • Soft Systems in a Hardening World: Evaluating Urban Regeneration, Kate Attenborough
    • Using Dialectic Soft Systems Methodology as an Ongoing Self-evaluation Process for a Singapore Railway Service Provider, Boon Hou Tay & Bobby Kee Pong Lim
    • Evaluation Based on Critical Systems Heuristics, Martin Reynolds
    • Human Systems Dynamics: Complexity-based Approach to a Complex Evaluation, Glenda H Eoyang
    • Evaluating Farm and Food Systems in the US, Kenneth A Meter
    • Systemic Evaluation in the Field of Regional Development, Richard Hummelbrunner
    • Evaluation in Complex Governance Arenas: the Potential of Large System Action Research, Danny Burns
    • Evolutionary and Behavioral Characteristics of Systems, Jay Forrest
    • Concluding Comments, Iraj Imam, Amy LaGoy, Bob Williams and authors

    PUBLICATION AND PURCHASE DETAILS

    NAME: Systems Concepts in Evaluation: An Expert Anthology
    EDITORS: Bob Williams and Iraj Imam
    PAGES: 222 pp

    ISBN 978-0-918528-22-3 paperback
    ISBN 978-0-918528-21-6 hardbound

    PUBLISHER :

    EdgePress/American Evaluation Association (2007)

    PURCHASE

    Available via Amazon: hardback only, US$36 plus postage.

    Pathways for change: monitoring and evaluation

    This Brief is an edited summary, prepared by Susanne Turrall, of a paper written by Kath Pasteur and Susanne Turrall (2006): A synthesis of monitoring and evaluation experience in the Renewable Natural Resources Research Strategy.

    “Monitoring and evaluation (M&E) plays a central role in ensuring accountability, informing decision- making and, more broadly, facilitating learning. The programmes within the DFID-funded Renewable Natural Resources Research Strategy (RNRRS) have developed some innovative methods of M&E. The RNRRS also saw an evolution in thinking in M&E, moving from a focus on the M&E of research products to a recognition that the context and mechanisms for adoption of research products are equally important, as is the effect on poverty reduction.”

    Horizontal Evaluation: Fostering Knowledge Sharing and Program Improvement within a Network

    Authors: Thiele, Graham; Devaux, Andre; Velasco, Claudio; Horton, Douglas
    American Journal of Evaluation, v28 n4 p493-508 2007

    Abstract: Horizontal evaluation combines self-assessment and external evaluation by peers. Papa Andina, a regional network that works to reduce rural poverty in the Andean region by fostering innovation in potato production and marketing, has used horizontal evaluations to improve the work of local project teams and to share knowledge within the network. In a horizontal evaluation workshop, a project team and peers from other organizations independently assess the strengths and weaknesses of a research and development (R&D) approach being developed, and then compare the assessments. Project team members formulate recommendations for improving the R&D approach, and peers consider ways to apply it back home. Practical results of horizontal evaluation have included strengthening the R&D approaches being developed, experimenting with their use at new sites, improvements in other areas of work, and strengthened interpersonal relations among network members.

    Also available as ILAC Brief: http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief13_Horizontal_Evaluation.pdf

    And a Spanish version of the same Brief

    Evaluation Of Citizens’ Voice & Accountability – Review Of The Literature & Donor Approaches Report

    O’Neill, T., Foresti, M. and Hudson, A. (2007) Evaluation of Citizens’ Voice and Accountability: Review of the Literature and Donor Approaches. London: DFID.

    Excerpt

    1.3 A core group of DAC partners are collaborating on a joint evaluation of development aid for strengthening citizens’ voice and the accountability of public institutions. The Overseas Development Institute has been contracted to undertake the first stage of this evaluation, which involves the development and piloting of an evaluation framework. This literature review is the first output from this first phase. It aims to: (i) review the theoretical debates on voice and accountability and how they relate to development; (ii) review the different donor approaches to supporting voice and accountability and identify commonalities and differences across contexts; (iii) provide an overview of evaluation theory and practice in relation to voice and accountability interventions; and (iv) identify key knowledge gaps in relation to the effectiveness of donors in supporting voice and accountability.

    1.4 This review has three main sections. Section 2 surveys the academic literature to present current thinking on what voice and accountability mean, how they operate in practice, and how they relate to the achievement of broader development objectives. Section 3 turns to the donors’ own understanding of voice and accountability as set out in their relevant policy and guidance documents. It discusses how the donors see voice and accountability contributing to their poverty reduction mandates and what approaches they have adopted to strengthen them, including in different contexts. Section 4 considers the main issues relating to the evaluation of interventions to strengthen voice and accountability. It first reviews some of the methodological debates in the theoretical literature before summarising the donors’ own evaluative efforts in this field, identifying both common findings and key gaps in their knowledge.

    Contents:
    1. Introduction 1
    2. Voice and Accountability: A view from the literature 3
    Voice and accountability: a basic static model 3
    Voice and accountability: a complex dynamic reality 5
    Relating voice and accountability to other key concepts 6
    Voice, accountability and development outcomes 9
    3. Voice and accountability: A view from the donors 13
    Why do donors want to strengthen voice and accountability? 13
    What strategies do donors adopt for strengthening voice and accountability? 18
    Do donor approaches take account of context? 25
    4. Evaluating voice and accountability 29
    Approaches and frameworks for evaluating voice and accountability interventions 29
    What have donors learnt about their effectiveness? 36
    5. Conclusions 47
    Annexes 49
    References 53

    Negotiated Learning: Collaborative Monitoring for Forest Resource Management

    (via Pelican email list)

    Dear all

    Niels has asked me to make you aware of a new publication that some ‘Pelican-ers’ might find relevant.

    I have edited a book on how learning and monitoring can become better ‘friends’ than is currently usually the case. The book comes off the press tomorrow. The full reference: Guijt, Irene, ed. (2007). Negotiated Learning: Collaborative Monitoring for Forest Resource Management. Washington DC, Resources for the Future/Center for International Forestry Research. Although the cases in the book focus on natural resource (forest) management, the issues about how to create genuine learning through the construction, negotiation and implementation of a monitoring process will have much wider relevance.

    Full details on how to obtain the book can be found at http://www.rff.org/rff/RFF_Press/CustomBookPages/Negotiated-Learning.cfm , where the book is described as follows:

    “The first book to critically examine how monitoring can be an effective tool in participatory resource management, Negotiated Learning draws on the first-hand experiences of researchers and development professionals in eleven countries in Africa, Asia, and South America. Collective monitoring shifts the emphasis of development and conservation professionals from externally defined programs to a locally relevant process. It focuses on community participation in the selection of the indicators to be monitored, as well as in the learning and application of knowledge from the data that are collected. As with other aspects of collaborative management, collaborative monitoring emphasizes building local capacity so that communities can gradually assume full responsibility for the management of their resources. The cases in Negotiated Learning highlight best practices but stress that collaborative monitoring is a relatively new area of theory and practice. The cases focus on four themes: the challenge of data-driven monitoring in forest systems that supply multiple products and serve diverse functions and stakeholders; the importance of building upon existing dialogue and learning systems; the need to better understand social and political differences among local users and other stakeholders; and the need to ensure the continuing adaptiveness of monitoring systems.”

    PS: Links to full texts of some chapters

    Chap8_McDougall.pdf

    Chapter10_Kamoto.pdf

    Chap13_Conclusion.pdf

    Greetings,

    irene

    Learning by Design

    Bredeweg 31, 6668 AR Randwijk, The Netherlands
    Tel. (0031) 488-491880 Fax. (0031) 488-491844