Results Based Management (RBM): A list of resources

CIDA website: Results-based Management

Results-based Management (RBM) is a comprehensive, life-cycle approach to management that integrates business strategy, people, processes, and measurements to improve decision-making and to drive change.

The approach focuses on getting the right design early in a process, implementing performance measurement, learning and changing, and reporting on performance.

  • RBM Guides
  • RBM Reports
  • Related Performance Sites

  • ADB website: Results Based Management Explained

    Results Based Management (RBM) can mean different things to different people. A simple explanation is that RBM is the way an organization is motivated and applies processes and resources to achieve targeted results.

Results refer to outcomes that convey benefits to the community (e.g. Education for All (EFA) targets set in both Mongolia and Cambodia). Results also encompass the service outputs that make those outcomes possible (such as trained students and trained teachers). The term ‘results’ can also refer to internal outputs, such as services provided by one part of the organization for use by another. The key point is that results differ from ‘activities’ or ‘functions’. Many people, when asked what they produce (services), describe what they do (activities).

    RBM encompasses four dimensions, namely:

    • specified results that are measurable, monitorable and relevant
    • resources that are adequate for achieving the targeted results
    • organizational arrangements that ensure authority and responsibilities are aligned with results and resources
    • processes for planning, monitoring, communicating and resource release that enable the organization to convert resources into the desired results.

RBM may use some new words, or apply specific meanings to words in general usage. See the introduction to RBM presentation [PDF | 56 pages].

    RBM references that provide more background

    UNFPA website: Results-Based Management at UNFPA

There is a broad trend among public sector institutions towards Results-Based Management (RBM). Development agencies, both bilateral (such as Canada, the Netherlands, the UK, and the US) and multilateral (such as UNDP, UNICEF and the World Bank), are adopting RBM with the aim of improving programme and management effectiveness and accountability, and of achieving results.

    RBM is fundamental to the Fund’s approach and practice in fulfilling its mandate and effectively providing assistance to developing countries. At UNFPA, RBM means:

    • Establishing clear organizational vision, mission and priorities, which are translated into a four-year framework of goals, outputs, indicators, strategies and resources (MYFF);
    • Encouraging an organizational and management culture that promotes innovation, learning, accountability, and transparency;
    • Delegating authority and empowering managers and holding them accountable for results;
    • Focusing on achieving results, through strategic planning, regular monitoring of progress, evaluation of performance, and reporting on performance;
    • Creating supportive mechanisms, policies and procedures, building and improving on what is in place, including the operationalization of the logframe;
    • Sharing information and knowledge, learning lessons, and feeding these back into improving decision-making and performance;
    • Optimizing human resources and building capacity among UNFPA staff and national partners to manage for results;
    • Making the best use of scarce financial resources in an efficient manner to achieve results;
    • Strengthening and diversifying partnerships at all levels towards achieving results;
    • Responding to the realities of country situations and needs, within the organizational mandate.


In order to respond to the need for an overview of the rapid evolution of RBM, the DAC Working Party on Aid Evaluation initiated a study of performance management systems. The ensuing draft report was presented to the February 2000 meeting of the WP-EV, and the document was subsequently revised. It was written by Ms. Annette Binnendijk, consultant to the DAC WP-EV.

    This review constitutes the first phase of the project; a second phase involving key informant interviews in a number of agencies is due for completion by November 2001.

158 pages, with a 12-page conclusion.

This list has a long way to go…!

    M&E blogs: A List

• EvalThoughts, by Amy Germuth, President of EvalWorks, LLC, a woman-owned small evaluation and survey research consulting business in Durham, NC, United States.
    • Evaluation and Benchmarking. “This weblog is an on-line workspace for the whole of Victorian government Benchmarking Community of Practice.”
    • M&E Blog, by…?
    • Aid on the Edge of Chaos, by Ben Ramalingam
• Design, Monitoring and Evaluation, by Larry Dershem – Tbilisi, Georgia
    • Managing for Impact: About “Strengthening Management for Impact” for MFIs
    • Genuine Evaluation: “Patricia J Rogers and E Jane Davidson blog about real, genuine, authentic, practical evaluation”
    • Practical Evaluation, by Samuel Norgah
    • AID/IT M&E Blog: “…is written by Paul Crawford, and is part of a wider AID/IT website”
• Evaluateca: a Spanish-language evaluation blog maintained by Rafael Monterde Diaz. Information, news, views and critical comments on evaluation.
• Empowerment Evaluation Blog: “This is a place for exchanges and discussions about empowerment evaluation practice, theory, and current debates in the literature.” Run by Dr. David Fetterman.
• E-valuation: “constructing a good life through the exploration of value and valuing”, by Sandra Mathison, Professor, Faculty of Education, University of British Columbia.
• Intelligent Measurement. This blog was created by Richard Gaunt in London and Glenn O’Neil in Geneva, and focuses on evaluation and measurement in communications, training, management and other fields.
• Managing for Impact: Let’s talk about MandE! “Welcome to the dedicated SMIP ERIL blog on M&E for managing for impact! An IFAD-funded Regional Programme, SMIP (Strengthening Management for Impact) is working with pro-poor initiatives in eastern and southern Africa to build capacities to better manage towards impact. It does so through training courses for individuals, technical support to projects and programmes, generating knowledge, providing opportunities for on-the-job training, and policy dialogue.”
    • MCA Monitor Blog “…is a part of CGD’s MCA Monitor Initiative, which tracks the effectiveness of the US Millennium Challenge Account. Sheila Herrling, Steve Radelet and Amy Crone, key members of CGD’s MCA Monitor team, contribute regularly to the blog. We encourage you to join the discussion by commenting on any post”
• OutcomesBlog.Org: “Dr Paul Duignan on real world strategy, outcomes, evaluation & monitoring.” Dr Paul Duignan is a specialist in outcomes, performance management, strategic decision making, evaluation, and assessing research and evidence as the basis for decision making. He has developed the area of outcomes theory and its application in Systematic Outcomes Analysis, the outcomes software DoView, and the simplified approach to his work, Easy Outcomes. He works at an individual, organizational and societal level to develop ways of identifying and measuring outcomes that facilitate effective action. For a bio see here.
• Rick on the Road: “Reflections on the monitoring and evaluation of development aid projects, programmes and policies, and development of organisations’ capacity to do the same. This blog also functions as the Editorial section of the MandE NEWS website.”
• The Usable Blog: “Thoughts, ideas and resources for non-profit organizations and funders about the independent sector in general and program evaluation in particular”, by Eric Graig.
• The MSC Translations blog is maintained by Rick Davies, and is part of the MandE NEWS website. The purpose of this blog is: 1. To make available translations of the MSC Guide in languages other than English. 2. To solicit and share comments on the quality of these translations, so they can be improved. The original English version can be found here: The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use.
    • Zen and the art of monitoring & evaluation “This blog is some of the rambling thoughts of Paul Crawford, a monitoring & evaluation (M&E) consultant for international aid organisations” Paul is based in Australia.

    And other lists of M&E blogs

Monitoring government policies: A toolkit for civil society organisations in Africa

    (identified via Source)

The toolkit was produced by CAFOD, Christian Aid and Trócaire.

This project was started by the three agencies with a view to supporting partner organisations, particularly church-based organisations, to hold their governments to account for the consequences of their policies. This toolkit specifically targets African partners, seeking to share the struggles and successes of partners already monitoring government policies with those that are new to this work.

The development of this toolkit has been an in-depth process. Two consultants were commissioned to research and write the toolkit. They were supported by a reference group composed of staff from CAFOD, Christian Aid and Trócaire and partner organisations with experience in policy monitoring. The draft toolkit was piloted with partners in workshops in Malawi, Sierra Leone and Ethiopia. Comments from the reference group and the workshops contributed to this final version of the toolkit.


1.1 Core concepts in policy monitoring 5
1.2 Identifying problems, causes and solutions 8
1.3 Beginning to develop a monitoring approach 10
Interaction 13
2.1 Different kinds of policies 15
2.2 Which policies to monitor 18
2.3 Access to policy information 22
2.4 Collecting policy documents 24
Interaction 27
3.1 Stakeholders of government policies 29
3.2 Target audiences and partners 31
3.3 Monitoring by a network of stakeholders 34
Interaction 37
4.1 Analysing the content of a policy 39
4.2 Defining your monitoring objectives 42
4.3 What kind of evidence do you need? 44
4.4 Choosing indicators 47
4.5 Establishing a baseline 50
Interaction 52
5.1 Budget basics 55
5.2 Resources for policy implementation 59
5.3 Budget analysis 61
5.4 Interaction 67

6.1 Interviews 69
6.2 Surveys 72
6.3 Analysing survey data and other coded information 77
6.4 Workshops, focus group discussions and observation 84
Interaction 89
Interaction 98
