Making government budgets more accessible and equitable

(from ID21)

Involvement in the budget process in poor countries has traditionally been limited to a select group of political actors. But this has changed over the last decade with legislators, civil society groups and the media playing a more active role. What impact is broader engagement having?

Research from the Institute of Development Studies, UK, examines the substance and impact of applied budget work undertaken by civil society groups. The research draws on six case studies of independent budget work in Brazil, Croatia, India, Mexico, South Africa and Uganda. One focus of the research is how civil society budget work influences government budget priorities and spending in a way that benefits poor and socially excluded groups.

Budget work is carried out by various types of organisations including non-government organisations (NGOs), networks and social movements, and research organisations. All the groups examined in the case studies share a commitment to increasing the influence of poor and marginalised groups in the budget process and ensuring that budget priorities reflect the needs of these groups.

The six organisations all engage in certain core activities centred on data analysis and dissemination, advocacy and capacity building. Most work on national and state-level budgets, though several groups also work at the local government level.

The research shows that independent budget work has the potential to deepen democracy by strengthening accountability, fostering transparency and encouraging participation. It can also increase financial allocations in areas that contribute to social justice and equity outcomes and ensure that public money is efficiently spent.

The research also reveals the limits to budget work. Any increases in financial allocations secured as a result of advocacy initiatives are likely to represent a small share of overall government spending. Also, the scope of budget work to influence financial allocations depends on the openness and flexibility of the budget process (spending priorities may not be open to change).

The impacts of budget work identified by the research include:

  • improving the transparency of budget decisions and budget processes and increasing the accountability of state actors
  • increasing awareness and understanding of budget issues
  • improving budget allocations in a way that benefits poor and socially excluded groups
  • ensuring better use of spending, for example in areas such as health and education, and reducing corruption (by tracking expenditures)
  • diversifying the range of actors engaged in budget processes (for example, legislators, civil society groups and the media)
  • strengthening democracy and deepening participation.

The research concludes that:

  • Budget work has been successful in a range of areas, including improving equity and social justice outcomes.
  • The technical nature of the budget process limits the scope for broadening citizen participation.
  • The challenge for budget groups is how to scale up and replicate the successful impacts achieved to date.
  • Influencing budget policies requires a combination of sound technical knowledge, effective communications and strategic alliances.
  • Promoting the voice of poor and socially excluded groups is an important indirect effect of budget work.

Source(s):
Robinson, Mark (2006) ‘Budget Analysis and Policy Advocacy: The Role of Non-governmental Public Action’, IDS Working Paper 279, Brighton: IDS.

Funded by: UK Economic and Social Research Council

id21 Research Highlight: 16 August 2007

Further Information:
Mark Robinson
Policy and Research Division
UK Department for International Development (DFID)
1 Palace Street
London SW1E 5HE
UK

Tel: +44 (0)20 70230000
Fax: +44 (0)20 70230636
Contact the contributor: mark-robinson@dfid.gov.uk

Results Based Management (RBM): A list of resources


CIDA website: Results-based Management

Results-based Management (RBM) is a comprehensive, life-cycle approach to management that integrates business strategy, people, processes, and measurements to improve decision-making and to drive change.

The approach focuses on getting the right design early in a process, implementing performance measurement, learning and changing, and reporting on performance.

  • RBM Guides
  • RBM Reports
  • Related Performance Sites

  • ADB website: Results Based Management Explained

    Results Based Management (RBM) can mean different things to different people. A simple explanation is that RBM is the way an organization is motivated and applies processes and resources to achieve targeted results.

    Results refer to outcomes that convey benefits to the community (e.g. Education for All (EFA) targets set in both Mongolia and Cambodia). Results also encompass the service outputs that make those outcomes possible (such as trained students and trained teachers). The term ‘results’ can also refer to internal outputs such as services provided by one part of the organization for use by another. The key issue is that results differ from ‘activities’ or ‘functions’. Many people, when asked what they produce (services), describe what they do (activities).

    RBM encompasses four dimensions, namely:

    • specified results that are measurable, monitorable and relevant
    • resources that are adequate for achieving the targeted results
    • organizational arrangements that ensure authority and responsibilities are aligned with results and resources
    • processes for planning, monitoring, communicating and resource release that enable the organization to convert resources into the desired results.

    RBM may use some new words or apply specific meanings to some words in general usage. See the introduction to RBM presentation [PDF | 56 pages].

    RBM references that provide more background


    UNFPA website: Results-Based Management at UNFPA

    There is a broad trend among public sector institutions towards Results-Based Management (RBM). Development agencies, both bilateral (such as Canada, the Netherlands, the UK and the US) and multilateral (such as UNDP, UNICEF and the World Bank), are adopting RBM with the aim of improving programme and management effectiveness and accountability and achieving results.

    RBM is fundamental to the Fund’s approach and practice in fulfilling its mandate and effectively providing assistance to developing countries. At UNFPA, RBM means:

    • Establishing clear organizational vision, mission and priorities, which are translated into a four-year framework of goals, outputs, indicators, strategies and resources (MYFF);
    • Encouraging an organizational and management culture that promotes innovation, learning, accountability, and transparency;
    • Delegating authority and empowering managers and holding them accountable for results;
    • Focusing on achieving results, through strategic planning, regular monitoring of progress, evaluation of performance, and reporting on performance;
    • Creating supportive mechanisms, policies and procedures, building and improving on what is in place, including the operationalization of the logframe;
    • Sharing information and knowledge, learning lessons, and feeding these back into improving decision-making and performance;
    • Optimizing human resources and building capacity among UNFPA staff and national partners to manage for results;
    • Making the best use of scarce financial resources in an efficient manner to achieve results;
    • Strengthening and diversifying partnerships at all levels towards achieving results;
    • Responding to the realities of country situations and needs, within the organizational mandate.

    OECD report: Results Based Management in the Development Co-operation Agencies: A Review of Experience – Background Report

    In order to respond to the need for an overview of the rapid evolution of RBM, the DAC Working Party on Aid Evaluation initiated a study of performance management systems. The ensuing draft report was presented to the February 2000 meeting of the WP-EV and the document was subsequently revised. It was written by Ms Annette Binnendijk, consultant to the DAC WP-EV.

    This review constitutes the first phase of the project; a second phase involving key informant interviews in a number of agencies is due for completion by November 2001.

    158 pages; 12-page conclusion.


    This list has a long way to go…!

    The road to nowhere? Results based management in international cooperation

    Howard White provides a critique of this approach.


    Results-based management has become a fact of life for development agencies. They might hope to learn from the experience of the US Agency for International Development (USAID), which has already gone down this road. It is indeed instructive that USAID has come back up the road again saying ‘there’s nothing down there’. But development agencies have rightly been criticised in the past for paying too little attention to the final impact of their activities, so we would like to support a results-based approach. However, we should not do so blindly when it suffers from the severe limitations outlined below. Serious attempts to link agency performance to developmental outcomes must rely upon the log-frame. The log-frame is not a universal panacea but, used properly, it can force agencies into a critical examination of the nature of their programmes and projects, and the results they achieve.

    This posting is available in full at the Euforic website.

    M&E blogs: A List

    • EvalThoughts, by Amy Germuth, President of EvalWorks, LLC, a woman-owned small evaluation and survey research consulting business in Durham, NC, United States.
    • Evaluation and Benchmarking. “This weblog is an on-line workspace for the whole of Victorian government Benchmarking Community of Practice.”
    • M&E Blog, by…?
    • Aid on the Edge of Chaos, by Ben Ramalingam
    • Design, Monitoring and Evaluation, by Larry Dershem, Tbilisi, Georgia
    • Managing for Impact: About “Strengthening Management for Impact” for MFIs
    • Genuine Evaluation: “Patricia J Rogers and E Jane Davidson blog about real, genuine, authentic, practical evaluation”
    • Practical Evaluation, by Samuel Norgah
    • AID/IT M&E Blog: “…is written by Paul Crawford, and is part of a wider AID/IT website”
    • Evaluateca: Spanish-language evaluation blog maintained by Rafael Monterde Diaz. Information, news, views and critical comments on evaluation.
    • Empowerment Evaluation Blog: “This is a place for exchanges and discussions about empowerment evaluation practice, theory, and current debates in the literature.” Run by Dr. David Fetterman.
    • E-valuation: “constructing a good life through the exploration of value and valuing”, by Sandra Mathison, Professor, Faculty of Education, University of British Columbia.
    • Intelligent Measurement. This blog is created by Richard Gaunt in London and Glenn O’Neil in Geneva and focuses on evaluation and measurement in communications, training, management and other fields.
    • Managing for Impact: Let’s talk about MandE! “Welcome to the dedicated SMIP ERIL blog on M&E for managing for impact! An IFAD-funded Regional Programme, SMIP (Strengthening Management for Impact) is working with pro-poor initiatives in eastern & southern Africa to build capacities to better manage towards impact. It does so through training courses for individuals, technical support to projects & programmes, generating knowledge, providing opportunities for on-the-job training, and policy dialogue.”
    • MCA Monitor Blog “…is a part of CGD’s MCA Monitor Initiative, which tracks the effectiveness of the US Millennium Challenge Account. Sheila Herrling, Steve Radelet and Amy Crone, key members of CGD’s MCA Monitor team, contribute regularly to the blog. We encourage you to join the discussion by commenting on any post”
    • OutcomesBlog.Org: “Dr Paul Duignan on real world strategy, outcomes, evaluation & monitoring.” Dr Paul Duignan is a specialist in outcomes, performance management, strategic decision making, evaluation and assessing research and evidence as the basis for decision making. He has developed the area of outcomes theory and its application in Systematic Outcomes Analysis, the outcomes software DoView, and the simplified approach to his work, Easy Outcomes. He works at an individual, organizational and societal level to develop ways of identifying and measuring outcomes which facilitate effective action. For a bio see here.
    • Rick on the Road: “Reflections on the monitoring and evaluation of development aid projects, programmes and policies, and development of organisations’ capacity to do the same. This blog also functions as the Editorial section of the MandE NEWS website.”
    • The Usable Blog: “Thoughts, ideas and resources for non-profit organizations and funders about the independent sector in general and program evaluation in particular”, by Eric Graig.
    • The MSC Translations blog is maintained by Rick Davies, and is part of the MandE NEWS website. The purpose of this blog is: 1. To make available translations of the MSC Guide in languages other than English. 2. To solicit and share comments on the quality of these translations, so they can be improved. The original English version can be found here: The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use.
    • Zen and the art of monitoring & evaluation: “This blog is some of the rambling thoughts of Paul Crawford, a monitoring & evaluation (M&E) consultant for international aid organisations.” Paul is based in Australia.

    And other lists of M&E blogs

    Improving health services through community score cards. A case study from Andhra Pradesh, India

    Case study 1, Andhra Pradesh, India: improving health services through community score cards
    Misra, Vivek et al., August 2007

    This eight-page note summarises the findings, processes, concerns, and lessons learned from a project in Andhra Pradesh – one of six pilot projects aimed at the application of specific social accountability tools in different contexts of service delivery.

    Systems Concepts in Evaluation: An Expert Anthology

    Bob Williams and Iraj Imam (eds.)
    EdgePress/American Evaluation Association (2007)

    Systems Concepts in Evaluation: An Expert Anthology brings together a wide range of systems concepts, methodologies and methods and applies them to evaluation settings. This book addresses the questions:

    • What is a systems approach?
    • What makes it different from other approaches?
    • Why is it relevant to evaluation?

    The 14 chapters cover a wide range of systems concepts and methods. Most chapters are case-study based and describe the use of systems concepts in real-life evaluations. The approaches and methods covered include:

    • System Dynamics (both quantitative and qualitative)
    • Cybernetics and the Viable System Model
    • Soft Systems Methodology
    • Critical Systems Thinking
    • Complex Adaptive Systems

    There are also overview chapters that explore the history and diversity of systems approaches and their potential within the evaluation field. There is a substantial introduction by Gerald Midgley to the key developments in systems concepts and methods over the past 50 years, which explores the implications for evaluation of each of those developments.

    Although focused on evaluation, the book is a valuable source for anyone interested in systems concepts, action research and reflective inquiry. It is useful for both teaching and practice.

    Chapters:
    • Introduction, Iraj Imam, Amy LaGoy, Bob Williams and authors
    • Systems Thinking for Evaluation, Gerald Midgley
    • A Systemic Evaluation of an Agricultural Development: A Focus on the Worldview Challenge, Richard Bawden
    • System Dynamics-based Computer Simulations and Evaluation, Daniel D Burke
    • A Cybernetic Evaluation of Organizational Information Systems, Dale Fitch, Ph.D.
    • Soft Systems in a Hardening World: Evaluating Urban Regeneration, Kate Attenborough
    • Using Dialectic Soft Systems Methodology as an Ongoing Self-evaluation Process for a Singapore Railway Service Provider, Dr Boon Hou Tay & Mr Bobby Kee Pong Lim
    • Evaluation Based on Critical Systems Heuristics, Martin Reynolds
    • Human Systems Dynamics: Complexity-based Approach to a Complex Evaluation, Glenda H Eoyang, Ph.D.
    • Evaluating Farm and Food Systems in the US, Kenneth A Meter
    • Systemic Evaluation in the Field of Regional Development, Richard Hummelbrunner
    • Evaluation in Complex Governance Arenas: the Potential of Large System Action Research, Danny Burns
    • Evolutionary and Behavioral Characteristics of Systems, Jay Forrest
    • Concluding Comments, Iraj Imam, Amy LaGoy, Bob Williams and authors

    PUBLICATION AND PURCHASE DETAILS

    NAME: Systems Concepts in Evaluation: An Expert Anthology
    EDITORS: Bob Williams and Iraj Imam
    PAGES: 222pp

    ISBN 978-0-918528-22-3 paperback
    ISBN 978-0-918528-21-6 hardbound

    PUBLISHER:

    EdgePress/American Evaluation Association (2007)

    PURCHASE

    Available via Amazon: hardback only, US$36 plus postage.

    Pathways for change: monitoring and evaluation

    This Brief is an edited summary, prepared by Susanne Turrall, of a paper written by Kath Pasteur and Susanne Turrall (2006): A synthesis of monitoring and evaluation experience in the Renewable Natural Resources Research Strategy.

    “Monitoring and evaluation (M&E) plays a central role in ensuring accountability, informing decision- making and, more broadly, facilitating learning. The programmes within the DFID-funded Renewable Natural Resources Research Strategy (RNRRS) have developed some innovative methods of M&E. The RNRRS also saw an evolution in thinking in M&E, moving from a focus on the M&E of research products to a recognition that the context and mechanisms for adoption of research products are equally important, as is the effect on poverty reduction.”

    Horizontal Evaluation: Fostering Knowledge Sharing and Program Improvement within a Network

    Authors: Thiele, Graham; Devaux, Andre; Velasco, Claudio; Horton, Douglas
    American Journal of Evaluation, v28 n4 p493-508 2007

    Abstract: Horizontal evaluation combines self-assessment and external evaluation by peers. Papa Andina, a regional network that works to reduce rural poverty in the Andean region by fostering innovation in potato production and marketing, has used horizontal evaluations to improve the work of local project teams and to share knowledge within the network. In a horizontal evaluation workshop, a project team and peers from other organizations independently assess the strengths and weaknesses of a research and development (R&D) approach being developed and then compare the assessments. Project team members formulate recommendations for improving the R&D approach, and peers consider ways to apply it back home. Practical results of horizontal evaluation have included strengthening the R&D approaches being developed, experimenting with their use at new sites, improvements in other areas of work, and strengthened interpersonal relations among network members.

    Also available as ILAC Brief: http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief13_Horizontal_Evaluation.pdf

    And a Spanish version of the same Brief.

    Evaluation of Citizens’ Voice & Accountability – Review of the Literature & Donor Approaches Report

    O’Neill, T., Foresti, M. and Hudson, A. (2007) Evaluation of Citizens’ Voice and Accountability: Review of the Literature and Donor Approaches. London: DFID.

    Excerpt

    1.3 A core group of DAC partners are collaborating on a joint evaluation of development aid for strengthening citizens’ voice and the accountability of public institutions. The Overseas Development Institute has been contracted to undertake the first stage of this evaluation, which involves the development and piloting of an evaluation framework. This literature review is the first output from this first phase. It aims to: (i) review the theoretical debates on voice and accountability and how they relate to development; (ii) review the different donor approaches to supporting voice and accountability and identify commonalities and differences across contexts; (iii) provide an overview of evaluation theory and practice in relation to voice and accountability interventions; and (iv) identify key knowledge gaps in relation to the effectiveness of donors in supporting voice and accountability.

    1.4 This review has three main sections. Section 2 surveys the academic literature to present current thinking on what voice and accountability means, how they operate in practice and how they relate to the achievement of broader development objectives. Section 3 turns to the donors’ own understanding of voice and accountability as set out in their relevant policy and guidance documents. It discusses how the donors see voice and accountability contributing to their poverty reduction mandates and what approaches they have adopted to strengthen them, including in different contexts. Section 4 considers the main issues relating to the evaluation of interventions to strengthen voice and accountability. It first reviews some of the methodological debates in the theoretical literature before summarising the donors’ own evaluative efforts in this field, identifying both common findings and key gaps in their knowledge.

    Contents:
    1. Introduction 1
    2. Voice and Accountability: A view from the literature 3
       Voice and accountability: a basic static model 3
       Voice and accountability: a complex dynamic reality 5
       Relating voice and accountability to other key concepts 6
       Voice, accountability and development outcomes 9
    3. Voice and accountability: A view from the donors 13
       Why do donors want to strengthen voice and accountability? 13
       What strategies do donors adopt for strengthening voice and accountability? 18
       Do donor approaches take account of context? 25
    4. Evaluating voice and accountability 29
       Approaches and frameworks for evaluating voice and accountability interventions 29
       What have donors learnt about their effectiveness? 36
    5. Conclusions 47
    Annexes 49
    References 53