Monitoring the composition and evolution of the research networks of the CGIAR Research Program (RTB)

“The ILAC Initiative of the CGIAR has been working in partnership with the CGIAR Research Program on Roots, Tubers and Bananas (RTB) on a study that mapped the RTB research network.

The study aimed to design and test a monitoring system to characterize the research networks through which research programs' activities are conducted. This information is an important tool for the adaptive management of the CGIAR Research Programs and a complement to the CGIAR management system. With a few adaptations, the monitoring system can be useful for a wide range of organizations, including donors, development agencies and NGOs.

The next activity of the RTB-ILAC partnership will be the development of procedures to monitor how the research networks change over time.

ILAC has produced a full report of the study, as well as a Brief with more condensed information.

- Full report: Ekboir, J., Canto, G.B. and Sette, C. (2013) Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB). Series on Monitoring Research Networks No. 01. Rome: Institutional Learning and Change (ILAC) Initiative.

- Brief: Ekboir, J., Canto, G.B. and Sette, C. (2013) Monitoring the composition and evolution of the research networks of the CGIAR Research Program on Roots, Tubers and Bananas (RTB). ILAC Brief No. 27. Rome: Institutional Learning and Change (ILAC) Initiative.”

Real Time Monitoring for the Most Vulnerable

Greeley, M., Lucas, H. and Chai, J. (eds). IDS Bulletin 44.2. Publisher: IDS.


View abstracts online and subscribe to the IDS Bulletin.

“Growth in the use of real time digital information for monitoring has been rapid in developing countries across all the social sectors, and in the health sector it has been remarkable. Commonly these Real Time Monitoring (RTM) initiatives involve partnerships between the state, civil society, donors and the private sector. There are differences between partners in their understanding of objectives, and divergence occurs because specific technology-driven approaches are adopted and because profit-making is sometimes part of the equation.

With the swarming, especially of pilot mHealth initiatives, in many countries there is a risk of chaotic disconnects, of confrontation between rights and profits, and of overall failure to encourage appropriate alliances to build sustainable and effective national RTM systems. What is needed is a country-led process for strengthening the quality and equity sensitivity of real-time monitoring initiatives. We propose the development of an effective learning and action agenda centred on the adoption of common standards.

IDS, commissioned and guided by the UNICEF Division of Policy and Strategy, has carried out a multi-country assessment of initiatives that collect high-frequency and/or time-sensitive data on risk, vulnerability and access to services among vulnerable children and populations, and on the stability and security of livelihoods affected by shocks. The study, entitled Real Time Monitoring for the Most Vulnerable (RTMMV), began with a desk review of existing RTM initiatives and was followed up with seven country studies (Bangladesh, Brazil, Romania, Senegal, Uganda, Vietnam and Yemen) that further explored and assessed promising initiatives through field-based review and interactive stakeholder workshops. This IDS Bulletin brings together key findings from this research.”

See the full list of papers on this topic at the IDS Bulletin.

Sustainable development: A review of monitoring initiatives in agriculture

(from DFID website)

A new report has just been released on the Review of the Evidence on Indicators, Metrics and Monitoring Systems. Led by the World Agroforestry Centre (ICRAF) under the auspices of the CGIAR Research Program on Water, Land and Ecosystems (WLE), the review examined monitoring initiatives related to the sustainable intensification of agriculture. Designed to inform future DFID research investments, the review assessed both biophysical and socioeconomic monitoring efforts.

With the aim of generating insights to improve such systems, the report focuses upon key questions facing stakeholders today:

  1. How to evaluate alternative research and development strategies in terms of their potential impact on productivity, environmental services and welfare goals, including trade-offs among these goals?
  2. How to cost-effectively measure and monitor actual effectiveness of interventions and general progress towards achieving sustainable development objectives?

An overriding lesson outlined in the report was the surprising lack of evidence for the impact of monitoring initiatives on decision-making and management. There are thus important opportunities for increasing the returns on these investments by better integrating monitoring systems with development decision processes, thereby increasing impacts on development outcomes. The report outlines a set of recommendations for good practice in monitoring initiatives…

DFID welcomes the publication of this review. The complexity of the challenges which face decision makers aiming to enhance global food security is such that evidence (i.e. metrics) of what is working and what is not is essential. This review highlights an apparent disconnection between what is measured and what is required by decision-makers. It also identifies opportunities for a way forward. Progress will require global co-operation to ensure that relevant data are collected and made easily accessible.

DFID is currently working with G8 colleagues on the planning for an international conference on Open Data to be held in Washington DC from 28th to 30th April 2013. The topline goal for the initiative is to obtain commitment and action from nations and relevant stakeholders to promote policies and invest in projects that open access to publicly funded global agriculturally relevant data streams, making such data readily accessible to users in Africa and world-wide, and ultimately supporting a sustainable increase in food security in developed and developing countries. Examples of the innovative use of data which is already easily available will be presented, as well as more in-depth talks and discussion on data availability, demand for data from Africa and on technical issues. Data in this context ranges from the level of the genome through the level of yields on farm to data on global food systems.


Do we need more attention to monitoring relative to evaluation?

This post title was prompted by my reading of Daniel Ticehurst’s paper (below), and some of my reading of literature on complexity theory and on data mining.

First, Daniel’s paper: Who is listening to whom, and how well and with what effect? Daniel Ticehurst, 16 October 2012. 34 pages.


“I am a so called Monitoring and Evaluation (M&E) specialist although, as this paper hopefully reveals, my passion is monitoring. Hence I dislike the collective term ‘M&E’. I see them as very different things. I also dislike the setting up of Monitoring and especially Evaluation units on development aid programmes: the skills and processes necessary for good monitoring should be an integral part of management; and evaluation should be seen as a different function. I often find that ‘M&E’ experts, driven by donor insistence on their presence backed up by so-called evaluation departments with, interestingly, no equivalent structure, function or capacity for monitoring, over-complicate the already challenging task of managing development programmes. The work of a monitoring specialist, to avoid contradicting myself, is to help instil an understanding of the scope of what a good monitoring process looks like. Based on this, it is to support those responsible for managing programmes to work together in following this process through so as to drive better, not just comment on, performance.”

“I have spent most of my 20 years in development aid working on long term assignments mainly in various countries in Africa and exclusively on ‘M&E’ across the agriculture and private sector development sectors hoping to become a decent consultant. Of course, just because I have done nothing else but ‘M&E.’ does not mean I excel at both. However, it has meant that I have had opportunities to make mistakes and learn from them and the work of others. I make reference to the work of others throughout this paper from which I have learnt and continue to learn a great deal.”

“The purpose of this paper is to stimulate debate on what makes for good monitoring. It draws on my reading of history and perceptions of current practice, in the development aid and a bit in the corporate sectors. I dwell on the history deliberately as it throws up some good practice, thus relevant lessons and, with these in mind, pass some comment on current practice and thinking. This is particularly instructive regarding the resurgence of the aid industry’s focus on results and recent claims about how there is scant experience in involving intended beneficiaries and establishing feedback loops, in the agricultural sector anyway. The main audience I have in mind are not those associated with managing or carrying out evaluations. Rather, this paper seeks to highlight particular actions I hope will be useful to managers responsible for monitoring (be they directors in Ministries, managers in consulting companies, NGOs or civil servants in donor agencies who oversee programme implementation) and will improve a neglected area.”

Rick Davies comment: Complexity theory writers give considerable emphasis to the idea of constant change and substantial unpredictability in complex adaptive systems (e.g. most human societies). Yet, surprisingly, we find more writings on complexity and evaluation than on complexity and monitoring. For a very crude bit of evidence, compare Google searches for “monitoring and complexity -evaluation” and “evaluation and complexity -monitoring”: there are literally twice as many results for the second search string. This imbalance is strange because monitoring typically happens more frequently, and looks at smaller units of time, than evaluation; you would think it would be better suited to complex projects and settings. Is this because we have not had the analytic tools needed to make best use of monitoring data? Is it also because the audiences for any use of the data have been quite small, limited perhaps to the implementing agency, their donor(s) and, at best, the intended beneficiaries? The latter should no longer be the case, given the global movement for greater transparency in the operations of aid programs, aided by continually widening internet access. In addition to the wide range of statistical tools suitable for hypothesis testing (generally under-utilised, even in their simplest forms, e.g. chi-square tests), there is now a range of data mining tools useful for more inductive pattern-finding purposes. (Dare I say it, but…) These are already in widespread use by big businesses to understand and predict their customers’ behaviour (e.g. purchasing decisions). The analytic tools are there, and available in free open source forms (e.g. RapidMiner).
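To make the chi-square point concrete, here is a minimal sketch in Python (standard library only) of the simplest such test applied to monitoring data. The scenario, the counts and the group labels are entirely hypothetical illustrations, not data from any programme mentioned above; 3.84 is the standard 5% critical value for one degree of freedom.

```python
import math  # not strictly needed below, but typical for extending this sketch

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.
    table = [[a, b], [c, d]] of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical monitoring records: households adopting a promoted practice,
# split by whether they received extension visits.
observed = [[40, 60],   # visited:     adopted / not adopted
            [25, 75]]   # not visited: adopted / not adopted
stat = chi_square_2x2(observed)
# With 1 degree of freedom, stat > 3.84 rejects independence at the 5% level.
print(round(stat, 2))  # prints 5.13
```

Even a test this simple turns routine monitoring counts into an explicit claim that can be checked rather than merely reported.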

Integrated Monitoring: A Practical Manual for Organisations That Want to Achieve Results

Written by Sonia Herrero, InProgress, Berlin, April 2012. 43 pages. Available as pdf.

“The aim of this manual is to help those working in the non-profit sector — non-governmental organisations (NGOs) and other civil society organisations (CSOs) — and the donors which fund them, to observe more accurately what they are achieving through their efforts and to ensure that they make a positive difference in the lives of the people they want to help. Our interest in writing this guide has grown out of the desire to help bring some conceptual clarity to the concept of monitoring and to determine ways in which it can be harnessed and used more effectively by non-profit practitioners.

The goal is to help organisations build monitoring and evaluation into all their project management efforts. We want to demystify the monitoring process and make it as simple and accessible as possible. We have made a conscious choice to avoid technical language, and instead use images and analogies that are easier to grasp. There is a glossary at the end of the manual which contains the definitions of any terms you may be unfamiliar with. This manual is organised into two parts. The first section covers the ‘what’ and ‘why’ of monitoring and evaluation; the second addresses how to do it.”

These materials may be freely used and copied by non-profit organisations for capacity building purposes, provided that inProgress and authorship are acknowledged. They may not be reproduced for commercial gain.

1. What is Monitoring?
2. Why Do We Monitor and For Whom?
3. Who is Involved?
4. How Does it Work?
5. When Do We Monitor?
6. What Do We Monitor?
6.1 Monitoring What We Do

II. HOW DO WE MONITOR?
1. Steps for Setting Up a Monitoring System
2. How to Monitor the Process and the Outputs
3. How to Monitor the Achievements
3.1 Define Results/Outcomes
3.2 Define Indicators for Results
4. Prepare a Detailed Monitoring Plan
5. Identify Sources of Information
6. Data Collection
6.1 Tools for Data Compilation
7. Reflection and Analysis
7.1 Documenting and Sharing
8. Learning and Reviewing
8.1 Learning
8.2 Reviewing
9. Evaluation

Monitoring Policy Dialogue: Lessons From A Pilot Study

By Sadie Watson and Juliet Pierce. September 2008. Department for International Development. Evaluation Report WP27.

Executive Summary

In 2007, a tool and process were developed for improving the recording and impact of policy dialogue initiatives across DFID, based on an adaptation of current project cycle management (PCM) requirements for programme spending. A pilot was devised to test the proposed tool and process in terms of:

• Assessing the value of recording and monitoring policy-related activities in a similar way to spend activities;

• Finding the most effective and useful approach in terms of process;

• Identifying succinct ways to capture intentions and to measure performance;

• Clarifying the type and level of support and guidance required to roll the process out across DFID.

The ten participating pilot teams represented different aspects of DFID’s policy work, conducting different types of policy dialogue activities. The consultants were asked to monitor and evaluate the six month pilot. They were also asked to review approaches to managing and monitoring policy dialogue and influencing activities in other organisations. This report highlights some lessons and observations from the pilot. It outlines some emerging issues and provides some pointers for DFID to consider as it continues to develop into an organisation where policy dialogue and influencing are increasingly important aid tools.

Can we obtain the required rigour without randomisation? Oxfam GB’s non-experimental Global Performance Framework

Karl Hughes, Claire Hutchings, August 2011. 3ie Working Paper 13. Available as pdf.

[found courtesy of @3ieNews]


“Non-governmental organisations (NGOs) operating in the international development sector need credible, reliable feedback on whether their interventions are making a meaningful difference but they struggle with how they can practically access it. Impact evaluation is research and, like all credible research, it takes time, resources, and expertise to do well, and – despite being under increasing pressure – most NGOs are not set up to rigorously evaluate the bulk of their work. Moreover, many in the sector continue to believe that capturing and tracking data on impact/outcome indicators from only the intervention group is sufficient to understand and demonstrate impact. A number of NGOs have even turned to global outcome indicator tracking as a way of responding to the effectiveness challenge. Unfortunately, this strategy is doomed from the start, given that there are typically a myriad of factors that affect outcome level change. Oxfam GB, however, is pursuing an alternative way of operationalising global indicators. Closing and sufficiently mature projects are being randomly selected each year among six indicator categories and then evaluated, including the extent each has promoted change in relation to a particular global outcome indicator. The approach taken differs depending on the nature of the project. Community-based interventions, for instance, are being evaluated by comparing data collected from both intervention and comparison populations, coupled with the application of statistical methods to control for observable differences between them. A qualitative causal inference method known as process tracing, on the other hand, is being used to assess the effectiveness of the organisation’s advocacy and popular mobilisation interventions. 
However, recognising that such an approach may not be feasible for all organisations, in addition to Oxfam GB’s desire to pursue complementary strategies, this paper also sets out several other realistic options available to NGOs to step up their game in understanding and demonstrating their impact. These include: 1) partnering with research institutions to rigorously evaluate “strategic” interventions; 2) pursuing more evidence informed programming; 3) using what evaluation resources they do have more effectively; and 4) making modest investments in additional impact evaluation capacity.”
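The comparison-group approach described above, in which statistical methods control for observable differences between intervention and comparison populations, can be sketched in miniature. The Python example below uses entirely made-up data (no Oxfam GB dataset, indicator or tooling is implied) and adjusts a raw treated-vs-comparison difference for one observed covariate via ordinary least squares:

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def adjusted_effect(treated, covariate, outcome):
    """OLS of outcome on [1, treated, covariate]; returns the treatment
    coefficient, i.e. the group difference adjusted for the covariate."""
    X = [[1.0, t, c] for t, c in zip(treated, covariate)]
    n, k = len(X), 3
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * outcome[i] for i in range(n)) for a in range(k)]
    return solve(XtX, Xty)[1]

# Hypothetical data: treated households start better off (higher baseline
# asset score x), so a raw comparison of means would overstate the impact.
T = [1, 1, 1, 1, 0, 0, 0, 0]
x = [8, 9, 7, 10, 3, 4, 2, 5]
y = [2 + 3*t + 0.5*xi for t, xi in zip(T, x)]  # true effect is 3
print(round(adjusted_effect(T, x, y), 2))  # prints 3.0
```

In this toy dataset the raw treated-vs-comparison gap in y is 5.5, while the covariate-adjusted estimate recovers the true effect of 3, which is exactly the kind of correction for observable differences the abstract describes.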

Reflexive Monitoring in Action: A guide for monitoring system innovation projects

“Researchers at Wageningen University and the VU University Amsterdam, the Netherlands, have been working together on a type of monitoring that they have called reflexive monitoring in action (RMA). RMA has been developed especially for projects that aim to contribute to the sustainable development of a sector or region by working on system innovation. Sustainable development demands simultaneous changes at many levels of society and in multiple domains: ecological, economic, political and scientific. It requires choices to be made that are radically different from the usual practices, habits, interrelationships and institutional structures. But that is precisely why it is not easy. System innovation projects therefore benefit from a type of monitoring that encourages the ‘reflexivity’ of the project itself, its ability to affect and interact with the environment within which it operates. If a project wants to realise the far-reaching ambitions of system innovation, then reflection and learning must be tightly interwoven within it. And that learning should focus on structural changes. RMA can contribute to this. In the guide (aimed at supporting the work of project managers, monitors and clients), the authors present the characteristics and the value of Reflexive Monitoring in Action, together with practical guidelines that will help put that monitoring into practice. At the end of the guide the authors provide detailed descriptions of seven monitoring tools.”

The guide can be freely downloaded in pdf format, in English or Dutch.

The guide is also available in a printed version (Dutch only) through Boxpress. Price: €49.95 (full colour) or €29.95 (black and white, with pictures in full colour).

Negotiated Learning: Collaborative Monitoring for Forest Resource Management

(via Pelican email list)

Dear all

Niels has asked me to make you aware of a new publication that some ‘Pelican-ers’ might find relevant.

I have edited a book on how learning and monitoring can become better ‘friends’ than is currently usually the case. The book comes off the press tomorrow. The full reference: Guijt, Irene, ed. (2007) Negotiated Learning: Collaborative Monitoring for Forest Resource Management. Washington DC: Resources for the Future/Center for International Forestry Research. Although the cases in the book focus on natural resource (forest) management, the issues about how to create genuine learning through the construction, negotiation and implementation of a monitoring process will have much wider relevance.

Full details on how to obtain the book can be found on the publisher’s website, where the book is described as follows:

“The first book to critically examine how monitoring can be an effective tool in participatory resource management, Negotiated Learning draws on the first-hand experiences of researchers and development professionals in eleven countries in Africa, Asia, and South America. Collective monitoring shifts the emphasis of development and conservation professionals from externally defined programs to a locally relevant process. It focuses on community participation in the selection of the indicators to be monitored as well as in the learning and application of knowledge from the data that are collected. As with other aspects of collaborative management, collaborative monitoring emphasizes building local capacity so that communities can gradually assume full responsibility for the management of their resources. The cases in Negotiated Learning highlight best practices but stress that collaborative monitoring is a relatively new area of theory and practice. The cases focus on four themes: the challenge of data-driven monitoring in forest systems that supply multiple products and serve diverse functions and stakeholders; the importance of building upon existing dialogue and learning systems; the need to better understand social and political differences among local users and other stakeholders; and the need to ensure the continuing adaptiveness of monitoring systems.”


Learning by Design

Bredeweg 31, 6668 AR Randwijk, The Netherlands
Tel. (0031) 488-491880 Fax. (0031) 488-491844