Do we need more attention to monitoring relative to evaluation?

The title of this post was prompted by my reading of Daniel Ticehurst’s paper (below), and by some of my reading of the literature on complexity theory and on data mining.

First, Daniel’s paper: Who is listening to whom, and how well and with what effect? Daniel Ticehurst, October 16th, 2012. 34 pages.

Abstract:

“I am a so called Monitoring and Evaluation (M&E) specialist although, as this paper hopefully reveals, my passion is monitoring. Hence I dislike the collective term ‘M&E’. I see them as very different things. I also dislike the setting up of Monitoring and especially Evaluation units on development aid programmes: the skills and processes necessary for good monitoring should be an integral part of management; and evaluation should be seen as a different function. I often find that ‘M&E’ experts, driven by donor insistence on their presence backed up by so-called evaluation departments with, interestingly, no equivalent structure, function or capacity for monitoring, over-complicate the already challenging task of managing development programmes. The work of a monitoring specialist, to avoid contradicting myself, is to help instil an understanding of the scope of what a good monitoring process looks like. Based on this, it is to support those responsible for managing programmes to work together in following this process through so as to drive better, not just comment on, performance.”

“I have spent most of my 20 years in development aid working on long term assignments mainly in various countries in Africa and exclusively on ‘M&E’ across the agriculture and private sector development sectors hoping to become a decent consultant. Of course, just because I have done nothing else but ‘M&E.’ does not mean I excel at both. However, it has meant that I have had opportunities to make mistakes and learn from them and the work of others. I make reference to the work of others throughout this paper from which I have learnt and continue to learn a great deal.”

“The purpose of this paper is to stimulate debate on what makes for good monitoring. It draws on my reading of history and perceptions of current practice, in the development aid and a bit in the corporate sectors. I dwell on the history deliberately as it throws up some good practice, thus relevant lessons and, with these in mind, pass some comment on current practice and thinking. This is particularly instructive regarding the resurgence of the aid industry’s focus on results and recent claims about how there is scant experience in involving intended beneficiaries and establishing feedback loops, in the agricultural sector anyway. The main audience I have in mind are not those associated with managing or carrying out evaluations. Rather, this paper seeks to highlight particular actions I hope will be useful to managers responsible for monitoring (be they directors in Ministries, managers in consulting companies, NGOs or civil servants in donor agencies who oversee programme implementation) and will improve a neglected area.”

 Rick Davies comment: Complexity theory writers seem to give considerable emphasis to the idea of constant change and the substantial unpredictability of complex adaptive systems (e.g. most human societies). Yet, surprisingly enough, we find more writing on complexity and evaluation than we do on complexity and monitoring. For a very crude bit of evidence, compare Google searches for “monitoring and complexity -evaluation” and “evaluation and complexity -monitoring”: there are literally twice as many search results for the second search string. This imbalance is strange because monitoring typically happens more frequently, and looks at smaller units of time, than evaluation. You would think it would be better suited to complex projects and settings. Is this because we have not had, in the past, the analytic tools needed to make best use of monitoring data? Is it also because the audiences for any use of the data have been quite small, limited perhaps to the implementing agency, their donor(s) and, at best, the intended beneficiaries? The latter should no longer be the case, given the global movement for greater transparency in the operations of aid programs, aided by continually widening internet access. In addition to the wide range of statistical tools suitable for hypothesis testing (generally under-utilised, even in their simplest forms, e.g. chi-square tests), there is now a range of data mining tools that are useful for more inductive pattern-finding purposes. (Dare I say it, but…) these are already in widespread use by big businesses to understand and predict their customers’ behaviour (e.g. their purchasing decisions). The analytic tools are there, and available in free open-source forms (e.g. RapidMiner).
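
To make the contrast concrete, here is a minimal sketch (not from the original post) of the two kinds of analysis mentioned above: a simple chi-square test of a single hypothesis, and a basic data-mining pass (a decision tree) that searches inductively for patterns in monitoring records. The field names and figures are invented for illustration, and the use of pandas, SciPy and scikit-learn is my own assumption; a tool such as RapidMiner offers equivalent operations through a graphical interface.

```python
# Illustrative sketch only: hypothetical monitoring data, one row per household.
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.tree import DecisionTreeClassifier, export_text

records = pd.DataFrame({
    "received_training": ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
    "female_headed":     ["yes", "no",  "yes", "no", "yes", "no", "no",  "yes"],
    "adopted_practice":  ["yes", "yes", "no",  "no", "yes", "no", "yes", "no"],
})

# 1. Hypothesis testing: is adoption of the promoted practice independent of training?
table = pd.crosstab(records["received_training"], records["adopted_practice"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# 2. Inductive pattern finding: let a decision tree search for combinations of
#    attributes associated with adoption, rather than testing one pre-set hypothesis.
features = pd.get_dummies(records[["received_training", "female_headed"]])
tree = DecisionTreeClassifier(max_depth=2).fit(features, records["adopted_practice"])
print(export_text(tree, feature_names=list(features.columns)))
```

With real monitoring data the same two steps scale up: routine reports supply the contingency tables for simple tests, while the inductive pass can surface unexpected groupings of beneficiaries or sites worth a closer look.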

11 thoughts on “Do we need more attention to monitoring relative to evaluation?”

  1. In the context of development interventions, regardless of the source of funding (i.e. international donor or government agency), monitoring essentially captures data on performance. Performance, as we know, must be (at least) measurable and attributable to agency efforts, to make sure that development interventions are happening or being realised as planned, and on time (i.e. within the intervention’s time frame). The best thing about combining monitoring and evaluation is that when the “monitoring side” records that these interventions are not happening, or are happening but falling short of (or exceeding) what had been targeted, the “evaluation side” is used to find the reasons. While the “monitoring side” takes 80% to 90% of an M&E Specialist’s work, the remaining 10% or so for the “evaluation side” is critical to keeping the intervention “in the right shape, focus and direction” as planned. The time allotted to evaluation can rise to as much as 20% of the Specialist’s time if the intervention requires re-tooling of its strategies to keep it on the right path towards its development hypothesis. Note that the “evaluation side” must take its course whenever targets are not met, or are met over and beyond the target.

  2. From my over 20 years of experience, I conclude that monitoring is a more important function than evaluation. We carry out expensive evaluations, but whatever mistakes or weaknesses have gone into a programme cannot be undone. The next programme will have a different environment, and people tend to forget. In practice, most evaluation reports go into cupboards; the benefit is not cost-effective. Powerful monitoring can be more beneficial.

  3. I agree with the abstract. It is very challenging in practice. Also, the skills required for monitoring are different from those required for evaluation. If M&E advisers are able to clearly differentiate between M and E, then I do not mind the job titles.

  4. Of course, each component of the PMEL (planning, monitoring, evaluation and learning) process has its own importance, and viewing them separately in a complex developmental context is surely helpful. The relative importance of Monitoring over Evaluation is clear from the fact that the former is an integral part of the programme implementation process, while Evaluation is mostly viewed as a third-eye view produced by people from outside. For the decision-making process (and for the Managers), learning the “whole truth” is necessary. However, their reliance is mostly on monitoring, which occupies the bulk of their time – a ratio of perhaps 90:10, with only ten percent for evaluation. Unfortunately, though, this ratio does not reflect the true ‘effort’ levels dedicated by programme management. Doesn’t the abbreviation PMEL signify the order in which the actions are initiated? If one likes, one can reverse the order and abbreviate it as LEMP – signifying the order of relative value of the actions to a manager.

    I think Daniel Ticehurst’s experience matches mine, in that the larger aid programmes, with rather sloppy “M&E” systems, tend to relegate much of the analytical responsibility to the external evaluators. It then often turns out that the pill left behind by the “irresponsible” evaluators is too bitter to digest.

  5. I do agree with what you are saying in this document, and it is a great strength that you can take corrective measures while things are being carried out. The project may not be in a place where you can check the programme as often as you would like.

    Tinsae Dubale P.O.Box 124 Wolaita Soddo Ethiopia
    Wolaita Kale Heywot Church- Terpeza Development Association

    PPME Department Head

  6. In light of Rick Davies’ comments on the abstract: do you all think that the emphasis will shift toward monitoring when more highly developed, user-friendly applications become readily available with limited resource requirements? I mean, you can’t exactly custom-build a data-mining structure and operational dashboards on open-source software if you have limited resources and a lack of trained staff, right?

  7. It is great to see that some other people are starting to look into this as well. I am currently working on some similar research into the dynamics between monitoring and evaluation. Like the post above, I also dislike the combined term, as there is a clear directional focus (if not a bias) towards evaluation. As part of my research I have built up some reasonable evidence to support this view. Without going into detail here, the reality is that evaluations are just an option, but monitoring is not. If we get our monitoring right, then evaluations – if needed – will be much easier, cheaper and more informative.

  8. Frankly speaking, I am still green about M&E. Nevertheless, based on my nearly one year of work experience as an M&E Associate in an international agency in Timor-Leste, I tend to agree with Mr. Daniel’s idea. Though they are inter-linked, monitoring should be highly emphasised, as it is in the monitoring activities that results, challenges and early lessons are identified. Evaluators do their work after something has already happened, which is sometimes too late to fix, particularly when it comes to higher-level programmes. Having said this, I would like to ask if someone could tell me what types of project or programme are subject to internal or external evaluation, I mean in terms of nominal value: is it above USD 10,000, 50,000 or more? And who decides it, the donor, the recipient, or the community as the final beneficiary?
    Many thanks
    Herdade

  9. Hi Herdade, in response to your question about who ‘decides’ on the type of evaluation – this should not, in general, be based on some generic budget figure like USD 50,000. Evaluations should be determined by the intended outcomes of the project. Having said that, it is logical that a project with smaller funding would most likely have a more condensed evaluation, given its smaller scale. Some projects may not need an evaluation at all – if the achievements and lessons learnt can be picked up during the project and then summarised.

    In the end, the planning for an evaluation should be set out and reasoned in your M+E plan. This plan should involve all stakeholders and participant representatives in determining whether an evaluation is needed and, if so, what type of evaluation it will be. Key decisions on evaluations – such as how the monitoring will link to and support any planned evaluation – should be made at the start of the project, rather than left until the end.
    Regards, Murray

Comments?

