Do we need more attention to monitoring relative to evaluation?

This post's title was prompted by my reading of Daniel Ticehurst's paper (below), and by some of my reading of the literature on complexity theory and on data mining.

First, Daniel's paper: "Who is listening to whom, and how well and with what effect?" Daniel Ticehurst, October 16th, 2012. 34 pages.

Abstract:

“I am a so called Monitoring and Evaluation (M&E) specialist although, as this paper hopefully reveals, my passion is monitoring. Hence I dislike the collective term ‘M&E’. I see them as very different things. I also dislike the setting up of Monitoring and especially Evaluation units on development aid programmes: the skills and processes necessary for good monitoring should be an integral part of management; and evaluation should be seen as a different function. I often find that ‘M&E’ experts, driven by donor insistence on their presence backed up by so-called evaluation departments with, interestingly, no equivalent structure, function or capacity for monitoring, over-complicate the already challenging task of managing development programmes. The work of a monitoring specialist, to avoid contradicting myself, is to help instil an understanding of the scope of what a good monitoring process looks like. Based on this, it is to support those responsible for managing programmes to work together in following this process through so as to drive better, not just comment on, performance.”

“I have spent most of my 20 years in development aid working on long term assignments mainly in various countries in Africa and exclusively on ‘M&E’ across the agriculture and private sector development sectors hoping to become a decent consultant. Of course, just because I have done nothing else but ‘M&E.’ does not mean I excel at both. However, it has meant that I have had opportunities to make mistakes and learn from them and the work of others. I make reference to the work of others throughout this paper from which I have learnt and continue to learn a great deal.”

"The purpose of this paper is to stimulate debate on what makes for good monitoring. It draws on my reading of history and perceptions of current practice, in the development aid and a bit in the corporate sectors. I dwell on the history deliberately as it throws up some good practice, thus relevant lessons and, with these in mind, pass some comment on current practice and thinking. This is particularly instructive regarding the resurgence of the aid industry's focus on results and recent claims about how there is scant experience in involving intended beneficiaries and establishing feedback loops, in the agricultural sector anyway. The main audience I have in mind are not those associated with managing or carrying out evaluations. Rather, this paper seeks to highlight particular actions I hope will be useful to managers responsible for monitoring (be they directors in Ministries, managers in consulting companies, NGOs or civil servants in donor agencies who oversee programme implementation) and will improve a neglected area."

Rick Davies comment: Complexity theory writers give considerable emphasis to the idea of constant change and the substantial unpredictability of complex adaptive systems (e.g. most human societies). Yet, surprisingly, we find more writing on complexity and evaluation than on complexity and monitoring. For a very crude bit of evidence, compare Google searches for "monitoring and complexity -evaluation" and "evaluation and complexity -monitoring": the second search string returns twice as many results. This imbalance is strange, because monitoring typically happens more frequently, and looks at smaller units of time, than evaluation. You would think it would be better suited to complex projects and settings.

Is this because, in the past, we have not had the analytic tools needed to make best use of monitoring data? Is it also because the audiences for any use of that data have been quite small, limited perhaps to the implementing agency, their donor(s) and, at best, the intended beneficiaries? The latter should no longer be the case, given the global movement for greater transparency in the operations of aid programs, aided by continually widening internet access. In addition to the wide range of statistical tools suitable for hypothesis testing (generally under-utilised, even in their simplest forms, e.g. chi-square tests), there is now a range of data mining tools useful for more inductive pattern-finding purposes. (Dare I say it, but…) These are already in widespread use by big businesses to understand and predict their customers' behaviour (e.g. their purchasing decisions). The analytic tools are there, and available in free open source forms (e.g. RapidMiner).
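To make the first point concrete, here is a minimal sketch in Python of a chi-square test applied to monitoring data, assuming SciPy is available. The 2x2 table of households (extension visit received vs. new practice adopted) is invented purely for illustration.

```python
# A minimal sketch: chi-square test of independence on monitoring data.
# The counts below are hypothetical, not from any real programme.
from scipy.stats import chi2_contingency

# Rows: households visited / not visited by extension workers
# Columns: adopted the promoted practice / did not adopt
observed = [[45, 15],
            [20, 40]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```

A small p-value here would suggest that adoption is not independent of extension visits: exactly the kind of question routine monitoring data can answer between evaluations, with a few lines of analysis.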
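And for the second point, a sketch of inductive pattern finding using a decision tree classifier, the kind of learner RapidMiner and similar open source tools offer. This version uses Python's scikit-learn rather than RapidMiner itself, and every field name and record is hypothetical monitoring data.

```python
# A minimal sketch: learn "if-then" rules from monitoring records.
# All records and field names are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row describes one household:
# [household size, distance to market (km), training sessions attended]
X = [[4, 2, 3], [6, 10, 0], [3, 1, 5], [5, 8, 1],
     [2, 3, 4], [7, 12, 0], [4, 2, 2], [6, 9, 1]]
# Outcome recorded by field staff: 1 = adopted the new practice, 0 = did not
y = [1, 0, 1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=[
    "household_size", "distance_km", "sessions_attended"]))
```

The printed rules (here, a split on distance to market) are the kind of inductively discovered patterns that can then be checked against field staff knowledge, or framed as hypotheses for the next round of monitoring.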