This posting is overdue. The Donor Committee for Enterprise Development (DCED) has been producing a lot of material on results management this year. Here are some of the items I have seen.
- Practical Guidelines for Measuring Results in Women’s Economic Empowerment in Private Sector Development, 2014
- Assessing Systemic Change, 2014
- Measuring Job Creation in Private Sector Development, 2014
- Demonstrating Additionality in Private Sector Development Initiatives: A Practical Exploration of Good Practice for Challenge Funds and Other Cost-Sharing Mechanisms, 2014
Of particular interest to me is the DCED Standard for Results Measurement. According to Jim Tanburn, there are now about 60-70 programmes using the standard. Associated with this is an auditing service offered by the DCED; from what I can see, nine programmes have been audited so far. Given the scale and complexity of the standards, the question in my mind, and probably that of others, is whether their use makes a significant difference to the performance of the programmes that have implemented them. Are they cost-effective?
This would not be an easy question to answer in any rigorous fashion, I suspect. There are likely to be many case-specific accounts of where and how the standards have helped improve performance, and perhaps some of where they have not helped or have even hindered. Some accounts are already available via the Voices from the Practitioners part of the DCED website.
The challenge would be how to aggregate judgements about impacts on a diverse range of programmes in a variety of settings. This is the sort of situation where one is looking for the “effects of a cause”, rather than “the causes of an effect”, because there is a standard intervention (adoption of the standards) but one which may have many different effects. A three-step process might be feasible, or at least worth exploring:
1. Rank programmes in terms of the degree to which they have successfully adopted the standards. This should be relatively easy, given that there is a standard auditing process.
2. Rank programmes in terms of the relative observed/reported effects of the standards. This will be much more difficult because of the apples-and-pears nature of the impacts, but I have been exploring a way of doing so here: Pair comparisons: For where there is no common outcome measure? Another difficulty, which may be surmountable, is that “all the audit reports remain confidential and DCED will not share the contents of the audit report without seeking permission from the audited programmes”.
3. Look for the strength and direction of the correlation between the two measures, and for outliers (poor adoption/big effects, good adoption/few effects) where lessons could be learned.
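To make the third step a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the programme names, the two sets of ranks (imagined outputs of steps 1 and 2), and the rank-gap threshold used to flag outliers. The spearman_rho function is just a bare-bones Spearman rank correlation, which suits this situation because both inputs are rankings rather than a common outcome measure.

```python
def spearman_rho(rank_x, rank_y):
    """Spearman rank correlation for two lists of ranks (assumes no ties)."""
    n = len(rank_x)
    d_squared = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical data: rank 1 = best adoption of the standards / largest reported effects
programmes    = ["A", "B", "C", "D", "E", "F"]
adoption_rank = [1, 2, 3, 4, 5, 6]   # from audit results (step 1)
effect_rank   = [2, 1, 6, 3, 5, 4]   # from pair comparisons (step 2)

rho = spearman_rho(adoption_rank, effect_rank)
print(f"Spearman's rho: {rho:.2f}")

# Flag outliers: programmes whose adoption and effect ranks diverge most,
# i.e. good adoption / few effects, or poor adoption / big effects.
threshold = 2
for name, a, e in zip(programmes, adoption_rank, effect_rank):
    if abs(a - e) >= threshold:
        kind = "good adoption / few effects" if a < e else "poor adoption / big effects"
        print(f"Outlier: programme {name} ({kind}, rank gap {abs(a - e)})")
```

With this illustrative data the correlation is positive but modest, and programme C stands out as a case of good adoption but few reported effects, exactly the kind of outlier where lessons might be learned.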