A Review of Umbrella Fund Evaluation – Focusing on Challenge Funds

This is a Specialist Evaluation and Quality Assurance Service – Service Request Report, authored by Lydia Richardson with David Smith and Charlotte Blundy of TripleLine, in October 2015. A PDF copy is available.

As this report points out, Challenge Funds are a common means of funding development aid projects, but they have not received the evaluation attention they deserve. In this TripleLine study the authors collated information on 56 such funds. “One of the key findings was that of the 56 funds, only 11 (19.6%) had a document entitled ‘impact assessment’; of these, 7 have been published. Looking through these, only one (Chars Livelihood Programme) appears to be close to DFID’s definition of impact evaluation, although this programme is not considered to be a true challenge fund according to the definition outlined in the introduction. The others assess impact but do not necessarily fit DFID’s 2015 definition of impact evaluation.”

Also noted later in the text: “An email request for information on evaluation of challenge funds was sent to fund and evaluation managers. This resulted in just two responses from 11 different organisations. This verifies the finding that there is very little evaluation of challenge funds available in the public domain” … “Evaluation was in most cases not incorporated into the fund’s design”.

“This brief report focuses on the extent to which challenge funds are evaluable. It unpacks definitions of the core terms used and provides some analysis and guidance for those commissioning evaluations. The guidance is also relevant for those involved in designing and managing challenge funds to make them more evaluable.”

Contents:
1. Introduction
2. Methods used
2.1 Limitations of the review
3. Summary of findings of the scoping phase
3.1 Understanding evaluability
3.2 Typology for DFID Evaluations
4. Understanding the challenge fund aid modality
4.1 Understanding the roles and responsibilities in the challenge fund model
4.2 Understanding the audiences and purpose of the evaluation
4.3 Aligning the design of the evaluation to the design of the challenge fund
5. What evaluation questions are applicable?
5.1 Relevance
5.2 Efficiency
5.3 Effectiveness
5.4 Impact
5.5 Sustainability
6. The rigour and appropriateness of challenge fund evaluations
6.1 The use of theory of change
6.2 Is a theory based evaluation relevant and possible?
6.3 Measuring the counterfactual and assessing attribution
6.4 The evaluation process and institutional arrangements
6.5 Multi-donor funds
6.6 Who is involved?
7. How data can be aggregated
8. Working in fragile and conflict affected states
9. Trends
10. Gaps
11. Conclusions

Rick Davies Comment: While projects funded by Challenge Funds are often evaluated, sometimes as a requirement of their funding, it seems that the project selection and funding process itself is not given the same level of scrutiny. By this I mean the process whereby candidate projects are screened, assessed and then chosen, or not, for funding. This process involves consideration of multiple criteria, including adherence to legal requirements, strategic alignment and grantees’ capacity to implement activities and achieve objectives. This is where the Challenge Fund Theory of Change actually gets operationalised. It should be possible to test the (tacit and explicit) theory being implemented at this point by gathering data on the subsequent performance of the funded projects. There should be some form of consistent association between the attributes of highly rated project proposals (versus lowly rated proposals) and the scale of their achievements when implemented. If there is not, then it suggests that the proposal screening process is not delivering value and that random choice would be cheaper and just as effective. One experience I have had of this kind of analysis was not very encouraging: we could not find any consistent association between project attributes noted during project selection and the scale of subsequent achievement. But perhaps with more comprehensive recording of data collected at the project assessment stage the analysis might have delivered more encouraging results…
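To make that suggested check a little more concrete, here is a minimal sketch (mine, not drawn from the report or from the analysis mentioned above) of the simplest version of such a test: a rank correlation between the score a proposal received at appraisal and a measure of what the funded project later achieved. The file name and column names are hypothetical placeholders; real challenge fund data would need its own cleaning and a richer set of appraisal-stage attributes.

```python
# A minimal, illustrative sketch of testing whether proposal appraisal scores
# are associated with subsequent project achievement.
# The file name and column names below are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical input: one row per funded project, with the score it received
# at the proposal appraisal stage and a measure of the scale of its
# achievement recorded at (or after) completion.
df = pd.read_csv("funded_projects.csv")  # assumed columns: appraisal_score, achievement

# Keep only projects with both values so the rank correlation is well defined.
df = df.dropna(subset=["appraisal_score", "achievement"])

# Spearman's rank correlation: a simple test of whether higher-rated proposals
# tend to achieve more. A rho near zero (or a high p-value) would echo the
# concern that the screening process may not be adding predictive value.
rho, p_value = spearmanr(df["appraisal_score"], df["achievement"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f} (n = {len(df)})")
```

A fuller analysis would go beyond a single score and look at which recorded proposal attributes, if any, distinguish the stronger performers, which is exactly where more comprehensive recording at the assessment stage would help.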

PS: This report was done using 14 person days, which is a tight budget given the time needed to collate data, let alone analyse it. It is a good report, especially considering these time constraints.


