ICAI Seeks Views on Revised Evaluation Framework

 

 “In our first report, ICAI’s Approach to Effectiveness and Value for Money, we set out an evaluation framework, consisting of 22 questions under 4 guiding criteria (objectives, delivery, impact and learning), to guide our lines of enquiry in reviews. In the light of our experience to date in carrying out our reports, we have reviewed this framework. The revised framework is available at this link: ICAI revised evaluation framework

We are now entering a period of consultation on the revised framework which will run until 24 May 2013. If you have any comments or views, please email enquiries@icai.independent.gov.uk or post them to: The Secretariat, Independent Commission for Aid Impact, Dover House, 66 Whitehall, London SW1A 2AU”

Open consultation: Triennial review of the Independent Commission for Aid Impact (ICAI)

(Website that hosts the text below)

This consultation closes on 26 April 2013

On 21 March the Government announced the triennial review of the Independent Commission for Aid Impact (ICAI) and is seeking the views of stakeholders who wish to contribute to the Review. Triennial Reviews of Non-Departmental Public Bodies (NDPBs) are part of the Government’s commitment to review all NDPBs, with the aim of increasing accountability for actions carried out on behalf of the State.

The ICAI’s strategic aim is to provide independent scrutiny of UK aid spending, to promote the delivery of value for money for British taxpayers and to maximise the impact of aid.

The Review will be conducted in line with Cabinet Office principles and guidance, in two stages.

The first stage will:

  • Identify and examine the key functions of the ICAI and assess how these functions contribute to the core business of DFID;
  • Assess the requirement for these functions to continue given other scrutiny processes;
  • If continuing, assess how the key functions might best be delivered; if one of these options is continuing delivery through the ICAI, then make an assessment against the Government’s “three tests”: technical function; political impartiality; and the need for independence from Ministers.

If the outcome of stage one is that delivery should continue through the ICAI, the second stage of the review will:

  • Review whether ICAI is operating in line with the recognised principles of good corporate governance, using the Cabinet Office “comply or explain” standard approach.

In support of these aims we would welcome input and evidence from stakeholders, focused on these main questions:

ICAI’s functions

For the purposes of this review, we have defined ICAI’s key functions as follows:

  • Produce a wide range of independent, high quality/professionally credible and accessible reports (including evaluations, VfM reviews, investigations) setting out evidence of the impact and value for money of UK development efforts;
  • Work with and for Parliament to help hold the UK Government to account for its development programme, and make information on this programme available to the public;
  • Produce appropriately targeted recommendations to be implemented/followed up by HMG.

Which of these functions do you think are still needed? What would be the impact if ICAI ceased to exist?

Would you define ICAI’s functions differently?

Do you think any of the following delivery mechanisms would be more appropriate or cost-effective at delivering these functions: local government, the voluntary sector, the private sector, another existing body or DFID itself?

To date, do you think ICAI has focused on scrutinising UK aid spend or the wider HMG development effort? What do you think it should be doing?

Where do you think ICAI sits on the spectrum between audit and research? Is this where it should be?

How far can and should ICAI have a role in holding HMG to account?

Production of reports

What is the quality of ICAI reports? Is the expertise of those producing the reports appropriate? How does this compare to other scrutiny bodies that you know of?

How far does the methodology used by ICAI add value to other scrutiny of DFID programmes (e.g. IDC, NAO, DFID internal)?

How far does ICAI involve beneficiaries in its work?

What impact have ICAI reviews had on DFID staff and resources?

How independent do you believe ICAI is? How important do you think this independence is for ICAI’s ability to deliver its functions effectively?

How much of an impact do you think the Commissioners have on ICAI recommendations and reports? What added value do you think they bring? Do they have the right skillset?

Making information available to the IDC and the public

How important do you think ICAI’s role is in providing information about UK development to taxpayers?

What impact has ICAI had on public perceptions of UK development?

Production of targeted recommendations

What has been the added value of ICAI’s recommendations? How do these compare to other scrutiny bodies that you know of?

How far and why have recommendations been followed up?

What impact has ICAI had on DFID’s own approach to monitoring impact and value for money?

How far has ICAI promoted lesson learning in DFID?

General

Do you think ICAI could improve? If so, how?

Do you have any other comments?

The government is seeking the views of stakeholders on the Triennial Review of the Independent Commission for Aid Impact (ICAI).

Contact us by 26 April 2013

Write to us:

email
post
ICAI Review Team KB 2.2
22 Whitehall
London
SW1A 2EG

AusAID’s Information Publication Scheme: Draft Plan & Consultation

The draft plan, dated 12 April 2011, is now available in PDF and MS Word

Introduction

“AusAID is the Australian Government’s Agency for International Development, an executive agency within the Department of Foreign Affairs and Trade portfolio. Its primary role is the implementation and oversight of the Australian Government aid program. The aim of the program is to assist developing countries reduce poverty and achieve sustainable development, in line with Australia’s national interest.

Reforms to the Freedom of Information Act 1982 (FOI Act) have established the Information Publication Scheme (IPS). The purpose of the IPS is to give the Australian community access to information held by the Australian Government and enhance and promote Australia’s representative democracy by increasing public participation in government processes and increasing scrutiny, discussion, comment and review of government activities and decisions.

AusAID is committed to greater transparency through the implementation of the Information Publication Scheme (IPS) and other initiatives that will be introduced. As Australia’s ODA commitment has increased, public interest in the aid program has correspondingly increased and this will continue. Implementation of the IPS will provide more information to Australians about AusAID’s activities and help increase public participation, understanding and scrutiny of Australia’s aid program.

This draft plan has been prepared to assist AusAID implement the IPS, in accordance with section 8(1) of the Freedom of Information Act (FOI) 1982 and to give the Australian public the opportunity to comment and provide feedback on this plan.

As AusAID’s final plan is implemented it will be progressively updated in light of experience and feedback. The list of documents that is a core part of this plan will, in particular, be amended.”

The consultation: Visit this AusAID website to see how to participate and to read the views of others who have already contributed.

 

DfID Seeks Suggestions for Implementing Aid Transparency Initiative

On Devex, by Eliza Villarino, 6 September 2010

“The U.K. Department for International Development launches an online discussion to seek input on how it should implement the UKaid Transparency Guarantee.

The U.K. Department for International Development has opened an online discussion to help it decide how to implement its aid transparency initiative.

The UKaid Transparency Guarantee forms part of the coalition government’s commitment to boost the transparency of DfID aid. As reported by Devex, U.K. Secretary of State for International Development Andrew Mitchell announced the guarantee, along with the intention to create an independent aid watchdog, in June.

DfID is urging civil society groups, think tanks and other organizations working on transparency to send an e-mail to aidtransparency@dfid.gov.uk if they wish to contribute to the discussion.”

PS – 19th October 2010: A summary of the online discussion is now available here as a pdf: 2010 Summary of Huddle Discussions on UKATG

New DFID policy on Evaluation

“DFID takes very seriously the responsibility to ensure high quality, independent evaluation of its programmes, to provide reliable and robust evidence to improve the value of its global work to reduce poverty.

In December 2007 the Independent Advisory Committee on Development Impact was established to help DFID strengthen its evaluation processes. The Committee is there to work with DFID to:

  • Determine which programmes and areas of UK development assistance will be evaluated and when;
  • Identify any gaps in the planned programme of evaluations and make proposals for new areas or other priorities as required;
  • Determine whether relevant standards (e.g. of the OECD Development Assistance Committee) are being applied; and comment on the overall quality of the programme of evaluation work carried out against these.

DFID and IACDI have therefore been working closely together to define a new policy which will set the course for evaluation in the future. We have also produced a ‘topic list’ of potential areas for evaluation over the coming 3 years. So you will see here two documents on which we would like your feedback: the Draft Evaluation Policy and the Evaluation Topic List.

Central to the policy is the emphasis on greater independence of evaluation, along with stronger partnership working, reflecting global commitments to harmonisation, decentralising evaluation to a greater degree, driving up quality, and ensuring that learning from evaluation contributes to future decision making. We would like you to consider those high level issues when offering your comment and feedback during the time the consultation process is open. This document does not focus on the operational issues; they will be considered in a separate DFID strategy document.

During the consultation period, we would also like to hear your views on which topics you consider to be the greatest priority and why. This will help DFID to make decisions on which are to be given the highest priority.

In summary the issues we are particularly keen for you to focus your feedback on are:

1. The definition of ‘independent evaluation’ – what are your thoughts on the policy approach of DFID, working increasingly with partners, to increase independence in evaluation?

2. What are your views on what’s required to drive up quality across the board in evaluation of international development programmes? What role do you think DFID can most valuably play in this?

3. What are the considerations for DFID strengthening its own evaluation processes, whilst ensuring its commitments to harmonisation remain steadfast?

4. DFID is determined to increase the value of learning from evaluation to inform policy – what are your thoughts on the means to bring this about?

5. DFID is committed to consulting stakeholders during our evaluations, including poor women and men affected by our programmes. Getting representative stakeholders, especially for evaluations which go beyond specific projects and programmes, can often be challenging (for example evaluations of country assistance plans or thematic evaluations). Do you have any ideas on how to improve this?

6. DFID is committed to developing evaluation capacity in partner countries and increasing our use of national systems. What are your thoughts on the challenges and ways forward?

Please send your feedback to evaluationfeedback@dfid.gov.uk. The public consultation will officially close on Tuesday 3rd March but we would appreciate comments as early as possible, so that they can be considered as the operational issues are further thought out.”

ParEvo – a web-assisted participatory scenario planning process

The purpose of this page

…is to record some ongoing reflections on my experience of running two pre-tests of ParEvo carried out in late 2018 and early 2019.

Participants and others are encouraged to add their own comments, by using the Comment facility at the bottom of this page.

Two pre-tests are underway

  • One involves 11 participants developing a scenario involving the establishment of an MSC (Most Significant Change) process in a development programme in Nigeria. These volunteers were found via the MSC email list. They came from 7 countries and 64% were women.
  • The other involves 11 participants developing a Brexit scenario following Britain failing to reach an agreement with the EU by March 2019. These participants were found via the MandE NEWS email list. They came from 9 countries and 46% were women.

For more background (especially if you have not been participating) see this 2008 post on the process design and this 2019 conference abstract talking about these pre-tests.

Reflections so far

Issues arising…

  1. How many participants should there be?
    • In the current pre-tests, I have limited the number to around 10. My concern is that with larger numbers there will be too many story segments (and their storylines) for people to scan and make a single preferred selection. But improved methods of visualising the text contributions may help overcome this limitation. Another option is to allow/encourage individual participants to represent teams of people, e.g. different stakeholder groups. I have not yet tried this out.
  2. Do the same participants need to be involved in each iteration of the process?
    1. My initial concern was that not doing so would make some of the follow-up quantitative analysis more difficult, but I am not so concerned about that now; it’s a manageable problem. On the other hand, it is likely that some people will have to drop out mid-process, and ideally, they could be replaced by others, thus maintaining the diversity of storylines.
  3. How do you select an appropriate topic for a scenario planning exercise?
    1. Ideally, it would be a topic that was of interest to all the participants and one which they felt some confidence in talking about, even if only in terms of imagined futures. One pre-test topic, the use of MSC in Nigeria, was within these bounds. But the other was more debatable: the fate of the UK after no resolution of Brexit terms by 29th March 2019.
  4. How should you solicit responses from participants?
    1. I started by sending a standard email to all the (MSC scenario) participants, but this has been cumbersome and has risks. It is too easy to lose track of who contributed what text, to add to what existing storyline. I am now using a two-part, single-question survey via SurveyMonkey. This enables me to keep a mistake-free record of who contributed what to what, and of who has responded and who has not. But this still involves sending multiple communications, including reminders, and I have sometimes confused what I am sending to whom. A more automated system is definitely needed.
  5. How should you represent and share participants responses?
    1. This has been done in two forms. One is a tree diagram, showing all storylines, where participants can mouse over nodes to immediately see each text segment. Or they can click on each node to go to a separate web page and see complete storylines. These are both laborious to construct, but hopefully they will soon be simplified and automated via some tech support which is now under discussion (a minimal sketch of one possible way of storing and linking segments appears after this list). PS: I have now resorted to only using the tree diagram with mouseover.
  6. Should all contributions be anonymous?
    1. There are two types of contributions: (a) the storyline segments contributed during each iteration of the process, and (b) comments made on these contributions, which can be enabled on the blog page that hosts each full storyline to date. The second type was an afterthought, whereas the first is central to the process.
    2. The first process, contributing to storylines, was designed to make authorship anonymous, so people would focus on the contents. I think this remains a good feature.
    3. The second process, allowing people to comment, has pros and cons. The advantage is that it can enrich the discussion process, providing a meta-level to the main discussion, which is the storyline development. The risk, however, is that if comments cannot be made anonymously, a careful reader of the comments can sometimes work out who made which storyline contributions. I have tried to make comments anonymous, but they still seem to reveal the identity of the person making the comment. This may be resolvable. PS: This option is now not available, while I am only using the tree diagram to show storylines. This may need to be changed.
  7. How many iterations should be completed?
    1. It has been suggested that participants should know this in advance, so that their story segments don’t leap into the future too quickly, or the reverse, progress the story too slowly. With the Brexit scenario pre-test I am inclined to agree. It might help to say at the beginning that there will be 5 iterations, ending in the year 2025. With the MSC scenario pre-test I am less certain; it seems to be moving on at a pace I would not have predicted.
    2. I am now thinking it may also be useful to spell out in advance the number of iterations that will take place. And perhaps even suggest each one will represent a given increment in time, say a month or a year, or…
  8. What limits should there be on the length of the text that participants submit?
    1. I have really wobbled on this issue, ranging from 100-word limits to 50-word limits to no voiced limits at all. Perhaps when people select which storyline to continue, the length of the previous contributions will be something they take into account? I would like to hear participants’ views on this issue. Should there be word limits, and if so, what sort of limit?
  9. What sort of editorial intervention should there be by the facilitator, if any?
    1. I have been tempted, more than once, to ask some participants to reword and revise their contribution. I now limit myself to very basic spelling corrections, checked with the participant if necessary. I was worried that some participants have a limited grasp of the scenario topic, but now think that just has to be part of the reality: some people have little to go on when anticipating specific futures, and others may have “completely the wrong idea”, according to others. As the facilitator, I now think I need to stand back and let things run.
    2. Another thought I had some time ago is that the facilitator could act as the spokesperson for “the wider context”, including any actors not represented by any of the participants’ contributions so far. At the beginning of a new iteration, they could provide some contextual text that participants are encouraged to bear in mind when designing their next contribution. If so, how / where should this context information be presented?
  10. How long should a complete exercise take?
    1. The current pre-tests are stretching out over a number of weeks. But I think this will be an exception. In a workshop setting where all participants (or teams of participants) have access to a laptop and the internet, it should be possible to move through quite a few iterations within a couple of hours. In other non-workshop settings perhaps a week will be long enough, if all participants have a stake in the process. Compacting the available time might generate more concentration and focus. The web app now under development should also radically reduce the turnaround time between iterations, because manual work done by the facilitator will be automated.
  11. Is my aim to have participants evaluate the completed storylines realistic?
    1. After the last iteration, I plan to ask each participant, probably via an online survey page, to identify: (a) the most desirable storyline, (b) the most likely to happen storyline. But I am not sure if this will work. Will participants be willing to read every storyline from beginning to end? Or will they make judgments on the basis of the last addition to each storyline, which they will be more familiar with? And how much will this bias their judgments (and how could I identify if it does)?
  12. What about the contents?
    1.  One concern I have is the apparent lack of continuity between some of the contributions to a storyline. Is this because the participants are very diverse? Or because I have not stressed the importance of continuity? Or because I can’t see the continuity that others can see?
    2. What else should we look for when evaluating the content as a whole? One consideration might be the types of stakeholders who are represented or referred to, and those which seem to be ignored.
  13. How should performance measures be used?
    1. Elsewhere I have listed a number of ways of measuring and comparing how people contribute and how storylines are developed. Up to now, I have thought of this primarily as a useful research tool, which could be used to analyze storylines after they have been developed.
    2. But after reading a paper on “gamification” of scenario planning, it occurred to me that some of these measures could be more usefully promoted at the beginning of a scenario planning exercise, as measures that participants should be aware of and even seek to maximize when deciding how and where to contribute. For example, one measure is the number of extensions that have been added to a participant’s texts by other participants, and the even distribution of those contributions (known as variety and balance); see the sketch after this list.
  14. Stories as predictions
    1. Most writers on scenario planning emphasize that scenarios are not meant to be predictions, but more like possibilities that need to be planned for.
    2. But if ParEvo was used in an M&E context, could participants be usefully encouraged to write story segments as predictions, and then be rewarded in some way if they came true? This would probably require an exercise to focus on the relatively near future, say a year or two at the most, with each iteration perhaps only covering a month or so.
  15. Tagging of story segments
    1. It is common practice to use coding / tagging of text contents in other settings. Would it be useful with ParEvo? An ID tag is already essential, to be able to identify and link story segments (one possible ID scheme is shown in the first sketch after this list).
  16. What other issues are arising and need discussion?
    1. Over to you…to comment below
    2. I also plan to have one-to-one Skype conversations with participants, to get your views on the process and products.
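
Two illustrative sketches follow, picking up points raised in the list above. First, relating to items 5 and 15: a minimal sketch, in Python, of one way story segments could be stored and linked so that complete storylines can be rebuilt for display. This is an illustration of the idea only, not ParEvo’s actual implementation; the Segment fields and the “iteration.segment” ID scheme are assumptions.

```python
# Illustrative sketch only -- not ParEvo's actual data model.
# Each segment records an ID tag, its author (hidden from participants
# to keep contributions anonymous), its text, and the ID of the segment
# it extends. Complete storylines are rebuilt by walking leaf -> root.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    id: str                # hypothetical ID scheme: "iteration.segment"
    author: str            # stored for later analysis, not displayed
    text: str
    parent: Optional[str]  # ID of the extended segment; None for the seed

def storyline(segments: dict[str, Segment], leaf_id: str) -> list[str]:
    """Return the full storyline ending at leaf_id, in reading order."""
    line = []
    current: Optional[str] = leaf_id
    while current is not None:
        seg = segments[current]
        line.append(seg.text)
        current = seg.parent
    return list(reversed(line))

# Example: one seed segment with two alternative continuations.
segments = {
    "1.1": Segment("1.1", "facilitator", "An MSC process is launched in Nigeria...", None),
    "2.1": Segment("2.1", "participant A", "Field staff collect the first stories...", "1.1"),
    "2.2": Segment("2.2", "participant B", "Managers question the time involved...", "1.1"),
}
print(" -> ".join(storyline(segments, "2.2")))
```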
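
Second, relating to item 13: a hedged sketch of how the two contribution measures mentioned there might be computed, assuming “variety” means the number of distinct participants who extended someone’s texts, and “balance” means the evenness of those extensions, scored here as normalised Shannon entropy. Both readings, and all names in the code, are assumptions for illustration.

```python
# Hedged sketch: one plausible way to compute the measures in item 13.
# Assumes "variety" = number of distinct participants who extended a
# given participant's segments, and "balance" = evenness of those
# extensions, scored as normalised Shannon entropy (1.0 = perfectly even).
import math
from collections import Counter

def variety_and_balance(extenders: list[str]) -> tuple[int, float]:
    """extenders: one entry per extension, naming the participant who made it."""
    counts = Counter(extenders)
    variety = len(counts)
    if variety < 2:
        return variety, 0.0  # balance is undefined with fewer than two extenders
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return variety, entropy / math.log(variety)

# Participant A's texts were extended four times, by three different people.
print(variety_and_balance(["B", "B", "C", "D"]))  # (3, ~0.95)
```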