Open consultation Triennial review of the Independent Commission for Aid Impact (ICAI)

(Website that hosts the text below)

This consultation closes on 26 April 2013

On 21 March the Government announced the triennial review of the Independent Commission for Aid Impact (ICAI) and is seeking the views of stakeholders who wish to contribute to the Review. Triennial Reviews of Non-Departmental Public Bodies (NDPBs) are part of the Government’s commitment to review all NDPBs, with the aim of increasing accountability for actions carried out on behalf of the State.

The ICAI’s strategic aim is to provide independent scrutiny of UK aid spending, to promote the delivery of value for money for British taxpayers and to maximise the impact of aid.

The Review will be conducted in line with Cabinet Office principles and guidance, in two stages.

The first stage will:

  • Identify and examine the key functions of the ICAI and assess how these functions contribute to the core business of DFID;
  • Assess the requirement for these functions to continue given other scrutiny processes;
  • If continuing, assess how the key functions might best be delivered; if one of these options is continuing delivery through the ICAI, then make an assessment against the Government’s “three tests”: technical function; political impartiality; and the need for independence from Ministers.

If the outcome of stage one is that delivery should continue through the ICAI, the second stage of the review will:

  • Review whether ICAI is operating in line with the recognised principles of good corporate governance, using the Cabinet Office “comply or explain” standard approach.

In support of these aims we would welcome input and evidence from stakeholders, focused on these main questions:

ICAI’s functions

For the purposes of this review, we have defined ICAI’s key functions as follows:

  • Produce a wide range of independent, high quality/professionally credible and accessible reports (including evaluations, VfM reviews, investigations) setting out evidence of the impact and value for money of UK development efforts;
  • Work with and for Parliament to help hold the UK Government to account for its development programme, and make information on this programme available to the public;
  • Produce appropriately targeted recommendations to be implemented/ followed up by HMG.

Which of these functions do you think are still needed? What would be the impact if ICAI ceased to exist?

Would you define ICAI’s functions differently?

Do you think any of the following delivery mechanisms would be more appropriate or cost-effective at delivering these functions: local government, the voluntary sector, the private sector, another existing body or DFID itself?

To date, do you think ICAI has focused on scrutinising UK aid spend or the wider HMG development effort? What do you think it should be doing?

Where do you think ICAI sits on the spectrum between audit and research? Is this where it should be?

How far can and should ICAI have a role in holding HMG to account?

Production of reports

What is the quality of ICAI reports? Is the expertise of those producing the reports appropriate? How does this compare to other scrutiny bodies that you know of?

How far does the methodology used by ICAI add value to other scrutiny of DFID programmes (eg IDC, NAO, DFID internal)?

How far does ICAI involve beneficiaries in its work?

What impact have ICAI reviews had on DFID staff and resources?

How independent do you believe ICAI is? How important do you think this independence is for ICAI’s ability to deliver its functions effectively?

How much of an impact do you think the Commissioners have on ICAI recommendations and reports? What added value do you think they bring? Do they have the right skillset?

Making information available to the IDC and the public

How important do you think ICAI’s role is in providing information about UK development to taxpayers?

What impact has ICAI had on public perceptions of UK development?

Production of targeted recommendations

What has been the added value of ICAI’s recommendations? How do these compare to other scrutiny bodies that you know of?

How far and why have recommendations been followed up?

What impact has ICAI had on DFID’s own approach to monitoring impact and value for money?

How far has ICAI promoted lesson learning in DFID?

General

Do you think ICAI could improve? If so, how?

Do you have any other comments?


Contact us by 26 April 2013

Write to us by email, or by post to:

ICAI Review Team KB 2.2
22 Whitehall
London
SW1A 2EG

“Evaluation and Inequality: Moving Beyond the Discussion of Poverty” International Development Evaluation Association Global Assembly

Bridgetown, Barbados (May 6-9, 2013)

IDEAS website

Introduction:

The Board of the International Development Evaluation Association (IDEAS) is pleased to announce its next Global Assembly on May 7-9, 2013 in Bridgetown, Barbados, preceded by professional development training sessions on May 6. The theme of the Assembly will be the relationship between evaluation and inequality and their influence on development. This theme underscores the role that evaluative knowledge can play in development in general and, more particularly, in addressing the factors that generate and perpetuate poverty.

Assembly Agenda and Call for Paper/Panel Proposals:

The Assembly will be organized into a number of substantive strands, described below. Potential presenters are invited to propose a paper or panel in one or more of these strands. General paper proposals on evaluation topics outside the theme of the strands are also invited. We especially invite papers that are grounded in development experience.

Strand One: Understanding Inequality and its relation to the causes and consequences of poverty

Strand Two: Effective program strategies to address inequality—findings from evaluation

Strand Three: Regional responses/regional strategies to address inequality

Strand Four: The measurement and assessment of inequality

Strand Five: General Paper Sessions—all other papers/panels being proposed on any evaluation topic

All paper/panel proposals should be sent by January 10, 2013 to: Ray C. Rist, President of IDEAS, at the following e-mail address: rayrist11@gmail.com

Proposal Guidelines:

1) Each paper or panel proposal can be no more than 250 words in total. The proposal should include the title, the name(s) and affiliation(s) of the participants, and a brief description of the subject of the paper/panel.

2) The deadline for submission of all proposals is January 10, 2013.

3) Consideration of any proposal after January 10 is at the full discretion of the chair.

4) Decisions on all proposals will be made within two weeks and presenters will be informed immediately.

 

Scholarships: A small number of scholarships will be available to ensure a global representation of development evaluators at this Assembly. First priority for scholarships will be given to current IDEAS members who present a paper/panel or are actively involved in the Assembly as a panel chair or discussant.

NOTE: Anyone who wishes to present at this Assembly must be a current member of IDEAS.

Free, relevant, well-organised online courses: Statistics, Model Thinking and others

Provided free by Coursera in cooperation with Princeton, Stanford and other universities

Each course’s opening page gives this information: About the Course, About the Instructor, The Course Syllabus, Introductory Video, Recommended Background, Suggested Readings, Course Format, FAQs.

Example class format: “Each week of class consists of multiple 8-15 minute long lecture videos, integrated weekly quizzes, readings, an optional assignment and a discussion. Most weeks will also have a peer reviewed assignment, and there will be the opportunity to participate in a community wiki-project. There will be a comprehensive exam at the end of the course.”

The contents of past courses remain accessible.

RD Comment: Highly recommended! [I am doing the stats course this week]

Evaluating the impact of aid to Africa: lessons from the Millennium Villages

3 July 2012 17:00-18:30 (GMT+01 (BST)) – Public event, Overseas Development Institute and screened live online

Register to attend

“At the turn of the century, Jeffrey Sachs of Columbia University, in partnership with the United Nations, established integrated rural development projects, known as Millennium Villages in ten African countries. When they came to be evaluated in 2011, an intense row broke out between development experts about their impact and sustainability.

ODI and the Royal Africa Society are delighted to host Michael Clemens, who will argue that aid projects in Africa need much more careful impact evaluations that are transparent, rigorous, and cost-effective. Our panel of experts will also discuss the Millennium Villages project within the wider context of international aid to Africa, analysing other development models and questioning the impact of each one.”

AEA Conference: Evaluation in Complex Ecologies

Relationships, Responsibilities, Relevance
26th Annual Conference of the American Evaluation Association
Minneapolis, Minnesota, USA
Conference: October 24-27, 2012
Workshops: October 22, 23, 24, 28

“Evaluation takes place in complex global and local ecologies where we evaluators play important roles in building better organizations and communities and in creating opportunities for a better world. Ecology is about how systems work, engage, intersect, transform, and interrelate. Complex ecologies are comprised of relationships, responsibilities, and relevance within our study of programs, policies, projects, and other areas in which we carry out evaluations.

Relationships. Concern for relationships obliges evaluators to consider questions such as: what key interactions, variables, or stakeholders do we need to attend to (or not) in an evaluation? Evaluations do not exist in a vacuum disconnected from issues, tensions, and historic and contextualized realities, systems, and power dynamics. Evaluators who are aware of the complex ecologies in which we work attend to relationships to identify new questions and to pursue new answers. Other questions we may pursue include:

  • Whose interests and what decisions and relationships are driving the evaluation context?
  • How can evaluators attend to important interactions amidst competing interests and values through innovative methodologies, procedures, and processes?

Responsibilities. Attention to responsibilities requires evaluators to consider questions such as: what responsibilities, inclusive of and beyond the technical, do we evaluators have in carrying out our evaluations? Evaluators do not ignore the diversity of general and public interests and values in evaluation. Evaluations in complex ecologies raise awareness of ethical and professional obligations and understandings between parties who seek to frame questions and insights that challenge them. Other questions we may pursue include:

  • How can evaluators ensure their work is responsive, responsible, ethical, equitable, and/or transparent for stakeholders and key users of evaluations?
  • In what ways might evaluation design, implementation, and utilization be responsible to issues pertinent to our general and social welfare?

Relevance. A focus on relevance leads to evaluations that consider questions such as: what relevance do our evaluations have in complex social, environmental, fiscal, institutional, and/or programmatic ecologies? Evaluators do not have the luxury of ignoring use, meaning, or sustainability; instead all evaluations require continual review of purposes, evaluands, outcomes, and other matters relevant to products, projects, programs, and policies. Other questions we may pursue include:

  • How can evaluators ensure that their decisions, findings, and insights are meaningful to diverse communities, contexts, and cultures?
  • What strategies exist for evaluators, especially considering our transdisciplinary backgrounds, to convey relevant evaluation processes, practices, and procedures?

Consider this an invitation to submit a proposal for Evaluation 2012 and join us in Minneapolis as we consider evaluation in complex ecologies where relationships, responsibilities, and/or relevance are key issues to address.”

 

Conference: Measuring Impact of Higher Education for Development

From: Monday 19th March 2012 to Tuesday 20th March 2012

Venue:  Birkbeck College, Malet Street, London

Organisers: London International Development Centre (LIDC); Association of Commonwealth Universities (ACU)

Background: Higher education for international development has been, in recent years, a neglected area relative to other educational interventions. Yet higher education (HE) is necessary for the attainment of Millennium Development Goals (MDGs) and for economic development in low and middle income countries.

There is a long history of development assistance interventions in HE to support development goals, directed at strengthening individual, organisational and institutional capacity. These have included scholarship programmes as well as support to specific universities and university networks in low and middle income countries, and support to academic research and training partnerships.

However, there has been little comparison of these different interventions in terms of their international development impact. This limits our understanding of “what works” in HE interventions for development, and our ability to invest effectively in future.

The aim of this two-day international conference is to examine the current status of impact evaluation for HE interventions and to identify research gaps and needs for the future. The conference will focus on three issues:
  • What has been, and should be, the development intention of HE interventions?
  • How should development impacts be measured?
  • What is our experience with measurement methods and tools to date, where are the gaps and what research priorities emerge?

The programme will be posted online soon.

Who should attend:

The conference will bring together experts from three fields – higher education, international development and impact evaluation – drawn from academia, think tanks, government agencies and civil society organisations. PhD students are welcome if their research is relevant to the theme of the conference.

Registration is open between 2 February and 5 March 2012.
To register, please fill in and return the registration form.
Attendance is free of charge.

Conference: Evaluation in a Complex World – Balancing Theory and Practice

April 29- May 1, 2012 (Sunday-Tuesday)
Seaview Resort, Galloway, NJ, USA. (http://www.dolce-seaview-hotel.com)

Organised by the Eastern Evaluation Research Society, a Regional Affiliate of the American Evaluation Association. Flyer available here

Keynote Speaker: Jennifer Greene, University of Illinois and President of AEA
Featured Speakers: Eleanor Chelimsky, U.S. Government Accountability Office and former AEA President; Rodney Hopson, Duquesne University and incoming President of AEA

Sunday Afternoon Pre-Conference Workshops and Session:

  • Meta-Analysis – Ning Rui, Research for Better Schools
  • Focus Group Research: Planning and Implementation – Michelle Revels, ICF International
  • Career Talk with the Experts (NEW!) – an unstructured conversation about your evaluation career. This session is free to participants!

Sunday Evening Interactive & Networking Session: John Kelley, Villanova University

Concurrent Sessions Featuring: Skill Building Sessions, Individual Presentations & Panel Sessions

A full conference program will be posted at www.eers.org by mid-February 2012.

Assessing the impact of human rights work: Challenges and Choices

The International Council on Human Rights Policy has produced two documents under the above-named project (see here for details of the project):

  • No Perfect Measure: Rethinking Evaluation and Assessment of Human Rights Work. Report of a Workshop, January 2012. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives—Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources
  • Role and Relevance of Human Rights Principles in Impact Assessment: An Approach Paper. July 2011. Contents: Introduction and Context, A Brief History, NGO Hesitations, The Shift, Assessing the Impact of Policy Research, Impact Assessment in the context of Advocacy, Impact Assessment in the context of Capacity Building and Development, The Donor perspective, Third-Party Perspectives—Building a bridge, A note on integrating Human Rights Principles into development work, References, Selected Additional Bibliographic Resources

PS 14 February 2012: It appears the ICHRP website is not working at present. I have uploaded a copy of the No Perfect Measure paper here

Conference about “The Future of Evaluation in Modern Societies”, Germany.

“The Center for Evaluation (CEval) of Saarland University, Germany, is a globally active research institute for applied social science in the field of evaluation and a member of the DeGEval (German Evaluation Society). We are organizing an international conference on “The Future of Evaluation in Modern Societies” on 14-15 June 2012 in Saarbruecken, Germany.

The objective of this event is to discuss the role of evaluation in societies comprehensively and in international comparison, bringing different strands of discussion together into a joint debate. For keynote speeches and lectures, we have already secured numerous renowned scientists from the USA, Latin America, Africa and Europe.

Please find the detailed program and registration form on our homepage: http://futureofevaluation.ceval.de

You will also find a review of our recent book “A Practitioner Handbook on Evaluation”, which will appeal to evaluation practitioners, policy-makers who conduct evaluations in their daily work, students training in applied research, and organizations implementing projects and programs that could be the subject of an evaluation.

—————————————

Maria Albrecht, M.A., Center for Evaluation (CEval), Saarland University, P.O. Box 15 11 50, 66041 Saarbrücken, Germany. Phone: +49 (0)681 302-3561, Fax: +49 (0)681 302-3899, www.ceval.de

UKES CONFERENCE 2012 Evaluation for results: What counts? Who stands to gain? How is it done?

16 March 2012
The Macdonald Hotel, Birmingham

[from UKES website] UKES conferences address leading issues of the day in programme and policy evaluation. The 2012 Annual Conference will address the current drive towards evaluation focused on results – frequently linked to ‘Payment by Results’ and what, in international development and elsewhere, is familiar as ‘Results-Based Management’.

Evaluators and those who commission evaluation who advocate a focus on results reflect a legitimate concern with the productivity and efficiency of programmes and the capacity of interventions to secure gains and improvements in practice and provision. They point out that programmes should be held to account to accomplish what they were designed to do and paid for, often out of public funds. A primary focus on results seeks to emphasise main effects and outcomes that have been valued and agreed. In times of austerity and unusually scarce resources, proponents of a strong focus on results argue that emphasising value for money is socially responsible.

Others argue that an over-emphasis on measuring a programme’s results neglects important questions of how results are generated in a context, whether results capture the real quality and accomplishments of a programme, and how those results may reflect the values and ambitions of all programme stakeholders. They remind us of secondary effects and ‘unintended beneficiaries’ of programmes that may not be readily captured by results. Some also raise questions about the source of criteria over what counts as a worthwhile result given that not all programme achievements can be measured, and stakeholders may differ over a programme’s objectives. 

Against this background conference participants are invited to contribute their own perspectives on the dominant issues they consider relevant to the theory and practice of evaluation in the public interest. We anticipate a lively and informative debate to stimulate professional learning and to contribute to the improvement of evaluation practice and commissioning.

Potential contributors are invited to propose discussions, seminar presentations, lectures or poster sessions which explore issues around this theme. Those issues may fall within one of the following categories – though you are invited to propose your own theme:

  • How do we define a valid ‘result’ and whose results get counted?
  • How do we best measure a result – including taking account of counterfactuals?
  • How do we understand where results came from, what significance they have and whether they can be replicated – i.e. what is the relation between a result and context?
  • Where do benchmarks come from to measure results achievement?
  • If a result is, say, a 4% improvement – how do we know whether that is a lot or a little under the circumstances?
  • How do we represent the circumstances and mechanisms that give rise to a result?
  • How do we account for programme accomplishments that are not represented in results?
  • Is results-measurement a robust foundation for replication/extension of a programme?

A formal call for papers and proposals for sessions will be circulated shortly.  The conference will be preceded on 15 March 2012 with a choice of training workshops on specialist topics.
