Training: Evaluating the quality of humanitarian projects

Training course – ‘Evaluating the Quality of Humanitarian Projects’, postponed to 19-23 May 2008, Plaisians, France
This 5-day course will take place in the training centre at Groupe URD’s headquarters. The course looks at evaluation (definition, phases, types…) with regard to the Quality of humanitarian action.
It is organised around a full case study based on real experiences in the field.
The course is aimed at programme managers, project managers, M&E managers, evaluators and anyone else in the humanitarian sector involved in project management.
Language: French.
Registration form
Contact

UKES Annual Conference 2008, October 2008

Date: 23-24 October 2008
Venue: The Bristol Marriott Royal Hotel

Changing Contexts for Evaluation and Professional Action

Call for abstracts

The UKES Annual Conference 2008 will be held on 23-24 October at the Bristol Marriott Royal Hotel, preceded on 22 October by a programme of Training and Professional Development Workshops.

The closing date for the call for abstracts is 23 May 2008.

Conference website

The European Evaluation Society Conference, Lisbon 2008

Date: 1-3 October 2008
Venue: Lisbon, Portugal

‘Building for the future: Evaluation in governance, development and progress’

Programme

The conference will comprise keynote speakers, plenary and parallel sessions, networking opportunities and social events.

The main programme will commence at 19.00 on Wednesday 1 October and close with a reception at 18.00 on Friday 3 October.

The conference will be preceded by a programme of Pre-Conference Training and Professional Development Workshops, commencing on Tuesday morning, 30 September, and closing at 16.30 on Wednesday 1 October (4 half-day sessions). Read more about how you can contribute to these sessions.

The programme will include the following keynote presentations:

Development Evaluation at a Crossroads?
Niels Dabelstein, Evaluation of the Paris Declaration, Danish Institute for International Studies, Denmark

Evaluation and policy implementation in multi-level governance.
Nicoletta Stame, Professor of Social Policy, University of Roma “La Sapienza”, Italy

Evaluating drug policies in the European Union: current situation and future directions
Dra. Maria Moreira, European Monitoring Centre for Drugs and Drug Addiction, Portugal

Making evaluation more useful for determining development effectiveness
Dr Vinod Thomas, Director-General, Independent Evaluation Group (IEG), World Bank Group, Washington DC, USA

Parallel Sessions

We received over 350 abstracts from participants in 49 countries worldwide. These submissions will be presented in themed parallel sessions of papers, symposia, roundtables, panel/debates, posters and other innovative formats. A full programme will be available well in advance of the conference from this website.

Parallel sessions will be grouped under the following strands:
1. Methodological choices and approaches
2. Ethical practice, evaluation standards and professional development
3. Evaluation and its role in policy implementation and programme intervention
4. Evaluation and building informal social capital
5. International evaluation and evaluation in developing countries
6. Evaluation and organisational development
7. Encouraging evaluation use

The May 2008 Edinburgh Evaluation Summer School

Date: May 19th – 21st and May 26th – 28th
Venue: University of Edinburgh, Scotland

(from Eval-Sys email list)

Dear Colleagues

The Edinburgh Evaluation Summer School
(http://www.chs.med.ed.ac.uk/ruhbc/evaluation/summerschool2008) is now in
its 3rd year; the previous summer schools attracted interest from
evaluators and planners across all disciplines. As in previous years,
the Summer School faculty will include world-class evaluation scholars
and practitioners. As well as providing a platform for learning from
this faculty, the Summer School fosters a community of learning and
gives participants a forum to raise questions and discuss topics
relevant to evaluation.

The Edinburgh Evaluation Summer School will this year take place on the
following dates:

May 19th – 21st: Evaluations and Inequalities
May 26th – 28th: Getting Real About Impact Evaluation

Further details on each class and how to secure your place on this
year’s courses can be found on the Summer School website.
http://www.chs.med.ed.ac.uk/ruhbc/evaluation/summerschool2008/index.html

This website will be updated regularly, so please check back periodically.
Don’t hesitate to contact us at RUHBC.SummerSchool@ed.ac.uk if
you have any questions.

Best wishes,

Sanjeev Sridharan

Head, Evaluation Programme
RUHBC
University of Edinburgh
Teviot Place
Edinburgh, EH8 9AG
Scotland, UK

Next Outcome Mapping training workshop in Europe

Date: April 22-24, 2008
Venue: Ede, The Netherlands

(from the OM email list)

Colleagues,
This is to let you know that MDF Training & Consulting will be hosting my next Outcome Mapping training workshop in Europe at its headquarters in Ede, The Netherlands, April 22-24, 2008. You can see the full description of this introductory course at: http://www.mdf.nl/index.php/page/85/outcome-mapping?mod[MDFCourseCalendarModule][item]=95

If you have colleagues or partners who may be interested, they can register at:
http://www.mdf.nl/index.php/page/85

Cheers,
Terry

Terry Smutylo
542 Fraser Ave.
Ottawa, Ontario
Canada, K2A 2R4

Tel: (613) 729-6844
Fax: (613) 288-8993
tsmutylo@magma.ca
tsmutylo@gmail.com

Regulatory Impact Analysis (RIA) Training Course

Date: 6-10 October
Venue: College of Europe, Bruges Campus, Belgium

Dear Colleague,

The College of Europe and Jacobs and Associates Europe invite you to participate in our 5-Day Regulatory Impact Analysis (RIA) Training Course on the principles, procedures, and methods of RIA. This practical, hands-on course was given in March and, due to demand, will be offered twice more in 2008, in June and October. The course, taught by some of the most experienced public policy and RIA trainers in Europe, is expressly designed for policy officials and executives who use RIA to improve policy results.

The course will benefit any official using RIA in environmental, social and economic fields, as well as stakeholders such as business associations, NGOs and consultants who want to better understand how to use RIA constructively. The course is open to registration worldwide and is presented in the historic city of Bruges, Belgium. A discount is offered for early registration.

Information on RIA Training Course

2008 DATES: 23-27 June and 6-10 October (each course is 5 full days)
LOCATION: College of Europe, Bruges Campus, Belgium
REGISTRATION: For more information and an application form, go to www.coleurope.eu/ria2008
COST:

  • €2,995 for early registration (includes housing and meals)
  • €3,495 for regular registration (includes housing and meals)

REGISTRATION DEADLINES:

Early registration for the June course runs until 11 May 2008.
Registration closes on 1 June 2008.

Early registration for the October course runs until 10 August 2008.
Registration closes on 14 September 2008.

OPEN: Worldwide (only 40 seats available per session)
LANGUAGE OF INSTRUCTION: English
COURSE OFFERED BY: College of Europe and Jacobs and Associates Europe

The College of Europe provides a wide range of professional training courses, workshops and tailor-made seminars on the European Union in general or on targeted issues. For more information, please visit:
www.coleurope.eu/training or contact Mrs. Annelies Deckmyn by email: adeckmyn@coleurop.be

Jacobs and Associates continues to offer its tailored RIA training courses on-site around the world, adapted to the client’s needs. To discuss an on-site RIA course, contact ria@regulatoryreform.com. For information on the full range of regulatory reform work by Jacobs and Associates, see http://www.regulatoryreform.com/.

Best wishes,
Scott Jacobs
Managing Director, Jacobs and Associates Europe

The Third High Level Forum on Aid Effectiveness (HLF 3)

Date: 2-4 September 2008
Venue: Accra, Ghana

The Third High Level Forum on Aid Effectiveness (HLF 3) will be hosted in Accra by the Government of Ghana on 2-4 September 2008. The HLF 3 builds on several previous high-level international meetings, most notably the 2003 Rome HLF, which highlighted the issue of harmonisation and alignment, and the 2005 Paris HLF, which culminated in the endorsement of the Paris Declaration on Aid Effectiveness by over 100 signatories from partner governments, bilateral and multilateral donor agencies, regional development banks, and international agencies.

The primary intention of the HLF 3 is to take stock and review the progress made in implementing the Paris Declaration, and also to broaden and deepen the dialogue on aid effectiveness by giving ample space and voice to partner countries and newer actors (such as civil society organisations and emerging donors). It is also a forward-looking event which will identify the action needed, and the bottlenecks to overcome, in order to make progress in improving aid effectiveness for 2010 and beyond.

The HLF 3 will be organised as a three-tier structure:

  • The Marketplace, which will provide an opportunity for a wide range of actors to showcase good and innovative practices and lessons from promoting aid effectiveness;
  • Roundtable meetings, which will provide an opportunity for in-depth discussion of selected key issues to facilitate and support decision-taking and policy endorsement on aid effectiveness; and
  • A Ministerial-Level Meeting, which is expected to conclude the HLF 3 with the endorsement of a ministerial statement based on high-level discussions and negotiation around key issues.

Making Smart Policy: Using Impact Evaluation for Policy Making

Date: January 15 and 16, 2008
Venue: Preston Auditorium, World Bank Headquarters, Washington, DC, USA

The Poverty Reduction and Economic Management (PREM) Network, the Independent Evaluation Group (IEG), and the Development Economics Vice Presidency (DEC) of the World Bank are pleased to announce the conference “Making Smart Policy: Using Impact Evaluation for Policy Making”.

The one-and-a-half-day conference will bring together policy makers and staff from development agencies (see Speaker Bios) to explore how to design and use impact evaluation for increased policy impact and how to generate greater demand for impact evaluations.

Presentations

The Role of Impact Evaluations in Assessing Development Effectiveness

The Role of Impact Evaluations in Development Agencies

Evidence and Use: Parallel Sector Sessions

Reporting Back from Sector Sessions

Role of Impact Evaluation in National Policy

Impact Evaluation Initiatives at the World Bank

ParEvo – a web-assisted participatory scenario planning process

The purpose of this page

…is to record some ongoing reflections on my experience of running two pre-tests of ParEvo carried out in late 2018 and early 2019.

Participants and others are encouraged to add their own comments using the Comment facility at the bottom of this page.

Two pre-tests are underway

  • One involves 11 participants developing a scenario about the establishment of an MSC (Most Significant Change) process in a development programme in Nigeria. These volunteers were found via the MSC email list. They came from 7 countries, and 64% were women.
  • The other involves 11 participants developing a Brexit scenario in which Britain fails to reach an agreement with the EU by March 2019. These participants were found via the MandE NEWS email list. They came from 9 countries, and 46% were women.

For more background (especially if you have not been participating), see this 2018 post on the process design and this 2019 conference abstract describing these pre-tests.

Reflections so far

Issues arising…

  1. How many participants should there be?
    • In the current pre-tests, I have limited the number to around 10. My concern is that with larger numbers there will be too many story segments (and their storylines) for people to scan and make a single preferred selection. But improved methods of visualising the text contributions may help overcome this limitation. Another option is to allow/encourage individual participants to represent teams of people, e.g. different stakeholder groups. I have not yet tried this out.
  2. Do the same participants need to be involved in each iteration of the process?
    1. My initial concern was that not doing so would make some of the follow-up quantitative analysis more difficult, but I am less concerned about that now; it is a manageable problem. On the other hand, it is likely that some people will have to drop out mid-process, and ideally they could be replaced by others, thus maintaining the diversity of storylines.
  3. How do you select an appropriate topic for a scenario planning exercise?
    1. Ideally, it would be a topic that was of interest to all the participants and one which they felt some confidence in talking about, even if only in terms of imagined futures. One pre-test topic, the use of MSC in Nigeria, was within these bounds. But the other was more debatable: the fate of the UK after no resolution of Brexit terms by 29th March 2019.
  4. How should you solicit responses from participants?
    1. I started by sending a standard email to all the (MSC scenario) participants, but this was cumbersome and risky: it is too easy to lose track of who contributed what text, and to which existing storyline. I am now using a two-part, single-question survey via SurveyMonkey. This enables me to keep a mistake-free record of who contributed what to which storyline, and of who has and has not responded. But it still involves sending multiple communications, including reminders, and I have sometimes confused what I am sending to whom. A more automated system is definitely needed.
  5. How should you represent and share participants responses?
    1. This has been done in two forms. One is a tree diagram showing all storylines, where participants can mouse over nodes to immediately see each text segment. Alternatively, they can click on each node to go to a separate web page and see complete storylines. Both are laborious to construct, but will hopefully soon be simplified and automated via some tech support which is now under discussion (see the data-structure sketch after this list). PS: I have now resorted to only using the tree diagram with mouseover.
  6. Should all contributions be anonymous?
    1. There are two types of contributions: (a) the storyline segments contributed during each iteration of the process, (b) Comments made on these contributions, that can be enabled on the blog page that hosts each full storyline to date. This second type was an afterthought, whereas the first is central to the process.
    2. The first process, contributing to storylines, was designed to make authorship anonymous, so that people would focus on the content. I think this remains a good feature.
    3. The second process, allowing people to comment, has pros and cons. The advantage is that it can enrich the discussion, providing a meta-level to the main discussion, which is the storyline development. The risk, however, is that if comments cannot be made anonymously, a careful reader of the comments can sometimes work out who made which storyline contributions. I have tried to make comments anonymous, but they still seem to reveal the identity of the person making them. This may be resolvable. PS: This option is not currently available while I am only using the tree diagram to show storylines. This may need to be changed.
  7. How many iterations should be completed?
    1. It has been suggested that participants should know this in advance, so that their story segments don’t leap into the future too quickly or, conversely, progress the story too slowly. With the Brexit scenario pre-test I am inclined to agree: it might help to say at the beginning that there will be 5 iterations, ending in the year 2025. With the MSC scenario pre-test I am less certain; it seems to be moving at a pace I would not have predicted.
    2. I am now thinking it may also be useful to spell out in advance the number of iterations that will take place, and perhaps even to suggest that each one will represent a given increment in time, say a month or a year, or…
  8. What limits should there be on the length of the text that participants submit?
    1. I have really wobbled on this issue, ranging from 100-word limits to 50-word limits to no stated limits at all. Perhaps when people select which storyline to continue, the length of the previous contributions will be something they take into account? I would like to hear participants’ views on this issue: should there be word limits, and if so, what sort of limit?
  9. What sort of editorial intervention should there be by the facilitator, if any?
    1. I have been tempted, more than once, to ask some participants to reword and revise their contribution. I now limit myself to very basic spelling corrections, checked with the participant if necessary. I was worried that some participants have a limited grasp of the scenario topic, but now think that this just has to be part of the reality: some people have little to go on when anticipating specific futures, and others may have “completely the wrong idea”, according to others. As the facilitator, I now think I need to stand back and let things run.
    2. Another thought I had some time ago is that the facilitator could act as the spokesperson for “the wider context”, including any actors not represented by any of the participants’ contributions so far. At the beginning of a new iteration, they could provide some contextual text that participants are encouraged to bear in mind when designing their next contribution. If so, how and where should this context information be presented?
  10. How long should a complete exercise take?
    1. The current pre-tests are stretching out over a number of weeks, but I think this will be an exception. In a workshop setting where all participants (or teams) have access to a laptop and the internet, it should be possible to move through quite a few iterations within a couple of hours. In other, non-workshop settings, perhaps a week will be long enough, if all participants have a stake in the process. Compacting the available time might generate more concentration and focus. The web app now under development should also radically reduce the turnaround time between iterations, because manual work done by the facilitator will be automated.
  11. Is my aim to have participants evaluate the completed storylines realistic?
    1. After the last iteration, I plan to ask each participant, probably via an online survey page, to identify (a) the most desirable storyline and (b) the storyline most likely to happen. But I am not sure if this will work. Will participants be willing to read every storyline from beginning to end? Or will they make judgments on the basis of the last addition to each storyline, which they will be more familiar with? And how much will this bias their judgments (and how could I identify whether it does)?
  12. What about the contents??
    1. One concern I have is the apparent lack of continuity between some of the contributions to a storyline. Is this because the participants are very diverse? Or because I have not stressed the importance of continuity? Or because I can’t see the continuity that others can see?
    2. What else should we look for when evaluating the content as a whole? One consideration might be the types of stakeholders who are represented or referred to, and those which seem to be ignored.
  13. How should performance measures be used?
    1. Elsewhere I have listed a number of ways of measuring and comparing how people contribute and how storylines are developed. Up to now, I have thought of this primarily as a useful research tool, which could be used to analyze storylines after they have been developed.
    2. But after reading a paper on “gamification” of scenario planning, it occurred to me that some of these measures could be more usefully promoted at the beginning of a scenario planning exercise, as measures that participants should be aware of and even seek to maximize when deciding how and where to contribute. For example, one measure is the number of extensions that have been added to a participant’s texts by other participants, together with how evenly those extensions are distributed (known as variety and balance); a sketch of how these might be computed follows this list.
  14. Stories as predictions
    1. Most writers on scenario planning emphasize that scenarios are not meant to be predictions, but rather possibilities that need to be planned for.
    2. But if ParEvo were used in an M&E context, could participants be usefully encouraged to write story segments as predictions, and then be rewarded in some way if they came true? This would probably require an exercise to focus on the relatively near future, say a year or two at the most, with each iteration perhaps only covering a month or so.
  15. Tagging of story segments
    1. It is common practice to code or tag text content in other settings. Would it be useful with ParEvo? An ID tag is already essential, in order to identify and link story segments (see the sketch after this list).
  16. What other issues are arising and need discussion?
    1. Over to you… please comment below.
    2. I also plan to have one-to-one Skype conversations with participants, to get your views on the process and products.
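
To make the mechanics behind several of the issues above concrete: the storyline representation (issue 5) and the ID tagging (issue 15) both rest on the same simple data structure, in which each contribution is a node with a unique ID and a link to the segment it extends. The Python sketch below is a minimal illustration of that idea; it is an assumption about how the data could be organised, not a description of ParEvo’s actual implementation, and all names and sample data are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Segment:
        seg_id: str                # unique ID tag for this contribution (issue 15)
        parent_id: Optional[str]   # ID of the segment this one extends; None for the seed text
        author: str                # stored internally; hidden from participants for anonymity (issue 6)
        text: str

    # Hypothetical sample data: a seed text plus three contributions forming two branches.
    segments = {
        "s0": Segment("s0", None, "facilitator", "The programme team agrees to pilot MSC."),
        "s1": Segment("s1", "s0", "participant-A", "Field staff collect the first stories."),
        "s2": Segment("s2", "s0", "participant-B", "Funding for the pilot is delayed."),
        "s3": Segment("s3", "s1", "participant-C", "A selection panel meets to choose stories."),
    }

    def storyline(seg_id: str) -> list[str]:
        """Reconstruct a complete storyline by walking parent links back to the seed."""
        parts = []
        current: Optional[str] = seg_id
        while current is not None:
            parts.append(segments[current].text)
            current = segments[current].parent_id
        return list(reversed(parts))   # seed first, latest contribution last

    def children(seg_id: str) -> list[str]:
        """IDs of the segments that extend a given segment, i.e. the tree's branches."""
        return [s.seg_id for s in segments.values() if s.parent_id == seg_id]

    print(" ".join(storyline("s3")))   # the full storyline ending at segment s3
    print(children("s0"))              # ['s1', 's2'], the two branches from the seed

The tree diagram with mouseover is then just a rendering of this structure: nodes are segments, edges are parent links, and the mouseover text is each node’s text field.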
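
Similarly, the variety and balance measures mentioned under issue 13 can be made concrete. The sketch below (reusing the Segment structure and segments dictionary from the previous example) shows one plausible reading, offered as an assumption rather than as the definitions used elsewhere in the ParEvo work: variety as the number of distinct other participants who have extended a given participant’s texts, and balance as the evenness of that distribution, measured here by normalised Shannon entropy.

    import math
    from collections import Counter

    def extension_counts(author: str) -> Counter:
        """For each other participant, count how many times they extended this author's segments."""
        counts: Counter = Counter()
        for seg in segments.values():
            if seg.parent_id is None:
                continue                      # the seed text extends nothing
            parent = segments[seg.parent_id]
            if parent.author == author and seg.author != author:
                counts[seg.author] += 1       # an extension of `author`'s text by someone else
        return counts

    def variety(author: str) -> int:
        """Number of distinct other participants who extended this author's texts."""
        return len(extension_counts(author))

    def balance(author: str) -> float:
        """Evenness of those extensions, as normalised Shannon entropy in [0, 1].

        1.0 means extensions are spread evenly across extenders; values near 0
        mean one extender dominates. Defined as 0 with fewer than two extenders.
        """
        counts = list(extension_counts(author).values())
        total = sum(counts)
        if len(counts) < 2 or total == 0:
            return 0.0
        entropy = -sum((c / total) * math.log(c / total) for c in counts)
        return entropy / math.log(len(counts))

    print(variety("participant-A"), balance("participant-A"))

If measures like these were announced at the start of an exercise, as suggested above, participants could watch their own variety and balance scores evolve, as an incentive to write segments that others want to extend.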

Conference: Evaluation 2008 – Evaluation Policy and Evaluation Practice

Date: November 5 – 8, 2008
Venue: Denver, Colorado

The American Evaluation Association invites evaluators from around the world to attend its annual conference to be held Wednesday, November 5, through Saturday, November 8, 2008 in Denver, Colorado. We will be meeting right in the heart of the city at the Hyatt Regency.

AEA’s annual meeting is expected to bring together approximately 2,500 evaluation practitioners, academics, and students, and represents a unique opportunity to gather with professional colleagues in a supportive, invigorating atmosphere.

The conference is broken down into 41 Topical Strands that examine the field from the vantage point of a particular methodology, context, or issue of interest, as well as the Presidential Strand highlighting this year’s Presidential Theme of Evaluation Policy and Evaluation Practice. Presentations may explore the conference theme or any aspect of the full breadth and depth of evaluation theory and practice.