Workshop: Understanding the effects of development interventions: Theory-based impact evaluation in practice.

A three-day workshop jointly organised by Maastricht University and the University of Antwerp.

Date: April 28-30, 2010
Venue: Institute of Development Policy and Management, Lange Sint Annastraat 7, 2000 Antwerp, Belgium

Focus

In the past few years, development organizations in both North and South have focused on improving evaluations of development interventions. As a result, demand for high-quality impact evaluation has increased. Impact evaluation refers to the growing field of evaluative practices aimed at assessing the effects of a broad range of policy interventions. As well as enhancing the accountability of public spending in development, it has the potential to be an important learning tool, allowing us to understand what works, why, and under what conditions. To help impact evaluation achieve this potential, evaluators and policy makers need to open the ‘black box’ of policy interventions. Attention should be paid not only to the changes caused by an intervention but also to understanding how and why these changes have been brought about. In other words, interventions should be considered as theories, and evaluations are the tools for reconstructing, refining and testing these theories. This is the essence of theory-based evaluation.
The workshop focuses on the concept and application of theory-based impact evaluation in development. It starts with an overview of the key issues in impact evaluation and development effectiveness. Subsequently, the principles of theory-based impact evaluation will be discussed. Particular attention is paid to how a theory of change provides a framework for further inquiry. Starting out from a theory-based perspective, different methodological approaches, from review and synthesis of existing evidence to full-scale empirical inquiry, will be presented and illustrated. These methods and modalities can form the basis for an institutional strategy on impact evaluation and learning, which is the subject of the final part of the workshop.

Workshop: Designing and Building a Results-Based Monitoring and Evaluation System

Date: Sunday 23 May 2010, 09:00 AM to Thursday 27 May 2010, 05:00 PM
Venue: Amman, Jordan

Join us in Amman, Jordan for a five-day training workshop on designing and building a results-based M&E system. The course is organized by the Jordan Education Initiative (JEI) and draws heavily on material developed in conjunction with the International Program for Development Evaluation Training (IPDET) and the Independent Evaluation Group of the World Bank.

Combining attention to M&E theory with practical applications of that theory, the 40-45 participants completing the course will gain knowledge and experience in designing a results-based M&E system, and will come to understand the M&E tools, techniques, and resources needed for planning, organizing, and/or managing programs and projects. The course provides an overview of the theoretical foundations of M&E methods, as well as ample opportunities for practical application of these methods through case studies, course exercises, and group projects. The discussions will be practical and hands-on. Participants will work in teams, discuss problem scenarios, share experiences, and develop M&E tools that they can apply in their own jobs following the course.

FOR MORE DETAILS

Visit http://www.jei.org.jo/#/18

CONTACT DETAILS:

Email: JEI@JEI.ORG.JO

Tel: +962 6 5502360 Fax: +962 6 5502370

Training: Logical Framework Analysis for Programme & Project Planning

Date: 26 March 2010
Venue: Banbury, Oxfordshire, UK.

LFA is a project planning tool used by major international donor organizations such as the World Bank. It allows organizations to define objectives in a simple, rigorous, logical and concise manner. It has the power to communicate complex objectives clearly and understandably on a single sheet of paper, and is a great ‘aid to thinking’ for project planners and stakeholders alike. LFA serves as a powerful tool for identifying inputs, assumptions for success, and indicators for monitoring progress and evaluating performance. This course will be invaluable for anyone who needs to run, or participate in, any LFA process. The course presenter has extensive experience with LFA, having worked with a number of donor agencies (including Australian Aid, DANCED, the EU, and the World Bank) as well as with a range of NGOs in a number of countries.
Course fee: £100.

Course registration form available here.

Please note: this course is held in Banbury, Oxfordshire; however, we can also run this course at your own venue, on a date to suit – please contact us for details.

Knowledge Management in an Organization of the Poor

In “Knowledge Management in an Organization of the Poor” (2009), Aldo Benini and Bhabatosh Nath visit a federation of poor people in Bangladesh that, on its own initiative and unassisted by outsiders, conducted a survey of all extremely poor households in the local government area. The federation then linked this information to a critical resource listing – an inventory of government-owned lands supposed to be allocated to the poor. Its creative re-interpretation of the survey concept and its tactically variable involvement in project-related data collection drives earn the federation the title of knowledge manager. Its significance, in a highly stratified world of development expertise, lies in the demonstrated ability of poor people to map their own complex environment, on their own terms, and for their own betterment.

Outcome Mapping Training in Switzerland – June 28 to July 2, 2010

Date: June 28 to July 2, 2010
Venue: Switzerland

We at Agridea International are happy to announce another learning event and training on Outcome Mapping in Switzerland. From June 28 until July 2, 2010, we will spend five days getting to know the nuts and bolts of planning, monitoring and evaluating with Outcome Mapping.

We will discuss specific issues regarding the facilitation of OM planning events, key challenges and solutions for participative monitoring and sense-making, and we will organise peer-learning sessions for your own cases and realities.

The fusion of other existing approaches, methodologies and guiding principles with Outcome Mapping is another workshop topic.

Do not miss this unique opportunity to learn from the experts – in a practice-oriented way.

For further information or registration, contact Mr. Carsten Schulz
Email: carsten.schulz@agridea.ch

Website: www.agridea-international.ch/training

PAQI How-to-do-it Annex

This is a technical annex to http://mande.co.uk/special-issues/participatory-aggregation-of-qualitative-information-paqi/

Data processing steps

The network diagrams were produced using UCINET (a social network analysis package) and NetDraw (its companion visualisation tool). Very briefly, this involved producing the following files (a small Python sketch of the matrix arithmetic behind these steps follows the list):

  • For relationships between sorted items:
    • Create a .txt file in a specific DL file format, known as PARTITION, as shown in this example
      • This shows five sets of sort results, separated by a # marker. Within each set, each row shows a set of items put into one group by the participant
    • Convert this to the first UCINET file, using these commands: Data>Input text file>Input text file in DL format
    • Aggregate the five sets of data into one items x items matrix, using these commands: Transform>Matrix Operations>Within Dataset>Aggregations>Input dataset: [the new file you created], Sum, Break-out results by: rows and columns
    • Then view the saved file in NetDraw. View with link strength >1, because you want to see the connections created by multiple participants, not just one.
  • For relationships between categories used:
    • Take the original .txt file in PARTITION format and re-structure it as a .txt file in another DL file format, known as EDGELIST2, as shown in this example.
      • N=66 because there are 24 items and 42 categories. A1-4 are the categories used by the first respondent, B1-6 those used by the second, etc. Each row lists the items put in that category
    • Convert this to the second UCINET file, using these commands: Data>Input text file>Input text file in DL format
    • This shows a categories x items matrix
    • This needs to be converted to a one-mode matrix of categories x categories. Use these commands: Data>Affiliations (2-mode to 1-mode)>Input dataset:
    • Then view the saved file in NetDraw. View with link strength >1, because all categories will have at least one shared item with others.
      • PS: You can also use NetDraw to visualise the two-mode categories x items matrix (see the Necheles reference below)
  • For relationships between the respondents:
    • Use these commands: Tools>Similarities (e.g. correlations)>Input Dataset: name of the first UCINET file above, Measure of profile similarity: Correlation, Compute similarities amongst: Columns.
    • You then have a matrix of correlation values ranging from 0 to 1. To make these easier to discriminate when using NetDraw, it is best to multiply them by 100. Use these commands: Transform>Matrix Operations>Within Dataset>Cellwise Transformations>Multiply by constant
    • Then view the saved file in NetDraw. Focus on relationships with above-average strength (because all participants will have some similarities in their classifications)
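
Not part of the original workflow above (which works entirely through UCINET's menus), but for readers who want to see the arithmetic: the following is a minimal Python/numpy sketch of what the three sets of steps compute. The item labels, the sorts, and the reading of the similarities step as a correlation of co-membership profiles are illustrative assumptions, not the Indonesian data.

```python
# A sketch of the matrix operations performed through the UCINET menu
# commands above: (1) aggregating sort results into an items x items
# co-membership matrix, (2) converting a 2-mode categories x items matrix
# to 1-mode, and (3) correlating respondents' sorting profiles.
# All item labels and sorts are hypothetical placeholders.
import numpy as np

items = ["item1", "item2", "item3", "item4"]
idx = {name: i for i, name in enumerate(items)}

# Each respondent's sort: a list of groups, each group a list of items.
sorts = [
    [["item1", "item2"], ["item3", "item4"]],   # respondent A
    [["item1", "item2", "item3"], ["item4"]],   # respondent B
]

def comembership(sort):
    """Matrix with 1 where two items were put in the same group, else 0."""
    m = np.zeros((len(items), len(items)), dtype=int)
    for group in sort:
        for a in group:
            for b in group:
                if a != b:
                    m[idx[a], idx[b]] = 1
    return m

# Step 1: the Aggregations > Sum step. Cells > 1 are pairings made by
# more than one respondent -- the "link strength > 1" view in NetDraw.
aggregated = sum(comembership(s) for s in sorts)

# Step 2: build the 2-mode categories x items incidence matrix (one row
# per group any respondent used). The Affiliations (2-mode to 1-mode)
# step amounts to M @ M.T, which counts the items two categories share.
rows = []
for sort in sorts:
    for group in sort:
        row = np.zeros(len(items), dtype=int)
        for item in group:
            row[idx[item]] = 1
        rows.append(row)
M = np.array(rows)
categories_by_categories = M @ M.T

# Step 3: one reading of the Tools > Similarities step -- correlate the
# respondents' co-membership profiles, then multiply by 100 (the Cellwise
# Transformations step) so link strengths are easier to tell apart.
profiles = np.array([comembership(s).ravel() for s in sorts])
similarity = np.corrcoef(profiles) * 100

print(aggregated, categories_by_categories, similarity, sep="\n\n")
```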

PS: I have set up a separate posting on the merits of different kinds of social network analysis software, including UCINET and NetDraw.

Adding qualitative “flesh” to the quantitative “bones”

The network diagrams are the structure: they are the results of all the sorting activities by all the participants. But in the process of sorting the items, each participant also added qualitative information, in the form of descriptions of the categories they created. In the Indonesian example, 33 category descriptions were provided by the 5 participants. This next section describes how that qualitative information can be made accessible as people explore the individual nodes and links in the network diagrams. This information will be in the form of node and link attributes.

With the Indonesian data I listed the members of each grouping of items in a row, and then in an adjacent column I entered the text description of that group given by the participant. When all the groupings of one respondent were entered, I started with the next respondent’s groupings on the rows below.

The challenge is then to collate all the text descriptions that apply to a given item, and to do that for all items, in a way that is not manually time-consuming. To do this I set up a list of items (in rows), and in adjacent columns I set up a logic function that in effect searched for the relevant text (a rough Python equivalent is sketched below). A copy of the Excel sheet will be attached here.
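
As an illustration only, here is a rough Python equivalent of that Excel lookup; the items and category descriptions below are hypothetical placeholders, not the Indonesian data.

```python
# Collate, for each item, every category description whose group contains
# that item -- the same job as the Excel logic function described above.
# Group memberships and descriptions are hypothetical placeholders.
groups = [
    (["item1", "item2"], "changes in household income"),
    (["item3", "item4"], "changes in health services"),
    (["item1", "item2", "item3"], "positive changes"),
]

descriptions = {}
for members, text in groups:
    for item in members:
        descriptions.setdefault(item, []).append(text)

for item, texts in sorted(descriptions.items()):
    print(f"{item}: {'; '.join(texts)}")
```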

This data then needs to be put into an attribute .txt format (example here) and then imported into NetDraw as an attribute file, while already viewing the item x item network (File>Open>VNA text file>Attributes). Then any node can be double right-clicked to view its attributes, including all the descriptions given to it by the participants (see example). Bear in mind these are descriptions of the categories the item belongs to, not of that specific item alone.
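
For completeness, a minimal sketch of producing such an attribute file programmatically. The *node data layout used here is my reading of NetDraw's VNA text format, not taken from the linked example; check that example and the NetDraw documentation for the exact layout before relying on it.

```python
# Write collated item descriptions as a NetDraw attribute file.
# The VNA "*node data" layout below is an assumption -- verify it against
# the NetDraw documentation. Descriptions are hypothetical placeholders.
descriptions = {
    "item1": "changes in household income; positive changes",
    "item2": "changes in household income; positive changes",
}

with open("attributes.vna", "w", encoding="utf-8") as f:
    f.write("*node data\n")
    f.write("ID descriptions\n")
    for item, text in sorted(descriptions.items()):
        f.write(f'{item} "{text}"\n')
```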

New INTRAC publications on M&E

Tracking Progress in Advocacy – Why and How to Monitor and Evaluate Advocacy Projects and Programmes looks at the scope of, and rationale for, engaging in advocacy work as part of development interventions, then focuses on the monitoring and evaluation of these efforts – offering reasons why and when these processes should be planned and implemented, what is involved, and who should be engaged in the process. By Janice Griffen, December 2009.

The Challenges of Monitoring and Evaluating Programmes offers some clarity on the different uses of the term ‘programme’, and uses the different types of programme to demonstrate the issues that arise for M&E. By Janice Griffen, December 2009.

Against Transparency: The perils of openness in government.

As a keen advocate of greater transparency by aid agencies and programmes, I found this article of particular interest.

See it on the New Republic website, published on October 9, 2009.

The author, Lawrence Lessig, “is professor of law and director of the Edmond J. Safra Center for Ethics at Harvard Law School, and the author most recently of Remix: Making Art and Commerce Thrive in the Hybrid Economy (Penguin). He is on the advisory board of the Sunlight Foundation and on the board of Maplight.org.”

The Use of Social Network Analysis Tools in the Evaluation of Social Change Communications

by Rick Davies (April 2009).

This paper was produced for the Communication for Social Change Consortium, as a contribution to their paper for UNAIDS on reviewing approaches to monitoring and evaluation and advocating an expanded monitoring and evaluation framework for social change communication. All rights to this paper are with the Communication for Social Change Consortium (www.cfsc.org).

Contents

1. Background
2. What is Social Network Analysis? A brief introduction
3. The use of SNA in the study of HIV/AIDS
4. The use of SNA in the evaluation of HIV/AIDS interventions
5. How could SNA be useful in the evaluation of HIV/AIDS programs?
5.1. Within organisations: Moving from Logical to Social Frameworks
5.2. Within organisations: Moving beyond linear models
5.2.1. Mapping and modeling
5.2.2. Looking inside and outside the network
5.2.3. Matrix versus network models
5.3. Amongst multiple organisations: Where there is no central planner
6. The uses of theory
7. Scalability
8. Limitations
9. Opportunities
10. An Afterword
References

Metaevaluation revisited, by Michael Scriven

An Editorial in Journal of MultiDisciplinary Evaluation, Volume 5, Number 11, January 2009

In this short and readable paper, Michael Scriven addresses “three categories of issues that arise about meta-evaluation: (i) exactly what is it; (ii) how is it justified; (iii) when and how should it be used? In the following, I say something about all three—definition, justification, and application.” He then makes seven main points, each of which he elaborates on in some detail:

  1. Meta-evaluation is the consultant’s version of peer review.
  2. Meta-evaluation is the proof that evaluators believe what they say.
  3. In meta-evaluation, as in all evaluation, check the pulse before trimming the nails.
  4. A partial meta-evaluation is better than none.
  5. Make the most of meta-evaluation.
  6. Any systematic approach to evaluation—in other words, almost any kind of professional evaluation—automatically provides a systematic basis for meta-evaluation.
  7. Fundamentally, meta-evaluation, like evaluation, is simply an extension of common sense—and that’s the first defense to use against the suggestion that it’s some kind of fancy academic embellishment.