“Evaluation and Inequality: Moving Beyond the Discussion of Poverty” International Development Evaluation Association Global Assembly

Bridgetown, Barbados (May 6-9, 2013)


Introduction:

The Board of the International Development Evaluation Association (IDEAS) is pleased to announce its next Global Assembly on May 7-9, 2013 in Bridgetown, Barbados, preceded by professional development training sessions on May 6. The theme of the Assembly will be the relation between evaluation and inequality and their influence on development. This theme underscores the role that evaluative knowledge can play in development in general, and more particularly in addressing the factors that generate and perpetuate poverty.

Assembly Agenda and Call for Paper/Panel Proposals:

The Assembly will be organized into a number of substantive strands, listed below. Potential presenters are invited to propose a paper or panel in one or more of these strands. General paper proposals on evaluation topics outside the theme strands are also invited. We especially invite papers that are grounded in development experience.

Strand One: Understanding Inequality and its relation to the causes and consequences of poverty

Strand Two: Effective program strategies to address inequality—findings from evaluation

Strand Three: Regional responses/regional strategies to address inequality

Strand Four: The measurement and assessment of inequality

Strand Five: General Paper Sessions—all other papers/panels being proposed on any evaluation topic

All paper/panel proposals should be sent by January 10, 2013 to: Ray C. Rist, President of IDEAS, at the following e-mail address: rayrist11@gmail.com

Proposal Guidelines:

1) Each paper or panel proposal can be no more than 250 words in total. The proposal should include the title, name(s) of participants, affiliation of participants, and a brief description of the subject of the paper/panel.

2) The deadline for submission of all proposals is January 10, 2013.

3) Consideration of any proposal after January 10 is at the full discretion of the chair.

4) Decisions on all proposals will be made within two weeks and presenters will be informed immediately.

 

Scholarships: A small number of scholarships will be available to ensure global representation of development evaluators at this Assembly. First priority for scholarships will go to current IDEAS Members who present a paper/panel or are actively involved in the Assembly as a panel chair or discussant.

NOTE: Anyone who wishes to present at this Assembly must be a current member of IDEAS.

Predicting the achievements of the Katine project

September 2010: This post describes a revised proposal for a “Predictions Survey” on the achievements of the Katine Community Partnerships Project, a project managed by AMREF and funded by the Guardian and Barclays Bank between 2007 and 2011.

Background Assumptions

The Guardian's coverage of the Katine project has provided an unparalleled level of public transparency into the workings of an aid project. As of August 2010 there have been approximately 530 articles posted on the site, most of which have been specifically about Katine. These posts have included copies of project documentation (plans, budgets, progress reports, review reports) of a kind that often doesn't enter the public realm.

Ideally this level of transparency would have two benefits: (a) improving UK public knowledge about the challenges of providing effective aid, and (b) imposing some constructive discipline on the work of the NGO concerned, because it knows it is under continuing scrutiny, not only locally but internationally. Whether this has actually been the case is yet to be systematically assessed. However, I understand the effects on the project and its local stakeholders (i.e. (b) above) will be subject to review by Ben Jones later this year, and then open to discussion at a one-day event in November, to be organised by the Guardian.

So far there have been two kinds of opportunities for the British, and other, publics to engage with the public monitoring of the Katine project. One has been posting comments on the articles on the Guardian website: about 30% of all articles have provided this opportunity, and these articles have attracted an average of 5 comments each. The other has been making a guest posting on the website, by invitation from the Guardian; this invitation has been extended to specialists in the UK and elsewhere. Multiple efforts have also been made to hear different voices from within the Katine community itself.

The Predictions Survey would provide another kind of opportunity for participation. It would be an opportunity for a wide range of participants to:

  • make some judgments about the overall achievements of the project,
  • explain those judgments,
  • see how those judgments compare with those of others, and
  • see how those judgments compare with the facts about what has actually been achieved at the end of the project.

In addition, a Predictions Survey would provide a means of testing the expectation that greater transparency can improve public knowledge about the challenges of providing effective aid.

My proposal is that the Predictions Survey would consist of five batches of questions, one for each project component, each on a separate page. Each question would be multiple choice, with an optional Comment field. People could respond on the basis of their existing knowledge of the project (which could vary widely) and/or extra information from the website, obtained via component-specific links embedded at the head of each page of the online survey (e.g. on water and sanitation). Questions at the end of the survey would identify participants' sources of knowledge about the project (e.g. obtained before and during the survey, from the website and elsewhere).
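As a minimal sketch of this structure — five component batches, each holding multiple-choice questions with an optional comment field and a component-specific link at the head of the page — something like the following could be used. The component names, link URLs and question text here are illustrative placeholders, not AMREF's actual wording:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    options: List[str]    # mutually exclusive statements about what might happen
    comment: str = ""     # optional explanation from the respondent

@dataclass
class ComponentBatch:
    component: str
    info_link: str        # component-specific link shown at the head of the page
    questions: List[Question] = field(default_factory=list)

# Hypothetical component names; the real list would come from AMREF's
# project documents and M&E framework.
components = ["water and sanitation", "health", "education",
              "livelihoods", "governance"]

survey = [ComponentBatch(c, "https://example.org/katine/" + c.replace(" ", "-"))
          for c in components]

# One placeholder question to show the shape of a batch.
survey[0].questions.append(Question(
    text="How many new water sources will be functioning at project end?",
    options=["fewer than expected", "roughly as planned", "more than planned"]))
```

The single-choice-plus-comment shape matters: the choice makes responses comparable across groups, while the comment captures the reasoning behind each judgment.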

A first rough draft survey form is already available to view. Any responses entered at this stage may be noted, but they will then be deleted and not included in any final analysis. The final design of the survey will require close consultation with AMREF and the Guardian.

Intended participants in the survey

  • UK public, reached via the Guardian
  • Ugandan public, reached via Ugandan newspapers (likely to be more of a challenge)
  • AMREF staff, especially in Uganda, Kenya HQ and UK
  • The Guardian and Barclays, as donors
  • Monitoring and Evaluation specialists, reached via an international email list

Hypotheses (predictions about the predictions)

  1. We might expect that AMREF would be able to make the most accurate predictions, given its central role. But aid agencies are often tempted to put a gloss on their achievements, because of the gap that sometimes emerges between their ambitions and what can actually be done in practice.
  2. We might expect that participants who have been following the Guardian coverage closely since the beginning might be better informed and make better predictions than others who have become interested more recently. But perhaps those participants are still responding on the basis of their original beliefs (aka biases)?
  3. We might expect M&E specialists to make better than average predictions because of their experience in analysing project performance. But perhaps they have become too skeptical about everything they read.
  4. We might expect the Guardian and Barclays staff to make better than average predictions because they have been following the project closely since inception and their organisation’s money is  invested in it. But perhaps they only want to see success.
  5. We might expect the highest-frequency choices (across all groups) to be more accurate than the choices of any of the above groups, because of a “wisdom of crowds” effect. The potential of crowdsourcing was of interest to the Guardian at the beginning of the project, and this survey could be seen as a form of crowdsourcing – of judgements.
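Once actual outcomes are known, hypotheses of this kind could be checked mechanically: score each group's choices against the outcomes, then compare those scores with the accuracy of the modal (“crowd”) choice across groups. A rough sketch, using invented groups, choices and outcomes purely for illustration:

```python
from collections import Counter

# Hypothetical data: each group's chosen option, one letter per question.
responses = {
    "AMREF":     ["B", "A", "C", "B"],
    "M&E":       ["B", "C", "C", "A"],
    "Guardian":  ["A", "A", "C", "B"],
    "UK public": ["B", "A", "B", "B"],
}
actual = ["B", "A", "C", "A"]   # outcomes as established at project end

def accuracy(choices, actual):
    """Fraction of questions where the chosen option matched the outcome."""
    return sum(c == a for c, a in zip(choices, actual)) / len(actual)

# Per-group accuracy: the kind of comparison hypotheses 1-4 call for.
scores = {group: accuracy(ch, actual) for group, ch in responses.items()}

# Hypothesis 5: take the modal choice across groups for each question.
crowd = [Counter(ch[i] for ch in responses.values()).most_common(1)[0][0]
         for i in range(len(actual))]
crowd_score = accuracy(crowd, actual)
```

In the real survey the unit would be the individual respondent (tagged by group), and date-stamps would let late responses be analysed separately; the scoring logic would be the same.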

This list is not final. Other hypotheses could be identified in the process of consultation over the design of the survey.

There may also be other, less testable, predictions worth identifying: for example, about the effects of the Predictions Survey itself on the work done by AMREF and its partners in the final year up to October 2011. Might it lead to a focus on what is being measured by the survey, to the detriment of other important aspects of their work? If AMREF has a comprehensive monitoring framework and the Predictions Survey addresses the same breadth of performance (and not just one or two performance indicators), this should not be a problem.

Timeframe

The fourth and final year of the project starts in October 2010 and ends in October 2011.

The finalisation of the design of the Predictions Survey will require extensive consultation with AMREF and the Guardian, in order to ensure the fullest possible ownership of the process, and thus of the results that are generated. Ideally this process would be completed by late October 2010.

The survey could be open from late October to the end of March 2011 (six months before the end of the project). All responses would be date-stamped to take account of any advantages of being a later participant.

A process will need to be agreed in 2010 on how objective information can be obtained on which of the multiple choice options have eventuated by October 2011.

A post 2011 follow up survey may be worth considering. This would focus on predictions of what will happen in the post-project period, up to 2014, the year of the vision statement produced by participants in the September 2009 stakeholders workshop in Katine.

“In 2014, Katine will be an active, empowered community taking responsibility for their development with decent health, education, food security and able to sustain it with the local government”

Supporters

The participation of the Guardian and AMREF will be very important, although it is conceivable that the survey could be run independently of their cooperation.

Assistance with publicity, to find participants, would be needed from the Guardian and Barclays.

Advisory support is being sought from the One World Trust.

Advisory support from other organisations could also be useful.

The online survey could be designed and managed by Rick Davies. However, responsibility could be given to another party agreed to by AMREF, the Guardian and Barclays.

Challenges

  • The survey needs to be short enough to encourage people to complete it, but not so short that important aspects of the project's performance are left out.
  • The descriptions of the objectives used in the survey need to be as clear and specific as possible, while keeping as close to AMREF's original words as possible (i.e. as in the 4th year extension proposal, and using the M&E framework, now being updated).
  • Participants will be asked to make a single choice between multiple options describing what might happen. These options will need to be carefully chosen, so that there are no obvious “no brainers” and so that they cover a range of plausible possibilities.
  • It may be necessary in some cases (e.g. with some broadly defined objectives) to allow multiple choices from multiple options.
  • I have heard that AMREF will be conducting a final evaluation in late 2011, using an external consultant. This evaluation could be the source of the final set of data on actual performance, against which participants' predictions could be compared. But will it be seen as a sufficiently independent source of information?

A digression on complexity and networks…

…a side argument from the Rick on the Road post: Cynefin Framework versus Stacey Matrix versus network perspectives.

In that post I said:

PS1: Michael Quinn Patton's book on Developmental Evaluation has a whole chapter on “Distinguishing Simple, Complicated, and Complex”. However, I was surprised to find that, despite the book's focus on complexity, there was not a single reference in the Index to “networks”. There was one example of a network model (Exhibit 5.3), contrasted with a Linear Program Logic Model (Exhibit 5.2), in the chapter on Systems Thinking and Complexity Concepts. [I will elaborate further]

One interpretation: complexity arises through the interaction of many agents having some degree of autonomy. With no autonomy there is simple order (complete predictability); with complete autonomy there is chaos (no predictability). How do we define autonomy? One view: autonomy = the number of possible relationships an actor can have with others. When realised, this can be measured in terms of network density (a Social Network Analysis (SNA) measure). Two caricature examples of the extremes: 1. An army, with a hierarchical chain of command, is highly ordered. Here the network structure is sparse (i.e. a tree structure) and low in density. 2. “Economic man”, who is free to interact with anyone in order to maximise his/her utility. Here all possible relationships can be realised, as everyone interacts with everyone. Complexity is the territory in between, where actors have some degree of choice over who they interact with, and where there is some degree of predictability. When realised, those choices can also be described in terms of different kinds of network structures. So if we want to explore complex systems we need to look at the structure of networks of actors, both as “initial conditions” affecting what happens next and as “final states” reflecting what has happened over a given period of time. I.e. an empirical approach, not mysticism :-)
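The two caricatures can be made concrete with a small density calculation. This is only a sketch; density here is the standard SNA measure, edges present divided by edges possible, for an undirected network:

```python
# Network density as a crude proxy for autonomy, as argued above: an
# army-like hierarchy is sparse, while "economic man" (everyone free to
# interact with everyone) is maximally dense.

def density(n_nodes, edges):
    """Edges present / edges possible, undirected, no self-loops."""
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible

n = 7

# 1. Hierarchy: a chain of command forms a tree, which has only n-1 edges.
tree = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]

# 2. "Economic man": the complete graph, with every pair connected.
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]

d_tree = density(n, tree)          # 6/21, about 0.29: sparse, ordered
d_complete = density(n, complete)  # 21/21 = 1.0: maximal density
```

Real networks of interest sit between these two values, which is exactly where the argument above locates complexity.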

PS: The concept of autonomy could probably be further differentiated, in terms of relationship choices, as follows: (a) the range of relationships available to an actor, already discussed above; (b) the freedom to choose amongst those that are available; (c) the range of behaviours available within a given relationship. But how do you measure freedom (b)? One measure might be the degree to which any choices made are uncorrelated with other events. The diversity of choices made could also be important. Diversity suggests freedom from constraint (more on this theme here).
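One possible way to quantify the diversity of choices — my suggestion, not part of the original argument — is the Shannon entropy of the observed choice distribution: zero bits when an actor always makes the same choice (maximal constraint), rising as choices spread evenly over the available options:

```python
from collections import Counter
from math import log2

def choice_entropy(choices):
    """Shannon entropy (in bits) of a sequence of observed choices.
    Higher entropy means more diverse choices, suggesting weaker constraint."""
    counts = Counter(choices)
    total = len(choices)
    return sum(-(c / total) * log2(c / total) for c in counts.values())

choice_entropy(["A"] * 10)                # always the same choice: 0 bits
choice_entropy(["A", "B", "C", "D"] * 5)  # even spread over 4 options: 2 bits
```

This captures diversity (the second point above) but not the correlation-with-other-events measure of freedom, which would need paired observations of choices and external events.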