Evaluating simple interventions that turn out to be not so simple

Conditional Cash Transfer (CCT) programs have often been cited as examples of projects that are suitable for testing via randomised controlled trials (RCTs). They are relatively simple interventions that can be delivered in a standardised manner. Or so it seemed.

Last year Lant Pritchett, Salimah Samji and Jeffrey Hammer wrote this interesting (if at times difficult to read) paper “It’s All About MeE: Using Structured Experiential Learning (‘e’) to Crawl the Design Space” (the abstract is reproduced below). In the course of that paper they argued that CCT programs are not as simple as they might seem. Looking at three real-life examples, they identified at least 10 different characteristics of CCTs that need to be specified correctly in order for them to work as expected. Some of these involve binary choices (whether to do x or y) and some involve tuning a numerical variable. This means there were at least 2 to the power of 10, i.e. 1,024, different possible designs. They also pointed out that while changes to some of these characteristics make only a small difference to the results achieved, others, including some binary choices, can make quite major differences. In other words, overall it may well be a rugged rather than a smooth design space. The question then arises: how well are RCTs suited to exploring such spaces?
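
As a rough, hedged illustration of why such a design space balloons and why ruggedness matters, here is a minimal sketch in Python. The ten binary choices and the toy “smooth” and “rugged” outcome functions are illustrative assumptions of mine, not anything specified in the paper:

```python
# A minimal sketch (not from the Pritchett et al. paper) of why a CCT-style
# design space grows quickly and why "ruggedness" matters.
# All numbers and the toy outcome functions below are illustrative assumptions.

from itertools import product

NUM_CHOICES = 10  # e.g. who receives the cash, which condition is monitored, etc.

# Every combination of 10 binary design choices: 2**10 = 1024 candidate designs.
designs = list(product([0, 1], repeat=NUM_CHOICES))
print(len(designs))  # -> 1024

def smooth_outcome(design):
    # Smooth landscape: each choice adds a small, independent amount,
    # so neighbouring designs achieve similar results.
    return sum(0.1 * x for x in design)

def rugged_outcome(design):
    # Rugged landscape: the first two choices interact strongly,
    # so flipping one binary switch can change the result dramatically.
    base = sum(0.1 * x for x in design)
    return base + (2.0 if design[0] == 1 and design[1] == 0 else 0.0)

# Flipping a single choice barely matters on the smooth landscape...
a, b = designs[0], designs[1]
print(abs(smooth_outcome(a) - smooth_outcome(b)))   # small difference (0.1)

# ...but can matter a lot on the rugged one, which is why evidence about one
# evaluated design may say little about its close neighbours.
c = (1, 0) + (0,) * 8
d = (1, 1) + (0,) * 8
print(abs(rugged_outcome(c) - rugged_outcome(d)))   # large difference (1.9)
```

On a smooth landscape, evidence from an RCT of one specific design transfers reasonably well to nearby designs; on a rugged one it may tell you little about the other 1,023.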

Today the World Bank Development Blog posted an interesting confirmation of the point made in the Pritchett et al. paper, in a posting titled: Defining Conditional Cash Transfer Programs: An Unconditional Mess. In effect they point out that the design space is even more complicated than Pritchett et al. describe! They conclude:

So, if you’re a donor or a policymaker, it is important not to frame your question to be about the relative effectiveness of “conditional” vs. “unconditional” cash transfer programs: the line between these concepts is too blurry. It turns out that your question needs to be much more precise than that. It is better to define the feasible range of options available to you first (politically, ethically, etc.), and then go after evidence of relative effectiveness of design options along the continuum from a pure UCT to a heavy-handed CCT. Alas, that evidence is the subject of another post…

So stay tuned for their next installment. Of course you could quibble that even this conclusion is a bit optimistic, in that it talks about a continuum of design options, when in fact it is a multi-dimensional space with both smooth and rugged bits.

PS: Here is the abstract of the Pritchett et al. paper:

“There is an inherent tension between implementing organizations—which have specific objectives and narrow missions and mandates—and executive organizations—which provide resources to multiple implementing organizations. Ministries of finance/planning/budgeting allocate across ministries and projects/programmes within ministries, development organizations allocate across sectors (and countries), foundations or philanthropies allocate across programmes/grantees. Implementing organizations typically try to do the best they can with the funds they have and attract more resources, while executive organizations have to decide what and who to fund. Monitoring and Evaluation (M&E) has always been an element of the accountability of implementing organizations to their funders. There has been a recent trend towards much greater rigor in evaluations to isolate causal impacts of projects and programmes and more ‘evidence base’ approaches to accountability and budget allocations. Here we extend the basic idea of rigorous impact evaluation—the use of a valid counter-factual to make judgments about causality—to emphasize that the techniques of impact evaluation can be directly useful to implementing organizations (as opposed to impact evaluation being seen by implementing organizations as only an external threat to their funding). We introduce structured experiential learning (which we add to M&E to get MeE) which allows implementing agencies to actively and rigorously search across alternative project designs using the monitoring data that provides real time performance information with direct feedback into the decision loops of project design and implementation. Our argument is that within-project variations in design can serve as their own counter-factual and this dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies. The right combination of M, e, and E provides the right space for innovation and organizational capability building while at the same time providing accountability and an evidence base for funding agencies.” Paper available as pdf.

I especially like this point about within-project variation (which I have argued for in the past): “Our argument is that within-project variations in design can serve as their own counter-factual and this dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies.”
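
To make the quoted idea a bit more concrete, here is another small hedged sketch in Python. The sites, the allocation of design variants, and the outcome figures are invented purely for illustration; a real comparison would also need careful attention to how sites were assigned to variants:

```python
# A minimal sketch of the within-project comparison idea quoted above.
# The site names, the assignment of variants, and the outcome figures are
# invented for illustration; they are not data from the paper.

from statistics import mean

# Suppose one project rolls out two design variants in different sites and
# its routine monitoring records the same indicator (e.g. attendance rate).
variant_a_sites = {"site_1": 0.72, "site_2": 0.68, "site_3": 0.75}
variant_b_sites = {"site_4": 0.81, "site_5": 0.79, "site_6": 0.84}

# Each variant acts as the comparison for the other, so no separate
# "no-intervention" control group has to be recruited and surveyed.
mean_a = mean(variant_a_sites.values())
mean_b = mean(variant_b_sites.values())
print(f"Variant A mean: {mean_a:.2f}")
print(f"Variant B mean: {mean_b:.2f}")
print(f"Difference (B - A): {mean_b - mean_a:.2f}")
```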

 

US Govt Executive Order — Making Open and Machine Readable the New Default for Government Information

(from The White House,  Office of the Press Secretary, For Immediate Release, May 09, 2013)

Executive Order — Making Open and Machine Readable the New Default for Government Information

EXECUTIVE ORDER

– – – – – – –

MAKING OPEN AND MACHINE READABLE THE NEW DEFAULT
FOR GOVERNMENT INFORMATION

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

Section 1. General Principles. Openness in government strengthens our democracy, promotes the delivery of efficient and effective services to the public, and contributes to economic growth. As one vital benefit of open government, making information resources easy to find, accessible, and usable can fuel entrepreneurship, innovation, and scientific discovery that improves Americans’ lives and contributes significantly to job creation.

Decades ago, the U.S. Government made both weather data and the Global Positioning System freely available. Since that time, American entrepreneurs and innovators have utilized these resources to create navigation systems, weather newscasts and warning systems, location-based applications, precision farming tools, and much more, improving Americans’ lives in countless ways and leading to economic growth and job creation. In recent years, thousands of Government data resources across fields such as health and medicine, education, energy, public safety, global development, and finance have been posted in machine-readable form for free public use on Data.gov. Entrepreneurs and innovators have continued to develop a vast range of useful new products and businesses using these public information resources, creating good jobs in the process.

To promote continued job growth, Government efficiency, and the social good that can be gained from opening Government data to the public, the default state of new and modernized Government information resources shall be open and machine readable. Government information shall be managed as an asset throughout its life cycle to promote interoperability and openness, and, wherever possible and legally permissible, to ensure that data are released to the public in ways that make the data easy to find, accessible, and usable. In making this the new default state, executive departments and agencies (agencies) shall ensure that they safeguard individual privacy, confidentiality, and national security.

Sec. 2. Open Data Policy. (a) The Director of the Office of Management and Budget (OMB), in consultation with the Chief Information Officer (CIO), Chief Technology Officer (CTO), and Administrator of the Office of Information and Regulatory Affairs (OIRA), shall issue an Open Data Policy to advance the management of Government information as an asset, consistent with my memorandum of January 21, 2009 (Transparency and Open Government), OMB Memorandum M-10-06 (Open Government Directive), OMB and National Archives and Records Administration Memorandum M-12-18 (Managing Government Records Directive), the Office of Science and Technology Policy Memorandum of February 22, 2013 (Increasing Access to the Results of Federally Funded Scientific Research), and the CIO’s strategy entitled “Digital Government: Building a 21st Century Platform to Better Serve the American People.” The Open Data Policy shall be updated as needed.

(b) Agencies shall implement the requirements of the Open Data Policy and shall adhere to the deadlines for specific actions specified therein. When implementing the Open Data Policy, agencies shall incorporate a full analysis of privacy, confidentiality, and security risks into each stage of the information lifecycle to identify information that should not be released. These review processes should be overseen by the senior agency official for privacy. It is vital that agencies not release information if doing so would violate any law or policy, or jeopardize privacy, confidentiality, or national security.

Sec. 3. Implementation of the Open Data Policy. To facilitate effective Government-wide implementation of the Open Data Policy, I direct the following:

(a) Within 30 days of the issuance of the Open Data Policy, the CIO and CTO shall publish an open online repository of tools and best practices to assist agencies in integrating the Open Data Policy into their operations in furtherance of their missions. The CIO and CTO shall regularly update this online repository as needed to ensure it remains a resource to facilitate the adoption of open data practices.

(b) Within 90 days of the issuance of the Open Data Policy, the Administrator for Federal Procurement Policy, Controller of the Office of Federal Financial Management, CIO, and Administrator of OIRA shall work with the Chief Acquisition Officers Council, Chief Financial Officers Council, Chief Information Officers Council, and Federal Records Council to identify and initiate implementation of measures to support the integration of the Open Data Policy requirements into Federal acquisition and grant-making processes. Such efforts may include developing sample requirements language, grant and contract language, and workforce tools for agency acquisition, grant, and information management and technology professionals.

(c) Within 90 days of the date of this order, the Chief Performance Officer (CPO) shall work with the President’s Management Council to establish a Cross-Agency Priority (CAP) Goal to track implementation of the Open Data Policy. The CPO shall work with agencies to set incremental performance goals, ensuring they have metrics and milestones in place to monitor advancement toward the CAP Goal. Progress on these goals shall be analyzed and reviewed by agency leadership, pursuant to the GPRA Modernization Act of 2010 (Public Law 111-352).

(d) Within 180 days of the date of this order, agencies shall report progress on the implementation of the CAP Goal to the CPO. Thereafter, agencies shall report progress quarterly, and as appropriate.

Sec. 4. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:
(i) the authority granted by law to an executive department, agency, or the head thereof; or

(ii) the functions of the Director of OMB relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

(d) Nothing in this order shall compel or authorize the disclosure of privileged information, law enforcement information, national security information, personal information, or information the disclosure of which is prohibited by law.

(e) Independent agencies are requested to adhere to this order.

BARACK OBAMA

 

Webinar series on evaluation: The beginnings of a list

To be extended and updated, with your help!

  • American Evaluation Association: Coffee Break Demonstrations are 20 minute long webinars designed to introduce audience members to new tools, techniques, and strategies in the field of evaluation.
  • INTERACTION: Impact Evaluation Guidance Note and Webinar Series: 8 webinars covering Introduction to Impact Evaluation, Linking Monitoring and Evaluation to Impact Evaluation, Introduction to Mixed Methods in Impact Evaluation, Use of Impact Evaluation Results
  • Measure Evaluation webinars: 20 webinars since Jan 2012
  • Claremont Evaluation Center Webinar Series  “The Claremont Evaluation Center is pleased to offer a series of webinars on the discipline and profession of evaluation.  This series is free and available to anyone across the globe with an internet connection.”
  • MY M&E website: Webinars on Equity-focused evaluations (17 webinars), IOCE webinar series on evaluation associations, Emerging practices in development evaluation (6 webinars), Developing capacities for country M&E systems (16 webinars), Country-led M&E Systems (6 webinars)

Plus some guidance on developing and evaluating webinars

ICAI Seeks Views on Revised Evaluation Framework

 

“In our first report, ICAI’s Approach to Effectiveness and Value for Money, we set out an evaluation framework, consisting of 22 questions under 4 guiding criteria (objectives, delivery, impact and learning), to guide our lines of enquiry in reviews. In the light of our experience to date in carrying out our reports, we have reviewed this framework. The revised framework is available at this link: ICAI revised evaluation framework

We are now entering a period of consultation on the revised framework which will run until 24 May 2013. If you have any comments or views, please email enquiries@icai.independent.gov.uk  or post them to: The Secretariat, Independent Commission for Aid Impact, Dover House, 66 Whitehall, London SW1A 2AU”

Comic book Theories of Change?

Inspired by visitors’ positive responses to the imaginative use of flow charts, I have wondered how else Theories of Change could be described. The following thought came to me early this morning!

(with apologies to South Park)

See 6 Free Sites for Creating Your Own Comics, at Mashable, for links to stripgenerator and others

AN OFFER: I will give a £50 donation to Oxfam UK to the person who can come up with the best comic strip description of the Theory of Change of a real development project. Post your entry using Comment below, with a link to where the comic is and a link to where we can find a factual description of the project it represents. Your comic strip version can be as humorous (slapstick, farce, wit, irony, sarcasm, parody, gallows, juvenile, or…) or as serious as you like. It can be as long as you like and it does not need to be a simple sequence of panels; it could get way more complicated!

I will try to set up an opinion poll so visitors can vote for the ones they like the most. The winning entry will definitely be posted as an item here on MandE NEWS and be publicised via Twitter. The deadline: May 31st might do. One proviso: nothing obscene or libelous.

AEA resources on Social Network Analysis and Evaluation

American Evaluation Association (AEA) Social Network Analysis (SNA) Topical Interest Group (TIG) resources

AEA365 | A Tip-a-Day by and for Evaluators

Who Counts? The power of participatory statistics

Edited by Jeremy Holland, published by Practical Action, 2013

(from the Practical Action website) “Local people can generate their own numbers – and the statistics that result are powerful for themselves and can influence policy. Since the early 1990s there has been a quiet tide of innovation in generating statistics using participatory methods. Development practitioners are supporting and facilitating participatory statistics from community-level planning right up to sector and national-level policy processes. Statistics are being generated in the design, monitoring and evaluation, and impact assessment of development interventions. Through chapters describing policy, programme and project research, Who Counts? provides impetus for a step change in the adoption and mainstreaming of participatory statistics within international development practice. The challenge laid down is to foster institutional change on the back of the methodological breakthroughs and philosophical commitment described in this book. The prize is a win–win outcome in which statistics are a part of an empowering process for local people and part of a real-time information flow for those aid agencies and government departments willing to generate statistics in new ways. Essential reading for researchers and students of international development as well as policy-makers, managers and practitioners in development agencies.”
Table of Contents
1. Introduction. Participatory statistics: a ‘win–win’ for international development, Jeremy Holland
PART I: Participatory statistics and policy change
2. Participatory 3-dimensional modelling for policy and planning: the practice and the potential, Giacomo Rambaldi
3. Measuring urban adaptation to climate change: experiences in Kenya and Nicaragua, Caroline Moser and Alfredo Stein
4. Participatory statistics, local decision-making, and national policy design: Ubudehe community planning in Rwanda, Ashish Shah
5. Generating numbers with local governments for decentralized health sector policy and planning in the Philippines, Rose Marie R. Nierras
6. From fragility to resilience: the role of participatory community mapping, knowledge management, and strategic planning in Sudan, Margunn Indreboe Alshaikh
PART II: Who counts reality? Participatory statistics in monitoring and evaluation
7. Accountability downwards, count-ability upwards: quantifying empowerment outcomes from people’s own analysis in Bangladesh, Dee Jupp with Sohel Ibn Ali
8. Community groups monitoring their impact with participatory statistics in India: reflections from an international NGO collective, Bernward Causemann, Eberhard Gohl, C. Rajathi, A. Susairaj, Ganesh Tantry and Srividhya Tantry
9. Scoring perceptions of services in the Maldives: instant feedback and the power of increased local engagement, Nils Riemenschneider, Valentina Barca, and Jeremy Holland
10. Are we targeting the poor? Lessons with participatory statistics in Malawi, Carlos Barahona
PART III: Statistics for participatory impact assessment
11. Participatory impact assessment in drought policy contexts: lessons from southern Ethiopia, Dawit Abebe and Andy Catley
12. Participatory impact assessment: the ‘Starter Pack Scheme’ and sustainable agriculture in Malawi, Elizabeth Cromwell, Patrick Kambewa, Richard Mwanza, and Rowland Chirwa with KWERA Development Centre
13. Participatory impact assessments of farmer productivity programmes in Africa, Susanne Neubert
Afterword, Robert Chambers
Practical and accessible resources
Index

Real Time Monitoring for the Most Vulnerable

Edited by Greeley, M., Lucas, H. and Chai, J. IDS Bulletin 44.2, published by IDS.

Purchase a print copy here.

View abstracts online and subscribe to the IDS Bulletin.

“Growth in the use of real time digital information for monitoring has been rapid in developing countries across all the social sectors, and in the health sector has been remarkable. Commonly these Real Time Monitoring (RTM) initiatives involve partnerships between the state, civil society, donors and the private sector. There are differences between partners in understanding of objectives, and divergence occurs due to adoption of specific technology-driven approaches and because profit-making is sometimes part of the equation.

With the swarming, especially of pilot mHealth initiatives, in many countries there is risk of chaotic disconnects, of confrontation between rights and profits, and of overall failure to encourage appropriate alliances to build sustainable and effective national RTM systems. What is needed is a country-led process for strengthening the quality and equity sensitivity of real-time monitoring initiatives. We propose the development of an effective learning and action agenda centred on the adoption of common standards.

IDS, commissioned and guided by UNICEF Division of Policy and Strategy, has carried out a multi-country assessment of initiatives that collect high frequency and/or time-sensitive data on risk, vulnerability and access to services among vulnerable children and populations and on the stability and security of livelihoods affected by shocks. The study, entitled Real Time Monitoring for the Most Vulnerable (RTMMV), began with a desk review of existing RTM initiatives and was followed up with seven country studies (Bangladesh, Brazil, Romania, Senegal, Uganda, Vietnam and Yemen) that further explored and assessed promising initiatives through field-based review and interactive stakeholder workshops. This IDS Bulletin brings together key findings from this research.”

See the full list of papers on this topic in the IDS Bulletin: http://www.ids.ac.uk/publication/real-time-monitoring-for-the-most-vulnerable

Enhancing Evaluation Use: Insights from Internal Evaluation Units

Marlène Läubli Loud, John Mayne

John Mayne’s summary (especially for MandE NEWS!)

“The idea for the book was that much written about evaluation in organizations is written by outsiders such as academics and consultants. But in practice, there are those working ‘inside’ an organization who play a key role in helping shape, develop, manage and ultimately make use of the evaluation. The contributions in this book are written by such ‘insiders’. They discuss the different strategies used over a period of time to make evaluation a part of the management of the organization, successes and failures, and the lessons learned. It highlights the commissioners and managers of evaluations, those who seek evaluations that can be used to improve the strategies and operations of the organization. The aim of the book is to help organizations become more focused on using evaluation to improve policies, strategies, programming and delivery of public and communal services.

The chapters cover a wide range of organizations, from government departments in Scotland, New Zealand, Switzerland and Canada, to international organizations such as the World Health Organization (WHO) and the International Labour Organization (ILO), to supra-national organizations such as the European Commission.

The book discusses such issues as:

  • The different ways evaluation is set up—institutionalized—in government sectors / organizations, and with what results;
  • why it is so hard to make evaluation a regular aspect of good management;
  • building organizational cultures that support effective evaluation;
  • strategies that are being used to ensure better value for money and enhance utilization of evaluation findings in organizations; and
  • how organizations balance the need for timely, relevant evaluation information with the need for scientific integrity and quality.

The insider perspective and the wide scope of organizations covered is unique in discussion about evaluation in organizations.”

“Hey Jude” Theory of Change…

Complete with “If…and…then” logic and even feedback loops (indicating an iterative approach to problem-solving). But where are the means of verification? ;-)

See more on the history of lyric flow charts here

And the final word is from http://xkcd: Don’t look down, you may never come back ;-))