ICAI Year 4 Workplan Consultation opens today

“ICAI is today opening a public consultation to help shape our Year 4 Workplan. We welcome any input you may have, and please feel free to pass this on to colleagues. The closing date for responses is Friday 6th September.

The consultation document is available here: http://icai.independent.gov.uk/?p=1637&preview=true and the text is copied below for your convenience.

ICAI Year 4 Workplan Consultation
The Independent Commission for Aid Impact is the independent body responsible for the scrutiny of UK aid expenditure (Official Development Assistance). We focus on maximising the effectiveness of the UK aid budget for intended beneficiaries and on delivering value for money for UK taxpayers.

We are holding a public consultation to inform the development of our workplan for our fourth year of operation (i.e. reports to be published between 12 May 2014 and 11 May 2015). We encourage anyone with an interest in the UK’s aid budget to take this opportunity to have their say and help to shape our plans.

As set out in our 2012-13 Annual Report, we will continue to be guided by our selection criteria of materiality, coverage, interest and risk, while also planning to diversify the types of work that we carry out to make the best use of our increasing body of evidence from our first three years of reports. We will consider different types of reports such as:

  • Programme reviews: this is likely to include scrutinising areas that we believe to be particularly important, given our findings and experience to date, and areas of particular interest to stakeholders;
  • Synthesis reports: this could include drawing together our increasing evidence base from the 35 reports we will have produced by the end of Year 3. This is likely to include some more thematic reviews, for example, synthesising and building upon our findings to date in a particular sector; and
  • More detailed follow-up: by Year 4, changes to impact for intended beneficiaries as a result of our first reports should be more visible. We will develop our approach to follow-up to understand these changes and to encourage further improvement and lesson-learning.

We would welcome your proposals for review topics. Please present these as concisely as possible, giving a summary of what you think ICAI should be reviewing and your reasons for suggesting the topic. Submissions should be no longer than 2,000 words.
It may help to look at our 2012-13 Annual Report (including our Year 3 Workplan in Chapter 5) and our report on ICAI’s Approach to Effectiveness and Value for Money when preparing your submission.

If you would like to submit views to ICAI, please send them to enquires@icai.independent.gov.uk.
The deadline for submissions is Friday 6th September 2013.”

Sam Harrison
Communications Manager
Independent Commission for Aid Impact
Dover House | 66 Whitehall | London SW1A 2AW
020 7270 6742 |07500 224642
www.independent.gov.uk/icai
Follow us on Twitter: @icai_uk

Who’s Afraid of Administrative Data? Why administrative data can be faster, cheaper and sometimes better

Reprinted in full from the World Bank blog “Development Impact”.
Written by Laura Rawlings, 26 June 2013.

“In talking about the importance of generating evidence for policy making, we sometimes neglect to talk about the cost of generating that evidence — not to mention the years it can take. Impact evaluations are critical, but most are expensive, time consuming and episodic. Policymakers increasingly rely on evidence to make sound decisions, but they want answers within a year or at most two—and their budgets for evaluation are often limited. As the Bank moves forcefully into impact evaluations, the question is how to make them not only effective – but more accessible.

Administrative data is one solution and there are a number of benefits to using it. By relying on regularly collected microdata, researchers can work with policymakers to run trials, generating evidence and answering questions quickly. Using administrative data can save hundreds of thousands of dollars over the cost of running the surveys needed to collect primary data – the single biggest budget item in most impact evaluations.

The benefits go on: The quality, as well as frequency, of administrative data collection is continuing to improve. Countries have databases tracking not only inputs and costs, but outputs and even outcomes. Quality data are now available on everything from health indicators like vaccination rates to student attendance and test scores—and information can often be linked across databases with unique IDs, which gives us a treasure chest of information. Indeed, “big data” is a buzzword these days, and as we move forward into evidence building, it’s important to realize that “big data,” when used properly, can also mean “better data”—more frequent, timely, and less costly.

Administrative data is particularly beneficial in helping test program design alternatives. Alternative options can be tested and assessed to see what route is most effective—and cost-effective.

Of course there are drawbacks as well. Administrative data can only answer questions to which the data are suited, and this rarely includes in-depth analysis of areas such as behavioral changes or consumption patterns. A recent impact evaluation of the long-term effects of a conditional cash transfer program in Colombia, for example, provided rich information about graduation rates and achievement test scores—but little in the way of information about household spending or the usage of health services. And the information provided is usually relevant to individual beneficiaries of a specific program—rather than on the household level or between beneficiaries and non-beneficiaries.

Administrative data are also often of questionable quality: institutional capacity varies across the agencies that gather and manage the data, and protocols for ensuring data quality are often not in place. Another drawback is accessibility: administrative data may not be publicly available or organized in a way that is easily analyzed.

Clearly, researchers need to evaluate the usefulness of administrative data on a case-by-case basis. Some researchers at the World Bank who have weighed the pros and cons have embraced it as an important tool, as we saw in the impact evaluation of the Colombia program, which relied exclusively on administrative data. This included census data, baseline data from a previous impact evaluation, and the program database itself, as well as information – registration numbers and results – from a national standardized test. Linking all these data gave researchers answers in just six months at about one-fifth of the cost of an impact evaluation that would require traditional primary data collection. An impact evaluation looking at the results of Plan Nacer, a results-based financing program for women and children in Argentina, has done largely the same thing.

There are numerous examples outside the World Bank as well. David Halpern, director of the UK’s Behavioural Insights Team – commonly called “The Nudge Unit” for their work in encouraging changes in behaviors – routinely relies on administrative data. Together with his team, Halpern, who was at the Bank in early May to talk about their work, has discovered ways to encourage people to pay their court fines (send a text message with the person’s name, but not the amount they owe) and to reduce paperwork fraud (put the signature box at the beginning, rather than the end, of the form). The research they are leading on changing behaviors relies on data that the government already has—producing results that are reliable, affordable and quick.

How can we move ahead? First, we need to learn to value administrative data – it may not get you a publication in a lofty journal, but it can play a powerful role in improving program performance. Second, we have to help our clients improve the quality and availability of administrative data. Third, we need a few more good examples of how good impact evaluations can be done with administrative data. Moving to a more deliberate use of administrative data will take effort and patience, but the potential benefits make it worth prioritizing.”

Rick Davies comment: Amen! Monitoring has been the poor cousin of evaluation for years, and even more so with the recent emphasis on impact evaluation. Yet without basic data that should be collected during project implementation, routinely by project staff, most evaluations will be stymied, delivering only a fraction of the findings they could deliver. In large, complex, decentralised development projects evaluators need to know who participated in, or was reached by, what activities. This data can and should be routinely collected by project staff, at least for management purposes. So should short-term outcome data, like participant satisfaction and/or use of services provided. The fact that there may be no external control group is not necessarily a problem, if the intention is not to make overall generalisations about average or net effects, but is instead to explore internal variation in access and use. That is where the more immediately useful lessons will be, which will aid improvement in project design and effectiveness.
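To make that point concrete, here is a minimal sketch of the kind of internal-variation analysis that routine monitoring data makes possible, assuming two hypothetical project files (an activity register and a follow-up service-use log) that share a participant ID. The file names and column names are illustrative, not taken from any real project.

import pandas as pd

# Hypothetical monitoring files kept by project staff; all names are illustrative.
participation = pd.read_csv("activity_register.csv")  # participant_id, district, activity, sessions_attended
follow_up = pd.read_csv("service_use_log.csv")        # participant_id, used_service, satisfied

records = participation.merge(follow_up, on="participant_id", how="left")

# Explore internal variation in access and use, with no external control group:
# which districts and activities combine high attendance with low service use?
summary = (records
           .groupby(["district", "activity"])
           .agg(mean_sessions=("sessions_attended", "mean"),
                service_use_rate=("used_service", "mean"),
                satisfaction_rate=("satisfied", "mean"))
           .sort_values("service_use_rate"))
print(summary.head(10))

Nothing here requires an experimental design; the contrasts come entirely from variation that the project itself generates.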

There are two developments which magnify the long-standing argument for careful collection and use of monitoring/admin data. One is the move towards greater aid transparency, which should be inclusive of this kind of data, making it examinable and usable by a much wider range of surrounding/public stakeholders than traditionally conceived of in project designs. The other is developments in data mining methods that enable pattern seeking and rule finding in such data sets, which can extend our horizons beyond what we hope may be there, as traditionally explored by hypothesis testing approaches (valuable as they can be).

 

A guide for planning and strategy development in the face of complexity

By Richard Hummelbrunner and Harry Jones
ODI Background Note, March 2013. Available as pdf

“Many argue that governments, non-governmental organisations and international agencies should spend less time planning in advance and more time adapting programmes to changing scenarios and learning by doing. In the complex context of development, how can policy makers, managers and practitioners best plan in the face of complexity? Does complexity make planning an irrelevant exercise?

This Background Note is a guide, explaining how planning and strategy development can be carried out despite complexity. While it is true that complex situations require a greater focus on learning and adaptation, this does not render planning irrelevant. In fact, there are ways in which the processes and products of planning can respect the realities of the situation and set up interventions (policies, programmes and projects) to give them the best chance of success.

The guide builds on academic, policy and programmatic literature related to systems and complexity, and draws on the authors’ experience of advising development agencies and governments in both developed and developing countries.

The note covers three points:

  1. How to recognise a complex situation and what challenges it will pose
  2. Principles for planning in the face of complexity
  3. Examples of planning approaches that address complexity”

Rick Davies comment: Over two hundred years ago William Blake exclaimed in verse “Pray God us keep From Single vision & Newton’s sleep”. If he were to read the current literature on complexity, planning and evaluation he might be tempted to repeat his advice, again and again, until it seeped through. Why do I think this? I searched this ODI paper for three magic words: diversity, difference and variation. Their existence in real life is the raw fuel for evolutionary processes, one that has enabled living organisms to survive amidst radically changeable environments over aeons of time on earth. And lo and behold, most of these organisms don’t seem to have much in the way of planning units or strategy formulation processes. Evolution is a remarkably effective but non-teleological (i.e. not goal-driven) process of innovation and adaptation.

While I did not find the words diversity and variation in the ODI text, I was pleased to find one brief reference to the use of evolutionary processes, as follows:

“Another option is an ‘evolutionary’ approach, whereby a plan is not seen as a single ‘big bet’ but rather as a portfolio of experiments, by setting an over-riding goal and then pursuing a diverse set of plans simultaneously, each of which has the potential to evolve. We could also adopt a ‘breadth first’ approach with ‘trial and error’ as the central aim of the initial stage of implementation, to encourage parallel testing of a variety of small-scale interventions”

One means of ensuring sufficient diversity in experiments is to decentralise resources and the control over those resources. This can happen in projects which have explicit empowerment objectives, and also in other kinds of projects that are large in scale and working in a diversity of environments, where central controls can be loosened, either by accident or intention. In my experience there are already plenty of natural experiments underway; the problem is the failure to capitalise on them. One reason is the continued fixation with a single vision, that is, an over-arching Theory of Change, embedded in a LogFrame and/or other planning formats, which end up dominating evaluators’ attention and use of time. This includes my own evaluation practice, mea culpa, notably with four projects in Indonesia between 2005 and 2010.

The alternative is to develop testable models that incorporate multiple causal pathways. In the past I have emphasised the potential of network models of change, where changes can be effected via multiple influence pathways within complex networks of relationships between different actors. The challenge with this approach is to develop adequate descriptions of those networks and the pathways within them. More recently I have been arguing for the use of a simpler representational device, known as Decision Tree models, which can be constructed, and triangulated, using a variety of means (QCA, data mining algorithms, participatory and ethnographic techniques). The characteristics of a portfolio of diverse activities can be summarised in the form of Decision Tree models, which can then be tested for their degree of fit with observed differences in the outcomes of those activities. The structure of Decision Tree models enables them to represent multiple configurations of different causal conditions, identified before and/or after their implementation. More information on their design and use is provided in this paper “Where there is no single Theory of Change: The uses of Decision Tree models”. While I have shared this paper with various writers on evaluation and complexity, none seem to have seen its relevance to complexity issues, possibly because in many writings on complexity the whole issue of diversity gets much less attention than the issue of unpredictability. I say this with some hesitation, since Ben Ramalingam’s forthcoming book on complexity does have a whole section on the perils of “Best-practicitis”, i.e. single-vision views of development.
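To show what such a model can look like in practice, here is a minimal sketch using scikit-learn; the monitoring data, the condition names and the outcome are all invented for illustration, and a real application would use a project’s own records and a proper validation step.

import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented portfolio of activities: each row is one site, described by a few
# design/context conditions (1/0) and an observed outcome (1/0).
data = pd.DataFrame({
    "female_led":       [1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "remote_location":  [0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1],
    "training_given":   [1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0],
    "outcome_achieved": [1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0],
})

X = data[["female_led", "remote_location", "training_given"]]
y = data["outcome_achieved"]

# A shallow tree keeps the configurations of conditions readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

In this toy dataset the printed rules show more than one combination of conditions leading to a positive outcome, which is exactly the multiple-configuration structure described above, rather than a single overarching pathway.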

Incidentally, for an interesting but demanding read on the many relationships between diversity and complexity, I recommend Scott Page’s “Diversity and Complexity” (2011).

 

Human Rights and Impact Assessment

Special Issue of Impact Assessment and Project Appraisal, Volume 31, Issue 2, 2013

  • Boele, Richard, and Christine Crispin. 2013. “What Direction for Human Rights Impact Assessments?” Impact Assessment and Project Appraisal 31 (2): 128–134. doi:10.1080/14615517.2013.771005.
  • Collins, Nina, and Alan Woodley. 2013. “Social Water Assessment Protocol: a Step Towards Connecting Mining, Water and Human Rights.” Impact Assessment and Project Appraisal 31 (2): 158–167. doi:10.1080/14615517.2013.774717.
  • Hanna, Philippe, and Frank Vanclay. 2013. “Human Rights, Indigenous Peoples and the Concept of Free, Prior and Informed Consent.” Impact Assessment and Project Appraisal 31 (2): 146–157. doi:10.1080/14615517.2013.780373.
  • ———. 2013b. “Human Rights and Impact Assessment.” Impact Assessment and Project Appraisal 31 (2): 85–85. doi:10.1080/14615517.2013.791507.
  • Sauer, Arn Thorben, and Aranka Podhora. 2013. “Sexual Orientation and Gender Identity in Human Rights Impact Assessment.” Impact Assessment and Project Appraisal 31 (2): 135–145. doi:10.1080/14615517.2013.791416.
  • Watson, Gabrielle, Irit Tamir, and Brianna Kemp. 2013. “Human Rights Impact Assessment in Practice: Oxfam’s Application of a Community-based Approach.” Impact Assessment and Project Appraisal 31 (2): 118–127. doi:10.1080/14615517.2013.771007.

See also Gabrielle Watson’s related blog posting: Trust but verify: Companies assessing their own impacts on human rights? Oxfam’s experience supporting communities to conduct human rights impact assessments

And docs mentioned in her post:

  • the United Nations Guiding Principles on Business and Human Rights in 2011
  • Oxfam’s community-based Human Rights Impact Assessment (HRIA) tool, Getting it Right. The tool was first tested in the Philippines, Tibet, the Democratic Republic of Congo, Argentina and Peru, and then improved. In 2010 and 2011, Oxfam supported local partner organizations to conduct community-based HRIAs with tobacco farmworkers in North Carolina and with mining-affected communities in Bolivia. In our experience, community-based HRIAs have: (1) built human rights awareness among community members, (2) helped initiate constructive engagement when companies have previously ignored community concerns, and (3) led to concrete actions by companies to address concerns.

Evaluation of Humanitarian Action: A Pilot Guide

Now available at the ALNAP website

“The Evaluating Humanitarian Action Guide supports evaluation specialists and non-specialists in every stage of an evaluation, from initial decision to final dissemination.

Here are six reasons we think it’s time for a comprehensive EHA guide:
1. Official donor assistance for humanitarian action has increased nearly six times in real terms from 1990 to 2011.
2. More interest and investment in evaluations as concerns are raised about the effectiveness of development aid and humanitarian relief.
3. A critical mass of collective knowledge now exists to build on – ALNAP’s evaluation database alone contains over 500 evaluation reports covering the last decade.
4. Commissioning of evaluations has shifted from agency headquarters to field-based staff as agencies decentralise – yet field-based managers often have little experience in planning and managing evaluations, especially EHA.
5. Little evidence that evaluation results lead to change of, or reflection on, policy and practice – better designed evaluations could provide more compelling evidence for policy change and promote utilisation.
6. The demand for guidance on EHA is growing – a Humanitarian Practice Network member survey in 2009 found that the number one request for guidance material was for EHA.

This ALNAP guide provides practical and comprehensive guidance and good practice examples to those planning, designing, carrying out, and using evaluations of humanitarian action.
The focus is on utilisation: to encourage you to consider how to ensure from the outset that an evaluation will be used.
This guide attempts to support high-quality evaluations that contribute to improved performance by providing the best evidence possible of what is working well, what is not, and why. The ultimate goal is to better meet the needs of people affected by humanitarian crises, who will be referred to throughout this guide as the affected population.”

Data preparation and analysis in rapid needs assessments

“What is the data analyst to do when he is handed a dataset over whose design and formatting he had little control or none? For the Assessment Capacities Project (ACAPS) in Geneva, Aldo Benini wrote two technical briefs – “How to approach a dataset – Part 1: Data preparation” and “Part 2: Analysis”.

The target audience are rapid needs assessment teams, who often work under high time pressure. Yet analysts in project monitoring units, evaluators and trainers too may find the tools and process logic useful. Two macro-enabled Excel workbooks (for part 1 and part 2) show the train of preparation steps as well as a variety of frequently needed analysis forms.

These notes speak to “one case – one record” data situations, which are typical of most surveys and assessments. For the special challenges that “one case – many records” datasets offer, see an example further down on the same page.”
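To illustrate the distinction the briefs draw, here is a minimal sketch (with invented household data and column names) of collapsing a “one case – many records” dataset, where each household has several assessment visits, into the “one case – one record” shape that most survey-style analysis assumes.

import pandas as pd

# "One case - many records": several assessment visits per household (invented data).
visits = pd.DataFrame({
    "household_id": [101, 101, 102, 103, 103, 103],
    "visit_date": ["2013-04-01", "2013-04-15", "2013-04-02",
                   "2013-04-03", "2013-04-10", "2013-04-20"],
    "food_stock_days": [3, 5, 10, 1, 2, 4],
})

# Collapse to "one case - one record": one row per household, keeping the
# latest visit date and the average reported food stock.
one_record = (visits
              .groupby("household_id")
              .agg(last_visit=("visit_date", "max"),
                   mean_food_stock_days=("food_stock_days", "mean"))
              .reset_index())
print(one_record)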

IFAD’s independent evaluation ratings database

(found via IFAD posting on Xceval)

[from the IFAD website] “The Independent Office of Evaluation of IFAD (IOE) is making publicly available all the ratings on the performance of IFAD-supported operations evaluated since 2002. As such, IOE joins the few development organizations that currently make such data available to the public at large. The broader aim of disclosing such evaluation data is to further strengthen organizational accountability and transparency (in line with IFAD’s Disclosure and Evaluation Policies), as well as to enable others interested (including researchers and academics) to conduct their own analysis based on IOE data.

All evaluation ratings may be seen in the Excel database. At the moment, the database contains ratings of 170 projects evaluated by IOE. These ratings also provide the foundation for preparing IOE’s flagship report, the Annual Report on Results and Impact of IFAD operations (ARRI).

As in the past, IOE will continue to update the database annually by including ratings from new independent evaluations conducted each year based on the methodology captured in the IFAD Evaluation Manual. It might be useful to underline that IOE uses a six-point rating scale (where 6 is the highest score and 1 the lowest) to assess the performance of IFAD-funded operations across a series of internationally recognised evaluation criteria (e.g., relevance, effectiveness, efficiency, rural poverty impact, sustainability, gender, and others).

Moreover, in 2006, IOE’s project evaluation ratings criteria were harmonized with those of IFAD’s operations, to ensure greater consistency between independent and self-evaluation data (Agreement between PMD and IOE on the Harmonization of Self-Evaluation and Independent Evaluation Systems of IFAD). The Harmonization agreement was further enhanced in 2011, following the Peer Review of IFAD’s Office of Evaluation and Evaluation Function. The aforementioned agreements also make it possible to determine any ‘disconnect’ in the reporting of project performance respectively by IOE and IFAD management.”
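For readers who want to take up the invitation to conduct their own analysis, a minimal sketch of a first pass over the downloaded Excel database might look like the following; the file name and column names are assumptions made for illustration, not IOE’s actual layout.

import pandas as pd

# Assumed file name and column layout; adjust to the actual IOE workbook.
ratings = pd.read_excel("ioe_project_ratings.xlsx")

criteria = ["relevance", "effectiveness", "efficiency",
            "rural_poverty_impact", "sustainability", "gender"]

# Average rating per criterion across all evaluated projects,
# on IOE's six-point scale (6 = highest, 1 = lowest).
print(ratings[criteria].mean().round(2).sort_values(ascending=False))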

Perception surveys in fragile and conflict affected states

GSDRC Help Desk Research Report, Siân Herbert, 25.03.2013. Available as pdf

Question: What recent work has been done on assessing the quality and limitations of using perception surveys in fragile and conflict affected states?
Contents (10 pages in all)
1. Overview
2. Strengths of perception surveys
3. Limitations of perception surveys
4. Methodological approaches to ensure quality in perception surveys
5. References

 

Evaluating simple interventions that turn out to be not so simple

Conditional Cash Transfer (CCT) programs have been cited in the past as examples of projects that are suitable for testing via randomised control trials. They are relatively simple interventions that can be delivered in a standardised manner. Or so it seemed.

Last year Lant Pritchett, Salimah Samji and Jeffrey Hammer wrote this interesting (if at times difficult to read) paper “It’s All About MeE: Using Structured Experiential Learning (‘e’) to Crawl the Design Space” (the abstract is reproduced below). In the course of that paper they argued that CCT programs are not as simple as they might seem. Looking at three real life examples they identified at least 10 different characteristics of CCTs that need to be specified correctly in order for them to work as expected. Some of these involve binary choices (whether to do x or y) and some involve tuning a numerical variable. This means there are at least 2^10 = 1,024 different possible designs. They also pointed out that while changes to some of these characteristics make only a small difference to the results achieved, others, including some binary choices, can make quite major differences. In other words, overall it may well be a rugged rather than a smooth design space. The question then arises: how well are RCTs suited to exploring such spaces?
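The combinatorics, and the idea of a rugged design space, can be made concrete with a small sketch. The outcome function below is purely illustrative and not taken from the paper: most of the binary choices nudge the outcome a little, but one interaction between two choices moves it a lot, so nearby designs can have very different results.

from itertools import product

# Ten binary design choices give 2**10 = 1,024 candidate CCT designs.
designs = list(product([0, 1], repeat=10))
print(len(designs))  # 1024

def outcome(design):
    # Illustrative only: eight of the switches each add a small effect, while
    # the interaction between the first two switches (say, who receives the
    # transfer and how compliance is verified) dominates, making the design
    # space "rugged" rather than smooth.
    small_effects = 0.02 * sum(design[2:])
    big_interaction = 0.30 if design[0] == design[1] else -0.20
    return 0.50 + small_effects + big_interaction

best = max(designs, key=outcome)
print(best, round(outcome(best), 2))

An RCT can compare only a handful of points in a space like this; searching it systematically is a different kind of problem, which is essentially the question raised above.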

Today the World Bank’s Development Impact blog posted an interesting confirmation of the point made in the Pritchett et al. paper, in a post titled: Defining Conditional Cash Transfer Programs: An Unconditional Mess. They point out, in effect, that the design space is even more complicated than Pritchett et al. describe. They conclude:

So, if you’re a donor or a policymaker, it is important not to frame your question to be about the relative effectiveness of “conditional” vs. “unconditional” cash transfer programs: the line between these concepts is too blurry. It turns out that your question needs to be much more precise than that. It is better to define the feasible range of options available to you first (politically, ethically, etc.), and then go after evidence of relative effectiveness of design options along the continuum from a pure UCT to a heavy-handed CCT. Alas, that evidence is the subject of another post…

So stay tuned for their next installment. Of course, you could quibble that even this conclusion is a bit optimistic, in that it talks about a continuum of design options, when in fact it is a multi-dimensional space with both smooth and rugged parts.

PS: Here is the abstract of the Pritchett et al. paper:

“There is an inherent tension between implementing organizations—which have specific objectives and narrow missions and mandates—and executive organizations—which provide resources to multiple implementing organizations. Ministries of finance/planning/budgeting allocate across ministries and projects/programmes within ministries, development organizations allocate across sectors (and countries), foundations or philanthropies allocate across programmes/grantees. Implementing organizations typically try to do the best they can with the funds they have and attract more resources, while executive organizations have to decide what and who to fund. Monitoring and Evaluation (M&E) has always been an element of the accountability of implementing organizations to their funders. There has been a recent trend towards much greater rigor in evaluations to isolate causal impacts of projects and programmes and more ‘evidence base’ approaches to accountability and budget allocations. Here we extend the basic idea of rigorous impact evaluation—the use of a valid counter-factual to make judgments about causality—to emphasize that the techniques of impact evaluation can be directly useful to implementing organizations (as opposed to impact evaluation being seen by implementing organizations as only an external threat to their funding). We introduce structured experiential learning (which we add to M&E to get MeE) which allows implementing agencies to actively and rigorously search across alternative project designs using the monitoring data that provides real time performance information with direct feedback into the decision loops of project design and implementation. Our argument is that within-project variations in design can serve as their own counter-factual and this dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies. The right combination of M, e, and E provides the right space for innovation and organizational capability building while at the same time providing accountability and an evidence base for funding agencies.” Paper available as pdf

I especially like this point about within-project variation (which I have argued for in the past): “Our argument is that within-project variations in design can serve as their own counter-factual and this dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies.”
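As a minimal sketch of what using within-project variation as its own counter-factual can look like with routine monitoring data, the snippet below compares design variants implemented in different sites of the same project; the file and column names are invented for illustration.

import pandas as pd

# Routine monitoring data with a design-variant label per site (invented names).
monitoring = pd.read_csv("project_monitoring.csv")  # site_id, design_variant, enrolment_rate

# Compare variants against each other rather than against an external control group.
by_variant = monitoring.groupby("design_variant")["enrolment_rate"].agg(["mean", "count"])
print(by_variant)

# A first-cut signal: the gap between the best and worst performing variant.
print(by_variant["mean"].max() - by_variant["mean"].min())

This is only descriptive, of course; attributing the gap to the design difference still requires care about how sites were assigned to the different variants.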

 

US Govt Executive Order — Making Open and Machine Readable the New Default for Government Information

(from The White House, Office of the Press Secretary, For Immediate Release, May 09, 2013)

Executive Order — Making Open and Machine Readable the New Default for Government Information

EXECUTIVE ORDER

– – – – – – –

MAKING OPEN AND MACHINE READABLE THE NEW DEFAULT
FOR GOVERNMENT INFORMATION

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

Section 1. General Principles. Openness in government strengthens our democracy, promotes the delivery of efficient and effective services to the public, and contributes to economic growth. As one vital benefit of open government, making information resources easy to find, accessible, and usable can fuel entrepreneurship, innovation, and scientific discovery that improves Americans’ lives and contributes significantly to job creation.

Decades ago, the U.S. Government made both weather data and the Global Positioning System freely available. Since that time, American entrepreneurs and innovators have utilized these resources to create navigation systems, weather newscasts and warning systems, location-based applications, precision farming tools, and much more, improving Americans’ lives in countless ways and leading to economic growth and job creation. In recent years, thousands of Government data resources across fields such as health and medicine, education, energy, public safety, global development, and finance have been posted in machine-readable form for free public use on Data.gov. Entrepreneurs and innovators have continued to develop a vast range of useful new products and businesses using these public information resources, creating good jobs in the process.

To promote continued job growth, Government efficiency, and the social good that can be gained from opening Government data to the public, the default state of new and modernized Government information resources shall be open and machine readable. Government information shall be managed as an asset throughout its life cycle to promote interoperability and openness, and, wherever possible and legally permissible, to ensure that data are released to the public in ways that make the data easy to find, accessible, and usable. In making this the new default state, executive departments and agencies (agencies) shall ensure that they safeguard individual privacy, confidentiality, and national security.

Sec. 2. Open Data Policy. (a) The Director of the Office of Management and Budget (OMB), in consultation with the Chief Information Officer (CIO), Chief Technology Officer (CTO), and Administrator of the Office of Information and Regulatory Affairs (OIRA), shall issue an Open Data Policy to advance the management of Government information as an asset, consistent with my memorandum of January 21, 2009 (Transparency and Open Government), OMB Memorandum M-10-06 (Open Government Directive), OMB and National Archives and Records Administration Memorandum M-12-18 (Managing Government Records Directive), the Office of Science and Technology Policy Memorandum of February 22, 2013 (Increasing Access to the Results of Federally Funded Scientific Research), and the CIO’s strategy entitled “Digital Government: Building a 21st Century Platform to Better Serve the American People.” The Open Data Policy shall be updated as needed.

(b) Agencies shall implement the requirements of the Open Data Policy and shall adhere to the deadlines for specific actions specified therein. When implementing the Open Data Policy, agencies shall incorporate a full analysis of privacy, confidentiality, and security risks into each stage of the information lifecycle to identify information that should not be released. These review processes should be overseen by the senior agency official for privacy. It is vital that agencies not release information if doing so would violate any law or policy, or jeopardize privacy, confidentiality, or national security.

Sec. 3. Implementation of the Open Data Policy. To facilitate effective Government-wide implementation of the Open Data Policy, I direct the following:

(a) Within 30 days of the issuance of the Open Data Policy, the CIO and CTO shall publish an open online repository of tools and best practices to assist agencies in integrating the Open Data Policy into their operations in furtherance of their missions. The CIO and CTO shall regularly update this online repository as needed to ensure it remains a resource to facilitate the adoption of open data practices.

(b) Within 90 days of the issuance of the Open Data Policy, the Administrator for Federal Procurement Policy, Controller of the Office of Federal Financial Management, CIO, and Administrator of OIRA shall work with the Chief Acquisition Officers Council, Chief Financial Officers Council, Chief Information Officers Council, and Federal Records Council to identify and initiate implementation of measures to support the integration of the Open Data Policy requirements into Federal acquisition and grant-making processes. Such efforts may include developing sample requirements language, grant and contract language, and workforce tools for agency acquisition, grant, and information management and technology professionals.

(c) Within 90 days of the date of this order, the Chief Performance Officer (CPO) shall work with the President’s Management Council to establish a Cross-Agency Priority (CAP) Goal to track implementation of the Open Data Policy. The CPO shall work with agencies to set incremental performance goals, ensuring they have metrics and milestones in place to monitor advancement toward the CAP Goal. Progress on these goals shall be analyzed and reviewed by agency leadership, pursuant to the GPRA Modernization Act of 2010 (Public Law 111-352).

(d) Within 180 days of the date of this order, agencies shall report progress on the implementation of the CAP Goal to the CPO. Thereafter, agencies shall report progress quarterly, and as appropriate.

Sec. 4. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:
(i) the authority granted by law to an executive department, agency, or the head thereof; or

(ii) the functions of the Director of OMB relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

(d) Nothing in this order shall compel or authorize the disclosure of privileged information, law enforcement information, national security information, personal information, or information the disclosure of which is prohibited by law.

(e) Independent agencies are requested to adhere to this order.

BARACK OBAMA

 
