Who’s Afraid of Administrative Data? Why administrative data can be faster, cheaper and sometimes better

Reprinted in full from the World Bank blog “Development Impact”.
Written by Laura Rawlings, 26 June 2013.

“In talking about the importance of generating evidence for policy making, we sometimes neglect to talk about the cost of generating that evidence — not to mention the years it can take. Impact evaluations are critical, but most are expensive, time consuming and episodic. Policymakers increasingly rely on evidence to make sound decisions, but they want answers within a year or at most two — and their budgets for evaluation are often limited. As the Bank moves forcefully into impact evaluations, the question is how to make them not only effective, but also more accessible.

Administrative data is one solution, and there are a number of benefits to using it. By relying on regularly collected microdata, researchers can work with policymakers to run trials, generating evidence and answering questions quickly. Using administrative data can save hundreds of thousands of dollars over the cost of running the surveys needed to collect primary data – the single biggest budget item in most impact evaluations.

The benefits go on: The quality, as well as frequency, of administrative data collection is continuing to improve. Countries have databases tracking not only inputs and costs, but outputs and even outcomes. Quality data are now available on everything from health indicators like vaccination rates to student attendance and test scores—and information can often be linked across databases with unique IDs, which gives us a treasure chest of information. Indeed, “big data” is a buzzword these days, and as we move forward into evidence building, it’s important to realize that “big data,” when used properly, can also mean “better data”—more frequent, timely, and less costly.
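As a rough illustration of the linking step described above, here is a minimal sketch in Python using pandas. The file names and column names are hypothetical; the only assumption is that two administrative databases share a unique personal identifier.

```python
import pandas as pd

# Two hypothetical administrative datasets sharing a unique ID.
attendance = pd.read_csv("attendance_register.csv")  # person_id, days_attended
test_scores = pd.read_csv("national_test.csv")       # person_id, test_score

# Link records across databases on the shared unique ID. A left join
# keeps every attendance record even where no test score was found.
linked = attendance.merge(test_scores, on="person_id", how="left")

# Check how well the linkage worked before analysing the result.
match_rate = linked["test_score"].notna().mean()
print(f"Matched {match_rate:.1%} of attendance records to a test score")
```

In practice the match rate is the first thing to report: low matching often signals the ID and data-quality problems discussed later in the post.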

Administrative data is particularly beneficial in helping test program design alternatives. Alternative options can be tested and assessed to see what route is most effective—and cost-effective.

Of course there are drawbacks as well. Administrative data can only answer questions to which the data are suited, and this rarely includes in-depth analysis of areas such as behavioral changes or consumption patterns. A recent impact evaluation of the long-term effects of a conditional cash transfer program in Colombia, for example, provided rich information about graduation rates and achievement test scores—but little in the way of information about household spending or the use of health services. And the information provided usually relates to individual beneficiaries of a specific program, rather than to households or to comparisons between beneficiaries and non-beneficiaries.

Administrative data are also often of questionable quality: institutional capacity varies across the agencies that gather and manage the data, and protocols for ensuring data quality are often not in place. Another drawback is accessibility: administrative data may not be publicly available or organized in a way that is easily analyzed.

Clearly, researchers need to evaluate the usefulness of administrative data on a case-by-case basis. Some researchers at the World Bank who have weighed the pros and cons have embraced it as an important tool, as we saw in the impact evaluation of the Colombia program, which relied exclusively on administrative data. This included census data, baseline data from a previous impact evaluation, and the program database itself, as well as information (registration numbers and results) from a national standardized test. Linking all these data gave researchers answers in just six months, at about one-fifth of the cost of an impact evaluation requiring traditional primary data collection. An impact evaluation looking at the results of Plan Nacer, a results-based financing program for women and children in Argentina, has done largely the same thing.

There are numerous examples outside the World Bank as well. David Halpern, director of the UK’s Behavioural Insights Team—commonly called “The Nudge Unit” for their work in encouraging changes in behaviors—routinely relies on administrative data. Together with his team, Halpern, who was at the Bank in early May to talk about their work, has discovered ways to encourage people to pay their court fines (send a text message with the person’s name, but not the amount they owe) and to reduce paperwork fraud (put the signature box at the beginning, rather than the end, of the form). The research they are leading on changing behaviors relies on data that the government already has—producing results that are reliable, affordable and quick.

How can we move ahead? First, we need to learn to value administrative data – it may not get you a publication in a lofty journal, but it can play a powerful role in improving program performance. Second, we have to help our clients improve the quality and availability of administrative data. Third, we need a few more good examples of how impact evaluations can be done well with administrative data. Moving to a more deliberate use of administrative data will take effort and patience, but the potential benefits make it worth prioritizing.”

Rick Davies comment: Amen! Monitoring has been the poor cousin of evaluation for years, and even more so with the recent emphasis on impact evaluation. Yet without basic data that should be collected during project implementation, routinely by project staff, most evaluations will be stymied, delivering only a fraction of the findings they could deliver. In large, complex, decentralised development projects, evaluators need to know who participated in, or was reached by, what activities. This data can and should be routinely collected by project staff, at least for management purposes. So should short-term outcome data, like participant satisfaction and/or use of services provided. The fact that there may be no external control group is not necessarily a problem, if the intention is not to make overall generalisations about average or net effects, but instead to explore internal variation in access and use. That is where the more immediately useful lessons will be found, lessons which will aid improvement in project design and effectiveness.
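As one illustration of what exploring internal variation in monitoring data might look like, here is a minimal sketch in Python using pandas. The file name and column names (participant_id, district, sex, sessions_attended, used_service) are hypothetical stand-ins for whatever a project’s participant register actually records.

```python
import pandas as pd

# Hypothetical monitoring data: one row per registered participant,
# with activity reach and a short-term outcome recorded by project staff.
monitoring = pd.read_csv("participant_register.csv")

# Explore internal variation: who is being reached, and who goes on to
# use the service, broken down by district and sex.
reach = monitoring.groupby(["district", "sex"]).agg(
    participants=("participant_id", "count"),
    mean_sessions=("sessions_attended", "mean"),
    service_use_rate=("used_service", "mean"),
)
print(reach.sort_values("service_use_rate"))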

There are two developments which magnify the long-standing argument for careful collection and use of monitoring/admin data. One is the move towards greater aid transparency, which should be inclusive of this kind of data, making it examinable and usable by a much wider range of surrounding/public stakeholders than traditionally conceived of in project designs. The other is developments in data mining methods that enable pattern seeking and rule finding in such data sets, which can extend our horizons beyond what we hope may be there, as traditionally explored by hypothesis-testing approaches (valuable as they can be).
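To make the rule-finding idea concrete, here is a minimal sketch using a decision tree classifier from scikit-learn, one common data mining method for inducing readable if-then rules from a data set. The data set and column names are hypothetical, carried over from the sketch above.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

monitoring = pd.read_csv("participant_register.csv")

# One-hot encode the categorical columns; numeric columns pass through.
features = pd.get_dummies(monitoring[["district", "sex", "sessions_attended"]])
outcome = monitoring["used_service"]  # assumed 0/1: used the service or not

# A shallow tree keeps the induced rules few enough to read and discuss.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features, outcome)

# Print the tree as nested if-then rules over the monitoring variables.
print(export_text(tree, feature_names=list(features.columns)))
```

The printed tree reads as a set of if-then rules (e.g. “if sessions_attended > 8 and the district is X, most participants used the service”), which can then be checked against, and refined by, staff knowledge of the project.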

 

One thought on “Who’s Afraid of Administrative Data? Why administrative data can be faster, cheaper and sometimes better”

  1. A refreshing read, for the reasons Rick gave, given the hegemony of evaluation in the aid industry. At last, a view that explains why and how monitoring has its own purpose, rather than taking on a role subservient to evaluation. Done well, monitoring, through generating different types of admin data, can and should give primacy to helping those who manage change, not simply to measuring it and helping donors account for it. Such a purpose is served not so much through measuring and reporting (on indicators and targets) as through learning about the motivations, preferences and behaviours of those whom aid is supposed to benefit. It is the rigour and empathy of how these processes are associated with monitoring that define very real and practical opportunities for those who manage; that is, as opposed to procrastinating about how and in what ways surveys can isolate variables and people in the quest for rigour in ways defined by statisticians. Great post.

Comments?
