Why have evaluators been slow to adopt big data analytics?

This is a question posed by Michael Bamberger in his blog posting on the MERL Tech website, titled Building bridges between evaluators and big data analysts. There he puts forward eight reasons (four main ones and four subsidiary points), none of which I disagree with. But I have my own perspective on the same question, and I posted the following points as a comment underneath his blog posting.

My take on “Why have evaluators been slow to adopt big data analytics?”

1. “Big data? I am having enough trouble finding any useful data! How to analyse big data is ‘a problem we would like to have’.” This, I suspect, is what many evaluators are thinking.

2. “Data mining is BAD” – because data mining is seen by evaluators as something ad hoc and non-transparent, whereas the best data mining practice is systematic and transparent.

3. “Correlation does not mean causation” – many evaluators have not updated this formulation to the more useful “Association is a necessary but insufficient basis for a strong causal claim”.

4. Evaluators focus on explanatory models and give little attention to the uses of predictive models, yet both are useful in the real world, separately and in combination. Some predictive models can become explanatory models through follow-up within-case investigations.

5. Lack of appreciation of the limits of manual hypothesis formulation and testing (useful as it can be) as a means of accumulating knowledge. In a project with four outputs and four outcomes there can be 16 different individual causal links between outputs and outcomes, but 2 to the power of 16 possible combinations of those causal links. That is a lot of theories to choose from (65,536 – see the arithmetic sketch after this list). In this context, search algorithms can be very useful.

6. Lack of knowledge of, and confidence in, the use of machine learning software. There is still work to be done to make this software more user-friendly. RapidMiner, BigML, and EvalC3 are heading in the right direction.

7. Most evaluators probably don’t know that the software mentioned above can be used on small data sets – it does not only work with large ones. Yesterday I was using EvalC3 with a data set describing only 25 cases (see the decision-tree sketch after this list).

8. The difficulty of understanding some machine learning findings. Decision tree models (one means of machine learning) are eminently readable, but few can explain the internal logic of specific prediction models generated by artificial neural networks (another means of machine learning, often used for the classification of images). This lack of explainability presents a major problem for public accountability. Public accountability for the behavior and use of algorithms is shaping up to be a BIG issue, as highlighted in this week’s Economist Leader article on advances in facial recognition software: What machines can tell from your face.
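
To make the arithmetic in point 5 concrete, here is a minimal Python sketch. The output and outcome labels are invented for illustration; the only point is the size of the search space.

```python
# Point 5: with 4 outputs and 4 outcomes there are 4 x 4 = 16 possible
# individual causal links, and any candidate theory of change is some
# subset of those links -- 2**16 subsets in all.
# The labels below are purely illustrative.
from itertools import product

outputs = ["Output 1", "Output 2", "Output 3", "Output 4"]
outcomes = ["Outcome 1", "Outcome 2", "Outcome 3", "Outcome 4"]

# Every possible individual output-to-outcome link
links = list(product(outputs, outcomes))
print(len(links))       # 16 individual causal links

# Each candidate theory either includes or excludes each link
print(2 ** len(links))  # 65536 possible combinations of links
```

Manually formulating and testing even a small fraction of 65,536 candidate combinations is impractical, which is where search algorithms earn their keep.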
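
To illustrate points 6 to 8 together, here is a minimal sketch of a small decision-tree model built from a small data set, whose internal logic remains readable. scikit-learn is used here only as a freely available stand-in for tools such as EvalC3, RapidMiner or BigML, and the 25-case data set and its attribute names are invented.

```python
# Points 6-8: a small, readable prediction model built from only 25 cases.
# The data are synthetic; scikit-learn stands in for EvalC3 / RapidMiner / BigML.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# 25 cases described by three binary project attributes (invented names)
attributes = ["training_provided", "local_partner", "follow_up_visits"]
X = rng.integers(0, 2, size=(25, len(attributes)))

# Invented outcome: present when training is combined with follow-up visits
y = ((X[:, 0] == 1) & (X[:, 2] == 1)).astype(int)

# A shallow decision tree keeps the model small enough to read and explain
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Unlike a neural network, the fitted model can be printed as plain if-then rules
print(export_text(model, feature_names=attributes))
```

The printed rules can then point to the cases worth following up with within-case investigation, in the spirit of point 4.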

Update 2017-09-19: See Michael Bamberger’s response to my comments above in the comments section below. It is copied from his original response posted here: http://merltech.org/building-bridges-between-evaluators-and-big-data-analysts/

One thought on “Why have evaluators been slow to adopt big data analytics?”

  1. (copied from http://merltech.org/building-bridges-between-evaluators-and-big-data-analysts/)

    Dear Rick,

    Thank you for your very interesting comments. These are all important issues so I have responded in some detail. Hopefully other readers might join in the discussion.

    Point 1. While I am sure that the concern of many evaluators is lack of data and this group would love to have access to big data, it has been my experience that many evaluators are not very familiar with big data and others have concerns about who generates and owns big data. The 2017 chapter by Hojlund et al., “The current use of big data in evaluation” (in the publication by Petersonn and Breuel that I cited), estimated that only about 50 per cent of evaluators were familiar with the basic principles of big data and only about 10 per cent claimed to have used big data in one of their evaluations.

    There are also situations where evaluators could potentially have access to more data but are not sure how they could analyze it (for technical, time or budget reasons). So there are obviously different scenarios for understanding evaluators’ attitudes toward, and knowledge about, big data. This latter group has several concerns. First, the fact that many, perhaps most, apps are developed for profit makes some evaluators worry about whether the use of these apps in development programs will lead to some form of exploitation of poor and vulnerable groups for profit. Another concern is whether big data will be used by funding agencies and governments to disempower the poor. The ability to collect data remotely means that it becomes possible to obtain data on and about the poor without the need to consult them, and in many cases without them even knowing the data are being used to make important decisions about their future. Finally, there is a concern that many of the algorithms used by these apps may be biased against the poor. Cathy O’Neil’s “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” documents this concern.

    Points 2 and 3. Data mining, correlation and causation. Many evaluators have been taught that data mining and the generation of spurious correlations are potentially bad. I agree with you that many evaluators assume data mining is ad hoc, a perception related to the apparent lack of a theoretical framework to guide the formulation of the analysis plan. The 2008 paper by Anderson, “The end of theory: the data deluge makes the scientific method obsolete”, and papers with similar titles contributed to this perception. There are also a number of publications on how predictive analytics are used for online marketing research that emphasize the use of data mining to identify factors correlated with consumer purchasing behavior (or in some cases increased click rates) and suggest that it is not necessary to understand why an association exists, as long as it helps the client increase sales. Siegel (2013), “Predictive analytics: The power to predict who will click, buy, lie or die”, is an example of this approach. You are of course correct that data mining can be a rigorous approach based on a well-articulated analytical framework, but it is often not perceived this way by many evaluators – most of whom are not very familiar with predictive analytics.

    Point 4. I fully agree that there are great benefits to be achieved from combining explanatory and predictive models. The challenge is the need for bridge-building between evaluators with their explanatory models and data analysts with their predictive models. At present much of the discussion still assumes that the two approaches are competing or incompatible. What is needed are opportunities for the two groups to work together to explore possibilities for integrating the two approaches in the same study.

    Point 5. This is a very interesting point about the limitations of manual hypothesis testing. One approach is to start with theory-based approaches (manual hypotheses) to explore how far you can get and whether you can produce useful findings. This can then guide the use of quantitative search algorithms. One advantage of this two-step approach is that the exploration of these manual hypotheses, often complemented by in-depth qualitative research, can also help identify what kinds of information are required to test some of these hypotheses and to what extent this information can be generated from the available big data sets. Work in fields such as gender equality and women’s empowerment, equity or social exclusion often finds that critical information is not available in conventional data sets. In these cases special gender-responsive data may need to be generated from special studies. This kind of in-depth approach could usefully complement the quantitative approaches by identifying whether there are important kinds of information that are not immediately available from the big data sources that are being used for a particular study.

    This example reflects the interest of many researchers in adopting a mixed methods approach that combines big data with conventional evaluation methods.

    Point 6. I don’t think that machine learning (ML) has yet been taken up by many evaluators, so it would be interesting if you have examples illustrating how ML can be applied in development evaluation. I agree that ML has tremendous potential, but what are the best entry points? This also goes back to your first point, as ML usually requires the kinds of large data sets to which many evaluators do not have access. Several of my colleagues working in countries such as India suggest that central government agencies, and also some line ministries, have huge survey data sets, most of which have not yet been fully exploited by evaluators. ML is one of the tools that could be very useful for working with these potentially very rich data sets. One of the big challenges is that there has been very little interest so far in finding ways to integrate different sectoral data sets, which could greatly enhance their value for evaluation purposes. This is an example of the “silo” effect, where many research professionals and development agencies only wish to work in their particular area.

    Point 7. The fact that ML and other data analytic tools can be used on relatively small data sets is important, as evaluators frequently work with relatively small data sets. However, as you point out, many people assume that ML can only be used with large data sets, so it would be very helpful to provide examples showing its applicability to smaller data sets.

    Point 8. You are right that the difficulty of understanding the internal logic of many big data analytic tools is still a barrier.

    Thank you again for your very stimulating comments.

    Regards

    Michael

Comments?
