Recently noted papers of interest on my Twitter feed:
- Go Forth and Replicate: On Creating Incentives for Repeat Studies. Scientists have few direct incentives to replicate other researchers’ work, including precious little funding to do replications. Can that change? 09.11.2017 / BY Michael Schulson
- “A survey of 1,500 scientists, conducted by the journal Nature last year, suggested that researchers often weren’t telling their colleagues — let alone publishing the results — when other researchers’ findings failed to replicate.” … “Each year, the [US] federal government spends more than $30 billion on basic scientific research. Universities and private foundations spend around $20 billion more, according to one estimate. Virtually none of that money is earmarked for research replication” … “In reality, major scientific communities have been beset these last several years over inadequate replication, with some studies heralded as groundbreaking exerting their influence in the scientific literature — sometimes for years, and with thousands of citations — before anyone bothers to reproduce the experiments and discover that they don’t hold water. In fields ranging from cancer biology to social psychology, there’s mounting evidence that replication does not happen nearly enough. The term ‘replication crisis’ is now well on its way to becoming a household phrase.”
- WHEN GOVERNMENT RULES BY SOFTWARE, CITIZENS ARE LEFT IN THE DARK. TOM SIMONITE, WIRED, BUSINESS, 08.17.17, 07:00 AM
- “Most governments the professors queried didn’t appear to have the expertise to properly consider or answer questions about the predictive algorithms they use” … “Researchers believe predictive algorithms are growing more prevalent – and more complex. ‘I think that probably makes things harder,’ says Goodman.” … “Danielle Citron, a law professor at the University of Maryland, says that pressure from state attorneys general, court cases, and even legislation will be necessary to change how local governments think about, and use, such algorithms. ‘Part of it has to come from law,’ she says. ‘Ethics and best practices never gets us over the line because the incentives just aren’t there.’”
- The evolution of machine learning. Posted Aug 8, 2017 by Catherine Dong (@catzdong) TechCrunch
- “Machine learning engineering happens in three stages — data processing, model building, and deployment and monitoring. In the middle we have the meat of the pipeline, the model, which is the machine learning algorithm that learns to predict given input data. The first stage involves cleaning and formatting vast amounts of data to be fed into the model. The last stage involves careful deployment and monitoring of the model. We found that most of the engineering time in AI is not actually spent on building machine learning models — it’s spent preparing and monitoring those models. Despite the focus on deep learning at the big tech company AI research labs, most applications of machine learning at these same companies do not rely on neural networks and instead use traditional machine learning models. The most common models include linear/logistic regression, random forests and boosted decision trees.”
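The three-stage split described in that quote maps onto very few lines of code. Here is a minimal sketch using scikit-learn and a synthetic dataset (both my assumptions; the article names no specific tools), with a “traditional” model of the kind the article says dominates in practice:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stage 1: data processing -- clean and format data for the model.
# A synthetic dataset stands in for the "vast amounts of data".
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 5))            # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 2: model building -- a traditional model (random forest),
# not a neural network, per the article's observation.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Stage 3: deployment and monitoring -- reduced here to tracking one
# held-out metric, the kind of signal watched in production.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Note where the bulk of real effort would sit: per the quote, it is the code around stages 1 and 3, not the two lines of stage 2.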
- The Most Crucial Design Job Of The Future. What is a data ethnographer, and why is it poised to become so important? 2017.7.24 BY CAROLINE SINDERS. Co-Design
- Why we need metadata (data about the data we are using). “I advocate we need data ethnography, a term I define as the study of the data that feeds technology, looking at it from a cultural perspective as well as a data science perspective” … “Data is a reflection of society, and it is not neutral; it is as complex as the people who make it.”
- The Mystery of Mixing Methods. Despite significant progress on mixed methods approaches, their application continues to be (partly) shrouded in mystery, and the concept itself can be subject to misuse. March 28, 2017 By Jos Vaessen. IEG
- “The lack of an explicit (and comprehensive) understanding of the principles underlying mixed methods inquiry has led to some confusion and even misuses of the concept in the international evaluation community.”
- Three types of misuse.
- Five valid reasons for using mixed methods: Triangulation, Complementarity, Development, Initiation, Expansion.
- To err is algorithm: Algorithmic fallibility and economic organisation. Wednesday, 10 May 2017. NESTA
- We should not stop using algorithms simply because they make errors. Without them, many popular and useful services would be unviable. However, we need to recognise that algorithms are fallible and that their failures have costs. This points at an important trade-off between more (algorithm-enabled) beneficial decisions and more (algorithm-caused) costly errors. Where lies the balance? Economics is the science of trade-offs, so why not think about this topic like economists? This is what I have done ahead of this blog, creating three simple economics vignettes that look at key aspects of algorithmic decision-making. These are the key questions:
Risk: When should we leave decisions to algorithms, and how accurate do those algorithms need to be?
Supervision: How do we combine human and machine intelligence to achieve desired outcomes?
Scale: What factors enable and constrain our ability to ramp up algorithmic decision-making?
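To make the “Risk” question concrete, here is a back-of-the-envelope sketch (my own illustrative framing, not NESTA’s actual vignette): if a correct automated decision yields some benefit and an error incurs some cost, automating is worthwhile only when accuracy p satisfies p * benefit > (1 - p) * cost, i.e. p > cost / (benefit + cost).

```python
# Back-of-the-envelope model of the "Risk" question (my own illustration,
# not NESTA's vignette): automate only if expected benefit exceeds
# expected error cost, i.e. p * benefit > (1 - p) * error_cost.

def break_even_accuracy(benefit: float, error_cost: float) -> float:
    """Minimum accuracy p at which p * benefit == (1 - p) * error_cost."""
    return error_cost / (benefit + error_cost)

# Low-stakes decision (e.g. a song recommendation): errors are cheap,
# so even a mediocre algorithm is worth deploying.
print(break_even_accuracy(benefit=1.0, error_cost=1.0))   # 0.5

# High-stakes decision (e.g. sentencing): errors are very costly,
# so the required accuracy bar is far higher.
print(break_even_accuracy(benefit=1.0, error_cost=99.0))  # 0.99
```

On this reading, supervision and scale enter as modifiers: human review lowers the effective error rate, while ramping up the number of decisions multiplies both sides of the trade-off.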
- A taxonomy of algorithmic accountability. Cory Doctorow / 6:20 am Wed May 31, 2017 Boing Boing
- “Eminent computer scientist Ed Felten has posted a short, extremely useful taxonomy of four ways that an algorithm can fail to be accountable to the people whose lives it affects: it can be protected by claims of confidentiality (‘how it works is a trade secret’); by complexity (‘you wouldn’t understand how it works’); unreasonableness (‘we consider factors supported by data, even when there’s no obvious correlation’); and injustice (‘it seems impossible to explain how the algorithm is consistent with law or ethics’)”
Hi Rick,
Thanks for this post. The sections on algorithms reminded me of an excellent podcast episode. The podcast is called ‘99% invisible’, and the episode last week was on ‘Weapons of Math Destruction’ … essentially making the case for how problematic algorithm design can be. Featuring unintended negative consequences wrt race, sentencing and the criminal justice system, for instance. Worth a listen!
http://99percentinvisible.org/episode/the-age-of-the-algorithm/
Hi Sam. Thanks for the link, which I will follow.
There is a book of the same name by Cathy O’Neil (‘Weapons of Math Destruction’), which I have read: https://weaponsofmathdestructionbook.com/
rgds, rick