Hard Problems in Data Science: The Role of Humans

How do humans fit into the data science equation? With the advent of machine learning and artificial intelligence, it sometimes seems that human insight and input are becoming superfluous: the machine knows best, unhindered as it is by bias. During the third of four discussion sessions on hard problems in data science, Professor Roger Leenders of Tilburg University and Professor Chris Snijders of Eindhoven University of Technology dare to disagree about the human role in a future steered by algorithms.

Professor Roger Leenders — Using insights into human behavior to solve hard problems
‘When thinking about hard problems in data science, the first question that arises is what are the hard problems for society in general? Think of the imbalance of power in a multi-polar world, diversity, organized crime, terrorism, collective decision-making, happiness and well-being, health and sustainability, to name but a few issues. It is my contention that these hard problems share one common denominator: they all revolve around human behavior and how humans (re)act both individually and within a group.’

‘What I see in the area of data science is that a lot of successful work has been done getting and combining datasets, as well as building tools and methods to analyze data. What I also see, is that the main challenge for data science is not to revel in the sheer volume of data that is available or in the awesomeness of models and methods. Rather, data scientists must remember that it’s addressing questions that matter that’s important. And questions that matter are the ones that I’ve just named, all of which have human behavior as their essence. This is where social science and data science meet: social sciences are struggling with these questions and data science has a lot to offer in helping to find the answers or just to find new ways of looking at these problems. Using data science to understand and predict human behavior we can then create interventions that have real impact on real people.’

‘In the social sciences we have many general theories that explain how things work, but they are ill-specified, rest on overly strict assumptions, are typically rooted in a single discipline, and are insensitive to time. Such theories allow us to explain things after the fact, but are largely useless for predicting human behavior (who does what with whom, where and when?). Many social scientists think that collecting more data is the answer. Rather, we should have better (not more) data and better (not more) theory. We should be using data science to help us look at the how.’

‘One promising approach would be to study smaller (rather than larger) sets of individuals, say one hundred individuals instead of one million, and then collect extremely high-resolution data (facial expressions + physiological measurements + questionnaires + GPS data + phone logs et cetera). I call this big micro data: integrated longitudinal data from many disciplines. By looking at all these measurements together we can really see what kinds of events trigger what types of behavior, and how, why, and for how long. Preferably, such research would be carried out with people in the wild: fully time-logged and over an extended period. Funding for this type of data collection is huge, especially in the USA, where they are intent on predicting not only what is happening in the minds of terrorists, but also in the minds of their own soldiers. We are at the stage where we have theories about how a discussion in a given group of individuals will develop, but we still lack a theory about what is going on in your head at that same time, let alone the data and methods to test these theories in conjunction. I’m convinced that analyzing data on smaller sets of individuals, but more widely scoped and at much higher resolution, will lead to better theory.’
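Leenders’ ‘big micro data’ idea can be sketched as a data-integration step: many time-stamped streams per person are pivoted into a single timeline, so that behavior around an event becomes visible. The streams, values, and helper names below are hypothetical, invented purely to illustrate the shape of such data.

```python
from collections import defaultdict

# Hypothetical multi-stream records: (person_id, minute, stream, value).
# In a real study these would be sensor readings, questionnaire answers,
# GPS fixes, phone logs et cetera, all time-stamped per person.
records = [
    (1, 0, "heart_rate", 72),
    (1, 0, "gps_zone", "office"),
    (1, 5, "heart_rate", 95),
    (1, 5, "event", "heated_discussion"),
    (1, 10, "heart_rate", 88),
]

def integrate(records):
    """Pivot the separate streams into one row per (person, minute)."""
    table = defaultdict(dict)
    for person, minute, stream, value in records:
        table[(person, minute)][stream] = value
    return dict(table)

def around_event(table, event_name, window=5):
    """Collect heart-rate readings within `window` minutes of an event."""
    hits = []
    for (person, minute), row in table.items():
        if row.get("event") == event_name:
            for (p2, m2), other in table.items():
                if p2 == person and abs(m2 - minute) <= window and "heart_rate" in other:
                    hits.append((m2, other["heart_rate"]))
    return sorted(hits)

table = integrate(records)
print(around_event(table, "heated_discussion"))  # → [(0, 72), (5, 95), (10, 88)]
```

The point of the sketch is the join, not the analysis: once the streams share one timeline, the question ‘what happens to physiology around a discussion?’ becomes a simple lookup.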

‘One of the reasons why the social science theories we have are so weak can be explained by the so-called Law of the Hammer: “It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”’ This is where, Leenders believes, data science can make the difference: ‘Data science allows us to break away from the burden of methodology. We need no longer be limited by small sets of models. Some social scientists fear that data science might come at the expense of social science theory. However, I believe the opposite is true. Data science will in fact emancipate social science theory by helping it break free from the overly strict modeling assumptions we have had to make for decades. We advance social science theories and intervention development through cross-fertilization. Social scientists are no longer limited to hammers and nails! Hammers and nails aren’t enough to solve real problems. A more extensive, flexible toolbox gets us much further.’

Professor Chris Snijders — Why human intervention is overrated in model-based predictions
Agreeing to disagree with Leenders, Snijders opens the second half of the discussion with the observation that much of model-based prediction works on the assumption that a human is needed in the loop. ‘I think in many cases such human intervention is not required!’ Snijders remarks. ‘Having said that, in a certain sense there is always a human in the loop: a model doesn’t just appear out of the blue. But I am convinced that, as soon as a decent prediction model has been found, human interference is in many cases totally futile. Evidence for the usefulness of human interference exists, but is really rare. So, once a decent model has been found, first compare it fairly with human judgment, and if the model wins, go for it. This is especially true for models with data input and a more or less single-value point prediction (e.g. will you get that disease or not). In a substantial number of cases, models win and the benefits of monitoring are negligible. Nevertheless, in the real world we see such monitoring carried out for nearly all models.’

‘In fact, it’s surprising how little discussion there is about the role of humans in modeling. I personally believe much more in the power of using experts at the beginning of a modeling process! Humans can point the way as to where to look for knowledge in data. But why do models that clearly outperform humans still have human monitoring at the end? Perhaps because people don’t trust the system and prefer a human, regardless of their usefulness. Take the Underground in London: trains can drive themselves, yet there still are drivers on all of them. One might hope there is something useful for them to do, but I think the main reason is that it makes people feel safe.’

‘Is human intervention really required? To answer this question, we need to look at whether interventions improve outcomes. An outcome can also be that we feel better, safer, or whatever, with a human at the end of the loop. But there is a limit to this: we don’t want human intervention to be detrimental to the outcome, at least not too much. The tendency is to implement a model only if it is perfect, rather than implementing models in the cases where computers do better than humans.’

‘Let me expand on this just a bit. We have human and mechanical predictions. We do not know what a good model is, but then again, we don’t know what a good expert is either! What we should be doing is comparing model-based outcomes to expert-based outcomes and using whichever is better. The question should be: is the model better at predicting something than human experts are? Yet despite the evidence being very much against human judgment, we are inclined to question the assessments made by models far more than we question the assessments made by humans.’
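Snijders’ rule, compare fairly and then use whichever is better, is easy to make concrete. The outcomes and predictions below are invented for illustration; in practice they would be recorded cases (e.g. disease / no disease) for which both a model and an expert made a point prediction.

```python
# Hypothetical binary outcomes with one model prediction and one
# expert prediction per case; the numbers are illustrative only.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
model    = [1, 0, 1, 0, 0, 0, 1, 0]   # 7 of 8 correct
expert   = [1, 1, 0, 1, 0, 1, 1, 0]   # 5 of 8 correct

def accuracy(pred, truth):
    """Fraction of cases the predictions get right."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def pick_better(model_pred, expert_pred, truth):
    """Compare model and expert on equal grounds; return the winner."""
    m = accuracy(model_pred, truth)
    e = accuracy(expert_pred, truth)
    return ("model", m) if m >= e else ("expert", e)

print(pick_better(model, expert, outcomes))  # → ('model', 0.875)
```

The comparison itself is trivial; Snijders’ point is that it is rarely carried out on equal grounds, and that when the model wins, the result is rarely acted upon.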

Moving on to the value of data as such, Snijders remarks: ‘We should distinguish between the amount of data you have and the actual measurement of a concept of interest. We have a huge volume of data sets and are happy to work with them, but we often get data points that are irrelevant or that we aren’t really interested in. For instance, we have 17 or more sensors in our mobile phones. Collecting all the data from those sensors won’t actually tell you how, say, happy I was during the day; at least that seems quite far-fetched to me. As soon as we have a hunch that the data contains what we want to measure, by all means: collect it. Your measurements should fit your theories. Exploratory research should therefore be done within clear boundaries that specify beforehand, at least to some extent, what you are looking for.’

‘One of the hardest data science problems remaining is the implementation problem. This is way bigger than the modeling problem. We have loads of models that are extremely accurate, or at least more accurate than anything else we know. In fact, when we compare expert-based prediction with model-based prediction on equal grounds, in one third of the cases the model outperforms the experts. You’d expect these winning models to have been implemented, yet they often aren’t. Why not? It has everything to do with our human mindset (I know best) and with questions such as: if the model makes a mistake, who is to blame? In fact, my feeling is that the appropriateness of a model has little bearing on whether it will be implemented.’

‘This is another aspect where human intervention in data science becomes extremely important: we need humans to “sell” model-based predictions. We now have extremely useful and decent models that are not used! Ironically, we also see the opposite: models that we know are not particularly accurate are readily implemented and widely accepted. Take the weather forecast: it is impossible to predict the weather a month ahead, yet that is exactly what we do and what websites offer, based on an inaccurate model.’

‘Lastly, I’d like to mention transparency in data science. I regard transparency as mainly of interest to model makers: it helps improve the model and its predictions. The idea that those who experience the model’s predictions need to know how a prediction came about, I find much less convincing. Let me ask you: which treatment would you choose, one based on a completely transparent model with 80% accuracy, or one based on a non-transparent model with 95% accuracy? Where transparency can also help is in selling our models: explaining what the benefits are and why people should accept them.’




Citizen Capitalist | @seldondigital — @jadatascience — #lijst12

Arjan Haring
