How Humans Judge Machines

A Q&A with César Hidalgo, author of How Humans Judge Machines

A discussion on the future of artificial intelligence 

With the growing role of artificial intelligence in society, humans are looking towards machines with increasingly critical eyes. In How Humans Judge Machines, César A. Hidalgo—a Chilean-Spanish-American scholar and Director of ANITI’s Center for Collective Learning at the University of Toulouse—and a team of social psychologists and roboticists use hard science to reveal the biases that permeate human-machine interactions.

How would you feel about losing your job to a machine? Would you react differently to acts of discrimination depending on whether they were carried out by a machine or by a human? Asking probing questions like these, How Humans Judge Machines compares people’s reactions to actions performed by humans and machines and brings us one step closer to understanding the ethical consequences of AI.

We spoke to Hidalgo to learn more about his research and his thoughts on the role of AI in our future.


"How Humans Judge Machines" The MIT Press: What is the importance of learning how humans make judgments about machines?
César Hidalgo: We live in a society that combines machines and humans. To properly understand when it is beneficial to include machines in people’s lives and workplaces, we need to understand our own biases in how we judge them. If we are too harsh on machines, we will reject them unnecessarily. If we are too lenient, we will give them more power and responsibility than they warrant. Technophilia and technophobia are two extremes we need to avoid. By understanding how we judge machines in different scenarios, we contribute to a more nuanced understanding of when, why, and how we should incorporate machines into society.

The MIT Press: What inspired you to write this book?
César Hidalgo: This book tries to fill a gap in our literature. While much of the social sciences and humanities focus on interactions among people, there is relatively little work on understanding how humans judge machines. This changes when machines acquire the ability to learn, since learning affects the level of agency, predictability, and responsibility that a machine can have. In recent years, there has been a growing literature on how machines are used to judge humans. This is interesting work with a lot of merit. But it is also important to understand how humans judge machines. After all, humans are complex, and judging is a very complicated cognitive operation. So it is in the direction of humans judging machines that much of the complexity of human-machine interaction resides.

The MIT Press: In your research you found that, among many other things, people judge human actions by their intentions and machine actions by their outcomes. Why do you think this is?
César Hidalgo: This was an interesting finding that came out of data from dozens of experiments. While in the book I explore its potential causes and implications, I am not ready to say that I fully understand it. On the one hand, this finding may speak, in part, to a poor understanding of artificial intelligence. When presenting this book to an audience, I’ve found people often forget that AI is not mechanical, but involves an ability to learn. So they react as if machine behavior were nothing more than following predefined rules. This is an incorrect mental model for machines programmed to satisfy goals and figure out procedures on their own. On the other hand, this may also speak to the way in which we judge humans. Intention is an important modifier of human judgment. It is the basis of criminal law, and the difference, for instance, between murder and manslaughter. Humans are always trying to read between the lines, wondering what other people are trying to get out of a situation or what their angle is. So we are wired to figure out not just what a person did, but why, and hence we gravitate towards a model of intention when judging the actions of other humans.

The MIT Press: In your book you outline the distinction between a normative approach to research, which focuses on how the world should be, and a positive approach, which describes the world as it is. How Humans Judge Machines, you write, takes a positive approach. Why did you take this approach in your research?
César Hidalgo: Thinking about how the world should be, without knowing how it is, can be dangerous. It can lead to wrongful judgments and misguided conclusions. As someone who grew up in the natural sciences, I have enormous respect for human ignorance and for the ability of experiments to both surprise and illuminate us. When looking at the literature on AI ethics, I saw many normative studies, but little data on how we judge machines, especially compared to how we judge humans. I decided to help reduce this gap in the literature by conducting dozens of experiments, not because those experiments tell us how we should judge or behave, but because they inform us about how we currently judge and behave.

The MIT Press: Were any of your findings particularly surprising to you?
César Hidalgo: A few. One of them was the finding that people judge human intentions using a bimodal distribution, assigning people either a lot of intention or very little, but judge machine intentions using a unimodal distribution. This implies that in accidental scenarios people judge machine actions as more intentional, not because they assign a lot of intention to machines, but because they are willing to fully excuse humans, and not machines, in such situations.
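To make the shape of that finding concrete, here is a minimal illustrative sketch in Python. The distributions and every parameter below are invented for illustration; they are not the book’s experimental data, only a toy model of a bimodal versus a unimodal pattern of intention ratings in an accidental scenario.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical intention ratings on a 0-1 scale for an accidental
# scenario. All numbers here are made up for illustration.

# Humans: a bimodal mixture. Most raters fully excuse the person
# (mode near 0), while a minority assigns plenty of intention (mode near 1).
excused = rng.random(n) < 0.8
human = np.where(excused, rng.beta(1, 9, n), rng.beta(9, 1, n))

# Machines: a unimodal distribution centered on a middling value,
# so the machine is rarely fully excused.
machine = rng.beta(4, 4, n)

# The human mean is pulled down by the large "fully excused" mode, while
# the machine mean stays near the middle: machines end up looking more
# intentional in accidents even though no one rates them very highly.
print(f"mean human intention rating:   {human.mean():.2f}")
print(f"mean machine intention rating: {machine.mean():.2f}")
```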

The MIT Press: What are some of the most common misconceptions you hear about AI and machines?
César Hidalgo: The most common misconception is that people think of machines as mechanical rule followers. That conception, however, no longer applies to machines with the ability to learn. Faced with the same problem, and with similar inputs, machine learning algorithms can learn different strategies. This gives them a certain level of “agency” that other types of machines lack.
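A toy demonstration of that point (my own sketch, not an example from the book, and assuming scikit-learn is available): train the same model on the same data twice, changing nothing but the random seed, and the two runs can settle on visibly different strategies.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy dataset: an XOR-like problem with two input features.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Train the same model on the same data twice;
# the only thing that changes is the random seed.
models = [
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                  random_state=seed).fit(X, y)
    for seed in (0, 1)
]

# Compare the two learned strategies on a grid of unseen inputs.
xs = np.linspace(-1, 1, 50)
grid = np.array([[a, b] for a in xs for b in xs])
disagreement = np.mean(models[0].predict(grid) != models[1].predict(grid))
print(f"inputs where the two runs disagree: {disagreement:.1%}")
```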

The MIT Press: Artificial intelligence is at the heart of many recent hot-button issues, including data privacy and the usage of autonomous vehicles. Why do you think AI is so controversial?
César Hidalgo: New technologies are always controversial, especially when they affect society. But controversies are also local. In the US, artificial intelligence is extremely controversial because it emerged at a time of little trust in institutions, and during a peak of racial and political polarization. So much of the controversy we see in the US about AI is actually about the US, not just about AI. Of course, the US holds enormous soft power, and influences attitudes in other places. But many of the issues that are very controversial here are less so in other parts of the world. That’s why we are working on research to extend these experiments to other geographies and cultures.

The MIT Press: Where in our society do you see machine learning and artificial intelligence having the biggest impact? Based on how humans judge machines, are there any areas of life where you would expect AI will not have much, if any, influence?
César Hidalgo: I don’t know if this is where AI will have the largest impact, but where I would like that impact to be is in collective intelligence. At the end of the day, AI is just another piece of the human puzzle. To me, the puzzle has always been how we can become collectively intelligent. AI, just like communication technologies, medicine, and transportation, can contribute to collective intelligence, or not. My hope is that we find ways to integrate AI in society not simply to speed up or automate processes, but to improve the intelligence of teams, organizations, and nations. If we succeed at that, AI will have succeeded.

The MIT Press: Overall, do you feel that humans judge machines fairly?
César Hidalgo: Our research shows clearly that humans judge people and machines differently. Is that fair? That’s hard to tell. A naive definition of fairness is to expect perfect equality, but that is not always the best option. Our research shows that people set a much higher bar for AI in scenarios involving physical harm or violence. Maybe that’s a good thing. But our research also shows that people do not give AI the same amount of credit when it helps improve things. This could be a harmful difference, since we may be missing opportunities to improve our society because of algorithm aversion. Overall, I don’t expect people to judge humans and machines equally. But what are the right differences, and how large should they be? That is a tough normative question.

The MIT Press: What do you hope readers and researchers take away from How Humans Judge Machines?
César Hidalgo: Nuance. This is a book that explores a very simple question: How do humans judge machines compared to humans? The fact that we had little data about this until now tells us that we are at the beginning of a long research journey. This book provides nearly 100 scenarios on AI ethics, as well as a statistical study of some of the demographic and psychological factors that explain differences in the way people judge humans and machines. The book serves as an introduction to many topics, but also as a reference book, with data on dozens of scenarios with wide applicability.

