How do you design interactions between people and machines when the machines are now doing tasks that humans used to do? What do people expect from these machines, and how do people react to their design?
Anthropomorphism And Trust
Adam Waytz, Joy Heafner, and Nicholas Epley (2014) wanted to know whether giving machines more human-like qualities would increase people's trust in them. In other words, they tested how anthropomorphism affects trust. They define anthropomorphism as:
a process of inductive inference whereby people attribute to nonhumans distinctively human characteristics, particularly the capacity for rational thought (agency) and conscious feeling (experience)
The researchers’ idea was that if a machine seems more human, it will also seem more thoughtful and mindful. Because people trust other people they perceive as thoughtful, the researchers hypothesized that people would also trust machines that seem thoughtful. Thoughtfulness is something that people normally attribute to other people. If people think an autonomous car or a robot reading x-rays is just a “mindless” machine, they won’t trust it as much. Conversely, if the machine seems to be “thinking” more like a human, then people will assume the machine is better able to control its own actions: it is being mindful, not mindless.
In their experiment, the researchers used a driving simulator and engineered the simulation so that there was an accident in which participants were struck by an oncoming car. The simulation made it obvious that the accident was caused by a human driver in the other car.
Participants were assigned to one of three conditions: normal, agentic, or anthropomorphic:
- In the normal condition, participants drove the car themselves, with no autonomous features.
- In the agentic condition, the participants drove an autonomous car. The car controlled its own steering and speed. Participants in this condition were told what was going to happen, and how and when to use the autonomous features.
- In the anthropomorphic condition, the participants drove the same autonomous car, but in addition to being told what was going to happen and how to use the autonomous features, the experimenter referred to the car by the name Iris and called the car “she.” The car was also given a human voice, which spoke at certain points during the simulation and delivered the instructions.
Participants in the agentic and anthropomorphic conditions first drove a practice course to try out the autonomous features. Then participants in all three conditions drove the main course.
Participants in the anthropomorphic condition blamed their car less for the accident than did those in the normal or agentic conditions. Participants in the anthropomorphic group rated the car as having more human-like mental capacities than people in the agentic group. They trusted their car more, and showed a more relaxed heart rate when the “accident” occurred.
Beware Of The Uncanny Valley
Some humanizing of a machine makes people more willing to trust it, but how far does that go?
Anthropomorphizing entails acting like a human, but not necessarily looking like one. People who design robots have to be careful about what’s called “the uncanny valley.”
The uncanny valley is the idea that as things, particularly robots and animated characters, become more human-like, they eventually hit a point just short of full realism where people find them creepy rather than appealing. The problem lies in small inconsistencies: for example, the skin texture or the reflection in the eyes may seem slightly off. People notice these flaws, even unconsciously, because these are attributes they observe daily in interactions with other people.
The uncanny valley theory originated with Masahiro Mori, who was working in robotics in the 1970s. His original article from that era was only recently translated into English (2012).
Mori’s theory was that people’s reactions to robots range from lack of connection, to comfort and connection, to alienation, depending on how lifelike the robot is. If the robot is somewhat like a person, then people feel empathy and connection. But if it becomes very human-like without getting past the “not quite human” feeling, people’s reaction turns to revulsion. Figure 53.1 graphs people’s comfort level against the robot’s or machine’s degree of human-likeness. The place where the comfort level dips dramatically is the uncanny valley.

FIGURE 53.1 The uncanny valley.
Research by Christine Looser (2010) shows that it is the deadness of the eyes that makes people perceive a robot as not human, and therefore as creepy.
The uncanny valley exists for robots, machines, and animated characters.
Takeaways
- When you design an interface for a machine that’s doing tasks that humans usually do, build in some human-like (anthropomorphic) characteristics.
- Don’t design a machine or animation that looks and acts almost exactly like a human unless you can take the realism all the way; otherwise it will fall into the uncanny valley.