Here's why we struggle to trust robots after a few mistakes
Humans are becoming less forgiving of robots after multiple mistakes
- Similar to human coworkers, robots can make mistakes that violate a human’s trust.
- A new study looks at four strategies that might repair the damage from this loss of trust – apologies, denials, explanations and promises of trustworthiness.
- However, the results showed that after three mistakes, none of these strategies are effective in fully repairing the human's trust in the robot.
- As a result, robots may not be given the opportunity to learn from their mistakes, and humans could lose out on their potential benefits, the researchers say.
Humans are less forgiving of robots after they make multiple mistakes—and the trust is difficult to get back, according to a new study.
Similar to human coworkers, robots can make mistakes that violate a human’s trust. When mistakes happen, humans often come to see the robot as less trustworthy, which ultimately erodes their trust in it.
The study examines four strategies that might repair and mitigate the negative impacts of these trust violations. These trust strategies are: apologies, denials, explanations, and promises of trustworthiness.
The researchers conducted an experiment where 240 participants worked with a robot coworker to accomplish a task, which sometimes involved the robot making mistakes. The robot violated the participant’s trust and then provided a particular repair strategy.
Results indicated that after three mistakes, none of the repair strategies ever fully repaired trustworthiness.
“By the third violation, strategies used by the robot to fully repair the mistrust never materialized,” says Connor Esterwood, a researcher at the University of Michigan School of Information and lead author of the study in Computers in Human Behavior.
Esterwood and coauthor Lionel Robert, professor of information, also note that this research introduces theories of forgiving, forgetting, informing, and misinforming.
The study’s results have two implications. Esterwood says researchers must develop more effective repair strategies to help robots rebuild trust after mistakes. Robots also need to be sure they have mastered a new task before attempting to repair a human’s trust in them.
“If not, they risk losing a human’s trust in them in a way that cannot be recovered,” Esterwood says.
What do the findings mean for human-human trust repair? Trust is never fully repaired by apologies, denials, explanations, or promises, the researchers say.
“Our study’s results indicate that after three violations and repairs, trust cannot be fully restored, thus supporting the adage ‘three strikes and you’re out,’” Robert says. “In doing so, it presents a possible limit that may exist regarding when trust can be fully restored.”
Even when a robot can improve after making a mistake, it may never be given the opportunity to do so, Esterwood says. As a result, the benefits of working with robots are lost.
Robert notes that people may attempt to work around or bypass the robot, reducing their own performance. These performance problems could, in turn, lead to them being fired for lack of performance and/or compliance, he says.