Here's why we struggle to trust robots after a few mistakes

Humans are becoming less forgiving of robots after multiple mistakes. Image: PEXELS / Alex Knight

Jared Wadley, University of Michigan
Writer, Futurity
  • Like human coworkers, robots can make mistakes that violate a human’s trust.
  • A new study looks at four strategies that might repair the damage from these trust violations: apologies, denials, explanations, and promises of trustworthiness.
  • However, the results showed that after three mistakes, none of these strategies fully repaired the human’s trust in the robot.
  • As a result, robots may not be given the opportunity to learn from their mistakes, and humans could be losing out on their potential benefits, the researchers say.

Humans are less forgiving of robots after they make multiple mistakes—and the trust is difficult to get back, according to a new study.

Like human coworkers, robots can make mistakes that violate a human’s trust in them. When mistakes happen, humans see the robot as less trustworthy, which ultimately decreases their trust in it.

The study examines four strategies that might repair trust and mitigate the negative impact of these violations: apologies, denials, explanations, and promises of trustworthiness.

The researchers conducted an experiment in which 240 participants worked with a robot coworker to accomplish a task. At times the robot made mistakes that violated the participant’s trust, and after each violation it offered one of the four repair strategies.
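To make that design concrete, the sketch below is a minimal, purely illustrative Python simulation of a comparable trial structure. The four strategy labels mirror the conditions named above, but the starting trust score, the drop after each mistake, and the partial recovery after each repair attempt are invented assumptions for illustration, not measures or results from the study.

```python
import random

# Purely illustrative sketch of the trial structure described above: a robot
# coworker commits up to three trust violations, and each violation is
# followed by a repair attempt (apology, denial, explanation, or promise).
# All numeric values below are invented assumptions, not data from the study,
# and the repair step behaves identically in every condition in this sketch.

REPAIR_STRATEGIES = ["apology", "denial", "explanation", "promise"]

def simulate_participant(n_violations: int = 3) -> float:
    """Return a hypothetical trust score in [0, 1] after repeated violations."""
    trust = 1.0                                # assume full trust at the outset
    for _ in range(n_violations):
        trust -= random.uniform(0.2, 0.4)      # each mistake erodes trust
        trust += random.uniform(0.05, 0.15)    # each repair recovers only part of it
        trust = max(0.0, min(1.0, trust))      # keep the score in [0, 1]
    return trust

if __name__ == "__main__":
    random.seed(0)
    # 240 participants split evenly across the four repair-strategy conditions
    for strategy in REPAIR_STRATEGIES:
        scores = [simulate_participant() for _ in range(60)]
        print(f"{strategy:12s} mean trust after 3 violations: {sum(scores) / len(scores):.2f}")
```

Running the script prints a hypothetical mean trust score per condition; in the actual experiment, trust was measured from participants’ responses rather than simulated.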

Results indicated that after three mistakes, none of the repair strategies ever fully repaired trustworthiness.

“By the third violation, strategies used by the robot to fully repair the mistrust never materialized,” says Connor Esterwood, a researcher at the University of Michigan School of Information and lead author of the study, published in Computers in Human Behavior.

Esterwood and coauthor Lionel Robert, professor of information, also note that this research introduces theories of forgiving, forgetting, informing, and misinforming.

The results have two implications, Esterwood says: researchers must develop more effective repair strategies to help robots rebuild trust after these mistakes, and robots need to be sure that they have mastered a new task before attempting to repair a human’s trust in them.

“If not, they risk losing a human’s trust in them in a way that cannot be recovered,” Esterwood says.

What do the findings mean for human-human trust repair? Trust is never fully repaired by apologies, denials, explanations, or promises, the researchers say.

“Our study’s results indicate that after three violations and repairs, trust cannot be fully restored, thus supporting the adage ‘three strikes and you’re out,’” Robert says. “In doing so, it presents a possible limit that may exist regarding when trust can be fully restored.”

Even when a robot can improve by adapting after a mistake, it may not be given the opportunity to do so, Esterwood says. As a result, the benefits robots offer are lost.

Robert notes that people may attempt to work around or bypass the robot, reducing their own performance. The resulting performance problems could, in turn, lead to them being fired for lack of performance or compliance, he says.
