Why empowering robots is the key to working with them

Scientists are developing a concept that will allow robots to keep their options open in order to protect us. Image: REUTERS/Michael Dalder

Conn Hastings

Scientists at the University of Hertfordshire in the UK have developed a concept called Empowerment to help robots to protect and serve humans, while keeping themselves safe.

Robots are becoming more common in our homes and workplaces, and this trend looks set to continue. Many robots will have to interact with humans in unpredictable situations. For example, self-driving cars need to keep their occupants safe while protecting the car itself from damage. Robots caring for the elderly will need to adapt to complex situations and respond to their owners’ needs.

Recently, thinkers such as Stephen Hawking have warned about the potential dangers of artificial intelligence, and this has sparked public discussion. “Public opinion seems to swing between enthusiasm for progress and downplaying any risks, to outright fear,” says Daniel Polani, a scientist involved in the research, which was recently published in Frontiers in Robotics and AI.

However, the concept of “intelligent” machines running amok and turning on their human creators is not new. In 1942, science fiction writer Isaac Asimov proposed his three laws of robotics, which govern how robots should interact with humans. Put simply, these laws state that a robot should not harm a human, or allow a human to be harmed. The laws also aim to ensure that robots obey orders from humans, and protect their own existence, as long as this doesn’t cause harm to a human.

The laws are well-intentioned, but they are open to misinterpretation, especially as robots don’t understand nuanced and ambiguous human language. In fact, Asimov’s stories are full of examples where robots misinterpreted the spirit of the laws, with tragic consequences.

One problem is that the concept of “harm” is complex, context-specific and difficult to explain clearly to a robot. If a robot doesn’t understand “harm”, how can it avoid causing it? “We realized that we could use different perspectives to create ‘good’ robot behavior, broadly in keeping with Asimov’s laws,” says Christoph Salge, another scientist involved in the study.

The concept the team developed is called Empowerment. Rather than trying to make a machine understand complex ethical questions, it is based on robots always seeking to keep their options open. “Empowerment means being in a state where you have the greatest potential influence on the world you can perceive,” explains Salge. “So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.”
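To make the idea concrete, here is a minimal sketch, not the team’s published code, of how Empowerment can be quantified. In the researchers’ formal definition, n-step Empowerment is the channel capacity between an agent’s possible action sequences and the sensor states they lead to; in a deterministic, fully observed gridworld this reduces to the logarithm of the number of distinct states reachable in n steps. The grid size, wall layout and horizon below are illustrative assumptions.

```python
import math
from itertools import product

# Minimal sketch, not the team's published code: in a deterministic,
# fully observed gridworld, n-step Empowerment reduces to log2 of the
# number of distinct states the agent can reach in n steps. The grid
# size, walls and horizon are illustrative assumptions.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # N, S, E, W, stay

def step(state, action, walls, size):
    """Deterministic transition: move unless blocked by a wall or the edge."""
    x, y = state[0] + action[0], state[1] + action[1]
    if (x, y) in walls or not (0 <= x < size and 0 <= y < size):
        return state  # bump: the agent stays where it is
    return (x, y)

def empowerment(state, walls, size, n=3):
    """log2 of the number of distinct states reachable in n steps."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a, walls, size)
        reachable.add(s)
    return math.log2(len(reachable))

walls = {(2, 2)}
print(empowerment((0, 0), walls, size=5))  # corner: ~3.32 bits
print(empowerment((2, 3), walls, size=5))  # open space: 4.0 bits
```

An agent in a corner can reach fewer distinct cells than one in open space, so its Empowerment is lower; a robot that gets stuck, or runs out of power away from its charging station, sees its Empowerment collapse toward zero, matching the power-station example above.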

The team has coded the Empowerment concept mathematically so that it can be adopted by a robot. The researchers originally developed the concept in 2005; in a recent key development, they expanded it so that the robot also seeks to maintain the human’s Empowerment. “We wanted the robot to see the world through the eyes of the human with whom it interacts,” explains Polani. “Keeping the human safe consists of the robot acting to increase the human’s own Empowerment.”

“In a dangerous situation, the robot would try to keep the human alive and free from injury,” says Salge. “We don’t want to be oppressively protected by robots to minimize any chance of harm, we want to live in a world where robots maintain our Empowerment.”
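The robot-human coupling Polani and Salge describe can be sketched in the same toy world. This is a hypothetical simplification, not the published formulation: the robot treats its own body as an obstacle in the human’s environment and picks whichever of its moves leaves the human’s Empowerment highest; best_robot_action and the no-collision rule are invented for illustration.

```python
# Hypothetical extension of the sketch above (reusing ACTIONS, step and
# empowerment): the robot treats its own body as an obstacle in the
# human's world and picks the move that leaves the human's Empowerment
# highest. This is an illustrative simplification, not the paper's method.

def best_robot_action(robot, human, walls, size, n=3):
    """Return the robot action that maximizes the human's Empowerment."""
    # The robot may not move onto the human's cell.
    moves = [a for a in ACTIONS if step(robot, a, walls, size) != human]
    return max(moves, key=lambda a: empowerment(
        human, walls | {step(robot, a, walls, size)}, size, n))

# A robot crowding the human in a corner steps aside, because standing
# next to them blocks paths and lowers their Empowerment.
print(best_robot_action(robot=(1, 0), human=(0, 0), walls=set(), size=5))
# -> (0, 1): the robot moves north, out of the human's way
```

In spirit, this is the altruistic coupling the article describes: the robot’s objective is defined through the human’s options rather than through an explicit model of harm.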

This altruistic Empowerment concept could power robots that adhere to the spirit of Asimov’s three laws, from self-driving cars to robot butlers. “Ultimately, I think that Empowerment might form an important part of the overall ethical behaviour of robots,” says Salge.
