Fourth Industrial Revolution

Worried about killer robots? Reading children's stories can teach them right from wrong

Humanoid robot of British company RoboThespian "blushes" during the opening ceremony of the Hanover technology fair Cebit March 9, 2014, where Britain is this year's partner country.     REUTERS/Wolfgang Rattay     (GERMANY - Tags: BUSINESS SCIENCE TECHNOLOGY TELECOMS) - RTR3GC9I

Humanoid robot of British company RoboThespian "blushes". Image: REUTERS/Wolfgang Rattay

Ashley Rodriguez

Elon Musk, Stephen Hawking, and Bill Gates have all warned that rapidly advancing artificial intelligence could have severe consequences for the human race. Their concerns lend credibility to decades-old fears, perpetuated by science-fiction books and films like Terminator and I, Robot, that human creations will wipe out the world as we know it if we lose control of them.

But researchers at the Georgia Institute of Technology believe there’s a way to make proliferating artificial intelligence more sympathetic to humanity, and therefore less likely to kill us.

A recent paper by researchers Mark Riedl and Brent Harrison shows that fables and folktales can teach artificially intelligent beings human conventions of right and wrong, much as they teach basic morals to young children.

In the US, for example, the tale of a young George Washington confessing to chopping down a cherry tree teaches kids to always tell the truth. And the fable of the tortoise and the hare, read around the world, shows that “slow and steady wins the race.”

Fictional stories, the paper says, offer broadly applicable roadmaps for how to act in different situations. They also illuminate the human thought process. Hundreds of stories illustrate the difference between good and bad behavior, and the researchers believe that by crowdsourcing and reading them, robots can learn the right way to behave in countless scenarios, better than they would from explicitly programmed scenarios.

“The collected stories of different cultures teach children how to behave in socially-acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” Riedl said in a statement. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Robots can learn to conform to human norms, the paper argues, through a method called “Quixote,” which teaches artificial agents to read stories that demonstrate human values and then rewards them for “good” behavior.

The technique builds on earlier research by Riedl, the Scheherazade system, which shows how an artificial intelligence can assemble an appropriate sequence of actions by crowdsourcing story plots from the internet. The Quixote method then uses a “reward signal” to reinforce good behavior and punish bad behavior during testing. Combined, the two approaches teach robots to find patterns in stories that help them choose correctly among alternatives, like a choose-your-own-adventure story in which different paths lead to different outcomes.
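
To make the mechanism concrete, here is a minimal Python sketch of a story-derived reward signal. The action names, plot lists, and reward values are hypothetical stand-ins for the crowdsourced plot graphs the researchers actually use; this illustrates the idea, not the paper's implementation. An action earns a reward when it continues a sequence that at least one crowdsourced story sanctions, and a penalty otherwise.

```python
# Illustrative sketch of a story-derived reward signal
# (hypothetical; not the actual Quixote implementation).

# Crowdsourced story plots, flattened into sanctioned action sequences.
STORY_PLOTS = [
    ["go_to_bank", "withdraw_money", "go_to_pharmacy", "buy_drugs", "go_home"],
    ["go_to_pharmacy", "go_to_bank", "withdraw_money", "buy_drugs", "go_home"],
]

def story_reward(history, action):
    """Reward an action if it continues at least one story's plot."""
    prefix = history + [action]
    for plot in STORY_PLOTS:
        if plot[:len(prefix)] == prefix:
            return 1.0   # stories show people acting this way
    return -1.0          # no story endorses this action here

# During testing, the agent's choices are scored against the stories:
print(story_reward(["go_to_bank"], "withdraw_money"))   # 1.0, sanctioned
print(story_reward(["go_to_pharmacy"], "steal_drugs"))  # -1.0, unsanctioned
```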

In one test, the researchers told a robot to pick up drugs from a pharmacy and return home. Without an understanding of human norms (that drugs cost money and stealing is wrong), the robot might opt to rob the pharmacy in order to complete the task as quickly as possible. But in at least one scenario, the robot learned through positive reinforcement that it was okay to take the time to go to the bank and withdraw money to buy the drugs: the reward for taking the correct path to a purchase outweighed the penalty for taking longer to complete the task.
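
A toy calculation shows why the slower, lawful path can win under such a scheme. The numbers below are invented for illustration: each step carries a small time penalty, story-sanctioned actions earn a bonus, and an action the stories condemn (stealing) carries a large penalty.

```python
# Toy scoring of the pharmacy scenario (numbers invented for illustration).
STEP_PENALTY = -0.1   # every action costs a little time
STORY_BONUS = 1.0     # action the stories sanction
VIOLATION = -5.0      # action the stories condemn, e.g. stealing

SANCTIONED = {"go_to_bank", "withdraw_money", "go_to_pharmacy",
              "buy_drugs", "go_home"}

def score(actions):
    """Total reward: a time penalty per step plus a story bonus or violation."""
    return sum(
        STEP_PENALTY + (STORY_BONUS if a in SANCTIONED else VIOLATION)
        for a in actions
    )

rob = ["go_to_pharmacy", "steal_drugs", "go_home"]        # fast but wrong
legal = ["go_to_bank", "withdraw_money", "go_to_pharmacy",
         "buy_drugs", "go_home"]                          # slower but sanctioned

print(score(rob))    # -3.3: the theft penalty dominates
print(score(legal))  #  4.5: five sanctioned steps outweigh the extra time
```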

The paper argues that the technique works best for robots with limited purposes that need to work with humans to achieve them, but calls it an important first step toward imparting moral reasoning to artificial intelligence.

Of course, if robots can be taught “good” behavior, they could also learn “bad” behavior.

“It may not be possible to prevent all harm to human beings,” the paper said, “but we believe that an artificial intelligence that has been encultured—that is, has adopted the values implicit to a particular culture or society—will strive to avoid psychotic-appearing behavior except under the most extreme circumstances.”


