
Can we trust robots to make ethical decisions?


As the technology becomes more advanced, we may be heading for disaster. Image: REUTERS/Wolfgang Rattay

Alex Gray
Senior Writer, Forum Agenda

Once the preserve of science-fiction movies, artificial intelligence is one of the hottest areas of research right now.

While the idea behind AI is to make our lives easier, there is concern that as the technology becomes more advanced, we may be heading for disaster.

How can we be sure, for instance, that artificially intelligent robots will make ethical choices? There are plenty of instances of artificial intelligence gone wrong. Here are five real-life examples:

1. The case of the rude and racist chatbot

Tay, Microsoft’s millennial AI chatbot, was meant to be friendly: it would sound like a teenage girl and engage in light conversation with its followers on Twitter. However, within 24 hours it had been taken off the site because of its racist, sexist and anti-Semitic comments.

It was, said Microsoft, “a machine learning project, designed for human engagement. It is as much a social and cultural experiment as it is technical.”

2. Self-driving cars having to make ethical decisions

How can self-driving cars be programmed to make an ethical choice when a collision is unavoidable? Humans would seriously struggle to decide whether to slam into a wall and kill all the passengers, or to hit pedestrians in order to save those passengers. So how can we expect a robot to make that split-second decision?

3. Robots showing human biases

Less physically harmful, but just as worrying, are robots that learn racist behaviour. When robots were asked to judge a beauty competition, they overwhelmingly chose white winners. Although the majority of contestants were white, many people of colour also submitted photos to the competition, including large numbers from India and Africa.

4. Image tagging gone wrong

In a similar case, image-tagging software developed by Google and Flickr suffered several disturbing mishaps, such as labelling a photo of two black people as gorillas and tagging a concentration camp as a “jungle gym”. Google apologized and admitted the technology was a work in progress: “Lots of work being done and lots is still to be done, but we’re very much on it.”

5. A cleaning robot that breaks things

One paper recently looked at how artificial intelligence can go wrong in unexpected ways. For instance, what happens if a robot, whose job it is to clean up mess, decides to knock over a vase, rather than going round it, because it can clean faster by doing so?
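Researchers typically describe this as a side effect of a mis-specified objective: the robot is rewarded only for getting the cleaning done, so nothing in its goal tells it that the vase matters. Here is a toy sketch of the idea in Python; the plans, numbers and penalty are invented for illustration and are not taken from the paper.

```python
# Toy illustration of a mis-specified objective: the reward only measures how
# quickly the mess gets cleaned, so side effects like a broken vase are invisible.
plans = {
    "go_around_vase":  {"time_to_clean": 12.0, "vase_intact": True},
    "knock_over_vase": {"time_to_clean": 9.0,  "vase_intact": False},
}

def misspecified_reward(plan):
    # Faster cleaning = higher reward; nothing else counts.
    return -plan["time_to_clean"]

def safer_reward(plan, side_effect_penalty=100.0):
    # One common fix: give unwanted side effects an explicit cost.
    penalty = 0.0 if plan["vase_intact"] else side_effect_penalty
    return -plan["time_to_clean"] - penalty

print(max(plans, key=lambda p: misspecified_reward(plans[p])))  # knock_over_vase
print(max(plans, key=lambda p: safer_reward(plans[p])))         # go_around_vase
```

Under the naive objective the faster, destructive plan wins; once the broken vase carries an explicit cost, going around it does.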

But it’s not the robot’s fault

Robots don’t always get it wrong. In one instance, people were asked to guess the ethnicity of a group of Asian faces, and specifically to tell the difference between Chinese, Japanese and Korean faces. They got it right about 39% of the time; an AI system got it right about 75% of the time.

When things do go wrong, one explanation is that algorithms, the computer code that powers the decision-making, are written by humans and are therefore subject to all the inherent biases that we have. Another reason, and the one given in the beauty contest case, is that an algorithm can only work with the data it has. In that instance, it had more white faces to learn from than any other group, and it based its results on that.
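A crude sketch of how that data imbalance skews the outcome, assuming a made-up training mix and a deliberately over-simplified scoring rule (real contest models are far more complex, but skewed data pushes them the same way):

```python
from collections import Counter

# Hypothetical demographic mix of the faces the judging model was trained on.
training_faces = ["white"] * 900 + ["indian"] * 60 + ["african"] * 40
seen = Counter(training_faces)

def winner_score(face_group):
    # Over-simplified rule: faces resembling what the model has seen most score highest.
    return seen[face_group] / len(training_faces)

for group in ("white", "indian", "african"):
    print(group, winner_score(group))   # white 0.9, indian 0.06, african 0.04
```

The model never decides to be biased; the skew in what it was shown is enough to skew what it prefers.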

While researchers continue to look for ways to make artificial intelligence as safe as it can be, they are also working on a kill switch, so that in a worst-case scenario a human can take over.
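The research problem is subtler than it sounds (the system must not learn to resist being switched off), but at its simplest a kill switch is just an override that the control loop checks before every action. A minimal sketch, with a stand-in DemoRobot class invented purely for illustration:

```python
import threading
import time

stop_signal = threading.Event()   # the "kill switch" a human operator can flip

class DemoRobot:
    def step(self):
        print("working...")

def control_loop(robot, max_steps=50):
    for _ in range(max_steps):
        if stop_signal.is_set():  # the human override takes priority over the task
            print("stopped by operator")
            return
        robot.step()
        time.sleep(0.1)

worker = threading.Thread(target=control_loop, args=(DemoRobot(),))
worker.start()
time.sleep(0.35)
stop_signal.set()                 # the operator takes over
worker.join()
```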

