Can we trust robots to make ethical decisions?
Once the preserve of science-fiction movies, artificial intelligence is one of the hottest areas of research right now.
While the idea behind AI is to make our lives easier, there is concern that as the technology becomes more advanced, we may be heading for disaster.
How can we be sure, for instance, that artificially intelligent robots will make ethical choices? There are plenty of instances of artificial intelligence gone wrong. Here are five real-life examples:
Chatbot Tay, Microsoft’s AI millennial chatbot, was meant to sound like a teenage girl and engage in friendly, light conversation with her followers on Twitter. However, within 24 hours she had been taken off the site because of her racist, sexist and anti-Semitic comments.
It was, said Microsoft, “a machine learning project, designed for human engagement. It is as much a social and cultural experiment as it is technical.”
How can self-driving cars be programmed to make an ethical choice when it comes to an unavoidable collision? Humans would seriously struggle to decide whether to slam into a wall and kill all passengers, or to hit pedestrians and save those passengers. So how can we expect a robot to make that split-second decision?
Less physically harmful, but just as worrying, are robots that learn racist behaviour. When robots were asked to judge a beauty competition, they overwhelmingly chose white winners. That’s despite the fact that, while the majority of contestants were white, many people of colour submitted photos to the competition, including large numbers from India and Africa.
In a similar case, image tagging software developed by Google and Flickr suffered many disturbing mishaps, such as labelling a photo of two black people as gorillas and tagging a concentration camp a “jungle gym”. Google apologized and admitted the technology was a work in progress: “Lots of work being done and lots is still to be done, but we’re very much on it.”
One paper recently looked at how artificial intelligence can go wrong in unexpected ways. For instance, what happens if a robot, whose job it is to clean up mess, decides to knock over a vase, rather than going round it, because it can clean faster by doing so?
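To make that failure mode concrete, here is a minimal, hypothetical sketch (not taken from the paper) of a misspecified objective. If the robot is scored only on how quickly it cleans, with no cost attached to breaking things, the plan that knocks the vase over scores higher than the plan that goes around it; adding the missing penalty flips the choice.

```python
# Hypothetical illustration of a misspecified objective: the robot is
# rewarded only for cleaning quickly, with no penalty for side effects.

def reward(plan):
    # Higher reward for finishing the cleaning job sooner.
    return 100 - plan["seconds_to_clean"]

plans = [
    {"name": "go around the vase",  "seconds_to_clean": 60, "vase_broken": False},
    {"name": "knock the vase over", "seconds_to_clean": 45, "vase_broken": True},
]

best = max(plans, key=reward)
print(best["name"])  # -> "knock the vase over": the faster, destructive plan wins

# Adding the missing term (a cost for breaking the vase) changes the outcome:
def safer_reward(plan):
    return reward(plan) - (1000 if plan["vase_broken"] else 0)

print(max(plans, key=safer_reward)["name"])  # -> "go around the vase"
```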
Robots don’t always get it wrong. In one instance, people were asked to guess the ethnicity of a group of Asian faces, and specifically to tell the difference between Chinese, Japanese and Korean faces. The humans got it right about 39% of the time; the machine, 75% of the time.
When things do go wrong, one explanation is that algorithms, the computer code that powers the decision-making, are written by humans and are therefore subject to all the inherent biases that we have. Another reason, and the one given in the beauty contest case, is that an algorithm can only work with the data it’s got. In that instance, it had far more white faces to learn from than any other group, and based its results on that.
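As a rough, hypothetical illustration of that second point: a naive “learner” that only ever sees skewed training data will simply reproduce the skew, regardless of who it is later asked to judge. The group names and numbers below are invented for the example.

```python
# Hypothetical sketch: a trivial model trained on heavily imbalanced data.
from collections import Counter

# Imagine a training set where 90% of the example "winners" come from one group.
training_winners = ["group_a"] * 90 + ["group_b"] * 10

# A naive model that predicts whatever label was most common in its training data.
model = Counter(training_winners).most_common(1)[0][0]

# No matter who the new contestants are, the prediction reflects the imbalance.
new_contestants = ["group_b", "group_a", "group_b"]
predictions = [model for _ in new_contestants]
print(predictions)  # -> ['group_a', 'group_a', 'group_a']
```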
While researchers continue to look at ways to make artificial intelligence as safe as it can be, they are also working on a kill switch, so that, in the worst-case scenario, a human can take over.