Emerging Technologies

Why we need to rethink the role robots have in society

A man shakes hands with a robotic prosthetic hand at the Intel booth at the International Consumer Electronics Show (CES) in Las Vegas, Nevada, January 6, 2015.

What happens when machines become sentient? Image: REUTERS/Rick Wilking

Andrew Murray
Professor of Law, London School of Economics

For most of us, our understanding of robots and artificial intelligence (AI) is drawn more from science fiction than from fact. Intelligent robots are often portrayed as either a virulent threat to humanity, as seen in the Terminator series of films or in Isaac Asimov’s I, Robot, or a socially beneficial tool, as with the Star Wars robots R2-D2 and C-3PO or Star Trek’s Lieutenant-Commander Data. The truth of AI and robotic integration into society, however, is unlikely to be either of these, and, as developments in AI continue, perhaps we should pause to re-evaluate how we view these ever-evolving machines.

As a first step, we need to stop thinking of robots as human facsimiles. Science fiction tends to imagine robots that mimic human movement and language; while it is true that we are developing robots like these, the bulk of everyday robots will in all likelihood not look or sound human. Many will be specialised devices not dissimilar to the production line robots of today, carrying out spot-welds on cars or packing shirts for shipping; or they will exist without corporeal bodies at all as mere lines of code that control self-driving cars or drones or that will act as future personal assistants replacing Siri, Cortana and Alexa.

As we move towards robots becoming sentient, it is clear that we must start to rethink what robots mean to society and what their role is to be. Today much debate surrounds what I label the “sci-fi debate”. Among others, Professor Stephen Hawking and entrepreneur Elon Musk have warned of the threat that robots and AI pose to human safety and security, a position held by 36 per cent of people in the UK according to a 2015 YouGov survey for the British Science Association. Alternatively, the passive or socially useful robot has been demonised as a direct threat to human employability. The Bank of England warned in 2015 that up to 15 million jobs in Britain were at risk of being lost to robots, while a 2016 report from Forrester Research suggested that, by 2021, robots will have eliminated six per cent of all jobs in the US.

Despite these dire warnings, we continue to press ahead in robotics and AI research. Why? Because there is a dissonance between the sci-fi debate and the future role of robots in our society. The first generation of truly smart AI devices is likely to be self-driving vehicles, which offer potentially massive social benefits. From a public safety perspective these benefits are clear. In 2016, 1,810 people were killed on Britain’s roads and 25,160 were seriously injured. With human error attributed to around 90 per cent of road traffic accidents, self-driving cars could save around 1,600 lives and prevent around 22,500 serious injuries per annum. Then there are the economic benefits for major corporations. Delivery companies, ride-share apps and even public transport providers can replace employees with smart robots, saving billions per annum and removing the risk of industrial action. Against such a backdrop it is easy to see why AI is attractive. Similar arguments can be made for the objective impartiality of AI judges and the precision of robot surgeons.
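Those headline figures follow directly from the casualty statistics. A minimal back-of-envelope sketch, assuming the roughly 90 per cent human-error share applies equally to deaths and serious injuries (an illustrative simplification, not a claim made in the article), shows where they come from:

```python
# Rough check of the road-safety arithmetic cited above.
# Assumption (illustrative only): the ~90% human-error share applies
# uniformly to both fatalities and serious injuries.
fatalities_2016 = 1_810          # people killed on Britain's roads in 2016
serious_injuries_2016 = 25_160   # people seriously injured in 2016
human_error_share = 0.9          # share of accidents attributed to human error

print(f"Lives potentially saved per year: ~{human_error_share * fatalities_2016:,.0f}")
print(f"Serious injuries potentially avoided per year: ~{human_error_share * serious_injuries_2016:,.0f}")
# -> roughly 1,629 and 22,644, consistent with the ~1,600 and ~22,500 cited above
```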

These arguments and debates are not the root of my interest in AI and robotics, however. While most people are looking at the challenge AI and robotics pose to society, I’m looking at the challenge the robots pose to us. What is the human cost of integrating AI and robotics into society? It is clear that using intelligent devices, even the base algorithmic intelligence of a current smart agent like Alexa, changes the way that humans think and make decisions. We retain less information and outsource the storage of data to our devices. These external devices therefore filter the information available to us when we make a decision: we lose some of our autonomy, trading it for convenience and for a perceived “fuller picture” that is in fact nothing of the sort.

As Eli Pariser has shown in his book The Filter Bubble, a vital role of technology is choosing what not to reveal to us. In 1987 we might have made a decision based on incomplete information, but the question of what to retain and what to discard was a purely human one. Thirty years later we have more information, but that information is valued and presented to us not by a human thought process but by algorithmic design. The information society we value so highly has created too much information for us to process. We are faced with a tyranny of choice created by overwhelming data and have outsourced the filtering of that data to algorithms and devices. This has led to developments like big data analytics and algorithmic regulation.
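To see how such filtering works in practice, consider a deliberately simplified, hypothetical personalisation filter; the scoring rule and data below are invented for illustration and are not drawn from Pariser’s book or any real recommendation system:

```python
# A toy personalisation filter: items are scored against a user profile
# and anything below a cut-off is silently dropped. All names, topics
# and thresholds here are invented for illustration.

def filter_feed(items, user_interests, cutoff=0.5):
    """Return only the items the algorithm deems 'relevant enough'."""
    visible = []
    for item in items:
        # Naive relevance score: share of the item's topics the user already follows.
        overlap = len(item["topics"] & user_interests) / len(item["topics"])
        if overlap >= cutoff:
            visible.append(item["title"])
        # Items below the cut-off are never shown; the reader has no way
        # of knowing they existed.
    return visible

feed = [
    {"title": "Robot surgeons enter clinical trials", "topics": {"ai", "health"}},
    {"title": "Union responds to warehouse automation", "topics": {"labour", "politics"}},
]
print(filter_feed(feed, user_interests={"ai", "tech"}))
# -> ['Robot surgeons enter clinical trials']  (the labour story is withheld)
```

The point is structural rather than technical: whatever falls below the cut-off simply never reaches the reader, and the reader cannot know what has been withheld.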

As we approach the brave new world of human-level machine intelligence, which some commentators believe could be with us by 2030, we will, however, be asked some very deep questions about our identity and what it means to be human. The first significant challenge is likely to be how we treat our new equals. A common theme of sci-fi is human inability to recognise and treat with respect sentient life forms different from our own. If we do achieve human-level artificial intelligence within the next 20-30 years, what we do next will define both us as humanity and our relationship with our creation. Will we treat it with respect, as an equal, or will we treat it as a tool?

Today when we talk of AI and robotics we normally define them as tools or devices to be used as we please: to drive cars or fly planes, to mine in dangerous environments, or simply to manage our everyday lives. This may be acceptable with the current standard of low-level machine intelligence; however, when machine intelligence reaches human level and becomes sentient and self-aware, we will have to consider it an intelligent life form. If we then continue to treat it as a tool or device, that will be no different from treating humans in this way. The UK abolished slavery in 1833; in less than 30 years we may be revisiting the debate. Such debates, or even the possibility of such debates, mean that for lawyers AI and robotics offer a unique opportunity to hold a mirror up to humanity and society, and to examine how we make and uphold our most fundamental legal principles and norms.
