Emerging Technologies

Killer robots, free will and the illusion of control


As artificial intelligence develops, the questions around ethics will change and we need to prepare. Image: REUTERS/Luke MacGregor

Simon Watson

Control. We all like to think we have it, but is it all just an illusion? It might seem like a very existential question but it plays an important part in our acceptance of new technologies, especially when it comes to robots.

Even if we personally aren’t in control of something, we often like to comfort ourselves that someone else is. Humans have free will and empathy and we trust that we will make the “right” choice should something bad happen.

The problem is, humans are unpredictable. There are 7.6 billion people on the planet, each with their own moral and ethical code, a lifetime of different experiences that shape their actions, and a unique psychological make-up that determines how they react to a stressful event.


When you take your driving theory test, you aren't asked about the trolley problem. This famous thought experiment can be adapted as follows: you're driving a car and a child runs out in front of you, too close for you to stop before hitting them. If you swerve left you'll hit pedestrians, and if you swerve right you'll hit oncoming traffic, killing yourself and the driver of the other car. What do you do?

The first step is to brake as hard as you can. You then have to choose which of the three options to take. There is no correct answer, but you will decide subconsciously, weighing a whole host of conditions to justify your choice. It's unlikely you'll ever fully acknowledge which factors you took into account, and everyone will agree that you were put in an impossible position.

Impossible decisions and accountability

But now change the scenario slightly. You're in an autonomous car and the same incident occurs. The car brakes as hard as it can, but who has decided which of the three options it will take? Unlike your split-second reaction, that decision and the sensory inputs used to make it can be traced back, so there could be a much higher level of accountability. These scenarios could be simulated thousands of times, and the outcomes predicted and verified. A human will have programmed the outcomes into the car, but it won't be one person's choice: a consensus will have been built across the scientific, political and social spectrum.
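To make the traceability point concrete, here is a minimal sketch in Python. It is not drawn from any real vehicle software: the scenario fields, the decision rules and the probabilities are all hypothetical. It simply illustrates how an automated policy is deterministic and auditable, and how it can be re-run across thousands of simulated scenarios so its outcomes can be predicted and checked in advance.

```python
# Illustrative sketch only: a hypothetical, pre-agreed rule set for the
# adapted trolley scenario, showing that every input and branch can be logged,
# audited and replayed. Not any manufacturer's real decision system.
from dataclasses import dataclass
import random

@dataclass
class Scenario:
    child_ahead: bool        # a child has run into the road
    pedestrians_left: int    # people on the pavement to the left
    oncoming_right: bool     # traffic in the oncoming lane to the right

def decide(s: Scenario) -> str:
    """Deterministic policy: same inputs always give the same, explainable choice."""
    if not s.child_ahead:
        return "brake_straight"
    if s.pedestrians_left == 0:
        return "swerve_left"       # pavement is clear: avoid the child
    if not s.oncoming_right:
        return "swerve_right"      # oncoming lane is clear: avoid the child
    return "brake_straight"        # no harm-free option: brake and hold the lane

# Re-run the scenario thousands of times with varied conditions and tally outcomes.
random.seed(0)
tally: dict[str, int] = {}
for _ in range(10_000):
    s = Scenario(True, random.randint(0, 3), random.random() < 0.5)
    choice = decide(s)
    tally[choice] = tally.get(choice, 0) + 1
print(tally)
```

Because the same inputs always produce the same, explainable decision, an investigator could replay the exact sensory record after an incident and see why the car did what it did. That is precisely the kind of accountability a human driver's split-second instinct cannot offer.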

History has shown that when human control is replaced with automated control, safety increases and the number of accidents decreases, whether that's elevators in the early 20th century or aircraft in the late 20th century. Automated transportation is likely to bring the same benefits.

Killer robots

But there is one area where fear of ceding control could have effects profound enough that the UN has started to debate the topic: killer robots. Automation in almost every other walk of life has had its detractors, but in the end reasoned public discussion has highlighted the potential benefits. Not since the Luddites of the Industrial Revolution has a topic engendered such emotion and sensationalism.

Some even argue that restricting AI research in this field could prevent a dystopian future like the one depicted in the sensationalist video "Slaughterbots". War and violence are emotive topics, but it is naive to think that humankind will ever be rid of them. The arguments on both sides of using robots in war are complex, but ultimately it all comes down to control.

The biggest fear about autonomous weaponry is that anyone could build it. All you would need is a low-cost robot (perhaps even a toy drone), a camera and some code from the internet, and you would have an autonomous killing machine. It wouldn't even need a weapon attached: a quadcopter with a rock strapped to it, dropping out of the sky onto your head, would kill you just as easily as an explosive. Strap a metal bar to one and fly it into an aircraft engine and you could bring a plane down.

A killer robot would need image processing, facial recognition and geolocation capabilities, all of which are already embedded in almost every aspect of our lives. You can use facial recognition to unlock your laptop and everyone has sat-nav on their phone, so halting research in these areas is simply not feasible. Mobile robots are becoming more prevalent in everyday life, so it's unlikely their advance will be stopped either.


Some people even fear that these robots will eventually decide for themselves who to kill, but society is a very long way from that reality. Yes, such technology might make killing someone easier than hiring a professional assassin, but you don't need advanced technology to kill someone. Nobody has called for a ban on cars following their use as weapons of terror over the last year, because the benefits far outweigh the risks.

So as you walk down the street, ask yourself: are you really in control of your life? Haven't we already relinquished control? Don't we do it every time we get in a taxi, a plane or a bus? The illusion of control is in every aspect of the world around us. But if another human appears to be in control, people seem happy to give them the benefit of the doubt.

The reality is that machines are much more reliable and accountable than humans. Maybe it's time for society to see through the illusion and make the practical decision: relinquish control and let the machines and robots do what they're designed to do.
