
Banning autonomous weapons is not the answer


A robot distributes promotional literature calling for a ban on fully autonomous weapons in Parliament Square, London. Image: REUTERS/Luke MacGregor

This article is part of the World Economic Forum's Geostrategy platform

Over-hyped, catastrophic visions of the imminent dangers posed by ‘killer robots’ stem from a genuine need to consider the ethical and legal implications of artificial intelligence and robotics when they are applied in a military context.

Yet international agreement on how best to address these challenges is hampered, in part, by an inability even to converge on a definition of the capabilities that cause concern.

One of the biggest issues is ensuring that human operators retain control over the application of lethal force. Yet even this specific-sounding formulation groups together a number of distinct ways in which control could be lost.

A machine gun triggered by a heat sensor would be a misuse of simple automation. Systems that overload human operators with information, without giving them sufficient time to make decisions, could be considered technically impossible to control. And the human crew responsible for deciding the targets of a remotely piloted aircraft might feel sufficiently disassociated from their decisions to lose effective control of them.

These and other possibilities were considered last week at the UN’s Group of Governmental Experts (GGE) meeting on lethal autonomous weapons systems. The forum is the right place to address the implications of new technology, but it clearly needs to be more specific about the risks it is considering.

Calls for a ban

Automated systems, which respond to operator inputs with predictable actions, have existed for a long time. But as automation grows more sophisticated, concerns – and calls for a prohibition on the development of lethal autonomous systems – are growing too, including from states that have neither the intent nor the internal capability to develop highly automated military equipment.

Many states are already alert to these concerns; they independently undertake efforts to identify and mitigate the risks posed by new technologies and to ensure these are used in compliance with international humanitarian law, as well as in accordance with national policies that are often more exacting.

In states that develop weapons responsibly, a huge range of control measures governs weapons procurement and use. Together, these aim to ensure that military commanders and operators are competent and informed, and that their control over the use of force is enhanced, not undermined, by the tools they use.

Potential benefits

Some have pointed out that well-designed automated systems, used under effective human control, can enhance adherence to the principles of international humanitarian law. These benefits are part of the reason why a prohibition is not the right answer.

Developments in AI mean it is feasible to imagine an unpredictable system that delivers ‘better’ outcomes than human decision-making. But such a machine would not actually be making decisions in the way humans do, however strong our tendency to anthropomorphize it might be. It has no moral agency.

Without a moral agent, it is not possible to ensure compliance with international humanitarian law. The principle of proportionality, for example, requires an assessment of whether harm and loss to civilians would be excessive in relation to the anticipated military advantage. However sophisticated new machines may be, that assessment is beyond them; international humanitarian law therefore already effectively prohibits the use of any such system. If the legal argument is compelling, the tactical one is even more so – common sense alone precludes the automated execution of unpredictable outcomes in the application of lethal force.


The UK came under fire in the report of the House of Lords Select Committee on AI, published last Monday, for maintaining a definition of autonomy that describes a system capable of independently understanding higher intent and orders – a standard widely criticized as unrealistic. In fact, in the complex and intrinsically human arena of warfare, it is the idea of deliberately ceding responsibility to a machine at all that is unrealistic. The UK’s definition represents the minimum threshold at which such ceding could even be considered – a system meeting it almost certainly does not exist now, and may well never exist.

Thus, a prohibition on the development and use of lethal autonomous weapons systems is not the simple solution it appears to be.

This is not, however, where the discussion should end. Sophisticated automation does present new challenges to human control. The GGE is a forum that can help states meet these challenges by sharing best practice and drawing on input from experts, academics and civil society. It also offers an opportunity to consider how the scale of automated weapons, and the balance of risk associated with them, may affect the norms of war.

First and foremost, though, states must take responsibility for applying their existing international legal and ethical obligations to the development and use of new weapons technology.


Banning Autonomous Weapons Is Not the Answer, Major Kathleen McKendrick, Chief of General Staff (British Army) Visiting Fellow, International Security Department, Chatham House


