Fourth Industrial Revolution

What if: robots go to war?

Image: A Royal Marine poses for photographers with the Unmanned Vehicle Robot, Testudo. REUTERS/Luke MacGregor

Keith Breene
Senior Writer, Forum Agenda

For years a science fiction staple, robot combatants are moving into the realm of reality. While drones and other unmanned weapons are already widely deployed, they are controlled remotely by humans. Lethal autonomous weapons (LAWs) are robots that can select, attack and destroy targets without human intervention.

With remarkable advances in artificial intelligence, some experts believe we are years rather than decades away from having this capability. So significant is this development that it has been called the third revolution in warfare, after gunpowder and nuclear arms.


So what would it mean if we could replace soldiers with autonomous weapon systems?

International humanitarian law — which governs attacks on humans in times of war — has no specific provisions for such autonomy, but may still be applicable. The 1949 Geneva Convention on humane conduct in war requires any attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage.

Can a machine ever reliably make such apparently subjective decisions? And even if it could, would we want to allow it to?

Opponents say machines should never have the power to make life and death decisions. Others argue that as the technology improves, it will eventually reach a point where machines are better at avoiding civilian casualties than human soldiers.

What do decision-makers need to know about autonomous weapons in order to decide on an international standard for how they could, and should, be deployed?

Continue the conversation on Thursday, January 21st at 09:00 EST / 15:00 CET. Tune in for a livestreamed discussion on the possible, plausible and probable impacts of artificial intelligence on defence systems at the World Economic Forum Annual Meeting in Davos.

The session was developed in partnership with TIME.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.




© 2024 World Economic Forum