These tech leaders have signed a pledge against killer robots
Campaigning against LAWS - lethal autonomous weapons systems. Image: REUTERS/Charles Platiau
Technology industry leaders, backed by some of the world’s biggest science and industry organisations, have signed a global pledge against the development of autonomous weapons systems using artificial intelligence (AI).
Engineers and scientists from the technology industry say they will “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”.
Lethal autonomous weapons systems (LAWS), also called killer robots, are weapons that can identify, target, and kill a person, without a human making or authorising such decisions.
The pledge, released in Stockholm at the 2018 International Joint Conference on Artificial Intelligence (IJCAI), the world’s leading AI research meeting with over 5,000 attendees, was signed by 150 companies and more than 2,400 individuals from 90 countries working in AI and robotics.
Corporate signatories include Google DeepMind, the XPRIZE Foundation, University College London, ClearPath Robotics/OTTO Motors, the European Association for AI, and the Swedish AI Society.
Individual signatories include Jeff Dean, head of research at Google.ai; entrepreneur Elon Musk; AI pioneers Stuart Russell, Yoshua Bengio and Anca Dragan; British Labour MP Alex Sobel; and Toby Walsh from the University of NSW.
Ethics
Walsh, a professor of artificial intelligence at UNSW in Sydney, points out the ethical issues.
“We cannot hand over the decision as to who lives and who dies to machines,” he says.
“They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”
The pledge, organised by the Future of Life Institute, challenges governments, academia and industry to follow the signatories’ lead:
“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organisations, as well as leaders, policymakers, and other individuals, join us in this pledge.”
Max Tegmark, a physics professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, announced the pledge.
“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” Tegmark said.
“AI has huge potential to help the world – if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way.”
The pledge begins with the statement: “Artificial intelligence is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”
A clear and present danger
Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, a strong opponent of lethal autonomous weapons, said: “Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful.
“Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”
Advocates of an international ban on LAWS are concerned that the robots will be difficult to control — easier to hack, more likely to end up on the black market, and easier for terrorists and despots to obtain — which could become destabilising for all countries.
In December 2016, the United Nations’ Review Conference of the Convention on Certain Conventional Weapons (CCW) began formal discussion of LAWS. So far, 26 countries attending the conference, including China, have announced support for some type of ban.
The next UN meeting on LAWS will be held next month.
The full text of the pledge
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.
There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.
Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.
We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.