AI: how can we manage robot risk?
Artificial Intelligence (AI) is the discipline that studies how to create software and systems that behave intelligently. AI scientists build systems that can solve reasoning tasks, learn from data, make decisions and plans, play games, perceive their environments, move autonomously, manipulate objects, respond to queries expressed in human languages, translate between languages, and more.
AI has captured the public imagination for decades, especially in the form of anthropomorphized robots, and recent advances have pushed AI into popular awareness and use: IBM’s “Watson” computer beat the best human Jeopardy! players; statistical approaches have significantly improved Google’s automatic translation services and digital personal assistants such as Apple’s Siri; semi-autonomous drones monitor and strike military targets around the world; and Google’s self-driving car has driven hundreds of thousands of miles on public roads.
This represents substantial progress since the 1950s, and yet the original dream of a machine that could substitute for arbitrary human labour remains elusive. One important lesson has been that, as Hans Moravec wrote in the 1980s, “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.
These and other challenges to AI progress are by now well known within the field, but a recent survey shows that the most-cited living AI scientists still expect human-level AI to be produced in the latter half of this century, if not sooner, followed (in a few years or decades) by substantially smarter-than-human AI. If they are right, such an advance would likely transform nearly every sector of human activity.
If this technological transition is handled well, it could lead to enormously higher productivity and standards of living. On the other hand, if the transition is mishandled, the consequences could be catastrophic. How might the transition be mishandled?
Contrary to public perception and Hollywood screenplays, it does not seem likely that advanced AI will suddenly become conscious and malicious. Instead, the core problem is one of aligning AI goals with human goals. If smarter-than-human AIs are built with goal specifications that subtly differ from what their inventors intended, it is not clear that it will be possible to stop those AIs from using all available resources to pursue those goals, any more than chimpanzees can stop humans from doing what they want.
In the nearer term, however, numerous other social challenges need to be addressed. In the next few decades, AI is anticipated to partially or fully substitute for human labour in many occupations, and it is not clear whether human workers can be retrained quickly enough to maintain high levels of employment.
What is more, while previous waves of technology have also created new kinds of jobs, this time structural unemployment may be permanent, as AI could be better than humans at performing the new jobs it creates. This may require a complete restructuring of the economy, raising fundamental questions about the nature of economic transactions and what it is that humans can do for each other.
Autonomous vehicles and other cases of human-robot interaction demand legal frameworks suited to the novel combination of automated decision-making and the capacity for physical harm. Autonomous vehicles will encounter situations where they must weigh the risks of injury to passengers against the risks to pedestrians; what will the legal redress be for parties who believe the vehicle decided wrongly?
Several nations are working towards the development of lethal autonomous weapons systems that can assess information, choose targets and open fire without human intervention. Such developments raise new challenges for international law and the protection of non-combatants. Who will be accountable if these systems violate international law? The Geneva Conventions do not clearly say. Nor is it clear what counts as human intervention: humans will be involved in programming autonomous weapons, so the real question is whether human control of the weapon ceases at the moment of deployment.
AI in finance and other domains has introduced risks associated with the fact that AI programmes can make millions of economically significant decisions before a human can notice and react, as happened in August 2012, when a runaway trading algorithm nearly bankrupted Knight Capital.
In short, proactive and future-oriented work in many fields is needed to counteract what Richard Posner describes as “the tendency of technological advance to outpace the social control of technology”.
Authors: Stuart Russell is Professor of Computer Science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley. Bernhard Petermeier is Senior Community Manager, Technology Pioneers, at the World Economic Forum.
Image: Children touch the hands of the humanoid robot Roboy at the exhibition Robots on Tour in Zurich, March 9, 2013. REUTERS/Michael Buholzer