
Two mistakes about the threat from artificial intelligence

Luke Muehlhauser
Executive Director, Machine Intelligence Research Institute

Recent comments by Elon Musk and Stephen Hawking, as well as a new book on machine superintelligence by Oxford professor Nick Bostrom, have the media buzzing with concerns that artificial intelligence (AI) might one day pose an existential threat to humanity. Should we be worried?

Let’s start with expert opinion. A recent survey of the world’s top-cited living AI scientists yielded three major conclusions:

  1. AI scientists strongly expect “high-level machine intelligence” (HLMI) — that is, AI that “can carry out most human professions at least as well as a typical human” — to be built sometime this century. However, they think it’s very difficult to predict when during the century this will happen.
  2. Assuming HLMI is built at some point, AI scientists expect machine superintelligence — AI that “greatly surpasses the performance of every human in most professions” — to follow within the next few decades.
  3. AI scientists are divided about whether HLMI will have a positive or negative impact on humanity. Respondents assigned probabilities to five different classes of outcomes, ranging from “existential catastrophe” to “extremely good.” The outcome category with the highest average probability assigned was “on balance good” (40%). The average probability assigned to “existential catastrophe” was just 8%.

First, should we trust expert opinion on the timing of HLMI and machine superintelligence? After all, AI scientists are experts at building practical AI systems like military drones, Apple’s Siri, and Google’s search engine and self-driving car — they’re not experts at long-term forecasting. Moreover, their track record at predicting AI progress is mixed at best. So perhaps we shouldn’t weigh surveys like this too heavily.

But can we do better than expert opinion? A retrospective analysis of more than 1,000 technology forecasts suggests that models and quantitative trend analyses tend to outperform expert opinion for both short-term and long-term forecasts. Unfortunately, those methods are difficult to use for forecasting high-level machine intelligence. Predictions based on Moore’s Law and similar trends in computing hardware have fared relatively well, but these trends tell us little about the timing of HLMI because HLMI depends not just on what hardware is available but also on hard-to-predict breakthroughs in AI algorithms.
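To make that last point concrete, here is a minimal Python sketch (using an assumed two-year doubling time, purely for illustration) of what a Moore’s-Law-style extrapolation delivers, and of the gap such trends leave when it comes to HLMI:

```python
# Illustrative only: an assumed doubling time, not real hardware data.
doubling_time_years = 2.0   # assumed Moore's-Law-style doubling period
years_ahead = 20

# Straightforward exponential extrapolation of the hardware trend.
hardware_multiplier = 2 ** (years_ahead / doubling_time_years)
print(f"Projected hardware gain over {years_ahead} years: ~{hardware_multiplier:,.0f}x")

# The extrapolation yields a hardware multiplier (~1,024x here), but nothing in
# it predicts when the required algorithmic breakthroughs will arrive, so it
# constrains HLMI timelines only weakly.
```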

Given this uncertainty, we should be skeptical both of confident claims that HLMI is coming soon and of confident claims that HLMI is very far away.

Second, what about social impact? Before we draw any conclusions, it’s important to clear up two common misconceptions about the standard case for AI as an (eventual) existential threat:

  1. The case for AI as an existential threat worth addressing today doesn’t assume HLMI is coming soon, nor that AI capabilities improve “exponentially.”

Nick Bostrom and I are among those who think AI is an (eventual) existential threat worth addressing today. But we don’t think AI capabilities progress “exponentially” in general, and in fact we both have later timelines for HLMI than AI scientists tend to have. We advocate more research to understand and mitigate these risks not because we think HLMI will likely arrive in the next two decades, but because we think ensuring positive outcomes from smarter-than-human AI systems will likely require several decades of research and coordination effort, just as is the case with (e.g.) managing climate change. For lists of technical and strategic research studies that could be pursued today, see here and here.

  2. The concern isn’t that HLMI will suddenly “wake up” and want to wipe us out. Rather, the concern is that wiping out humanity will be a side effect of HLMI rationally using all available resources in pursuit of its goals.

HLMI probably won’t suddenly “wake up,” and it probably won’t be malicious. Those scenarios anthropomorphize AI systems too much. But — unlike today’s simple AI systems — HLMI will be autonomously making and executing plans in pursuit of its goals, whether those goals are to maximize paperclip production or to win an election or to detect potential terrorists around the globe. And as UC Berkeley computer scientist Stuart Russell, co-author of the world’s leading AI textbook, has explained, the problem is that:

  1. [AI goals] may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

Or, as my MIRI colleague Eliezer Yudkowsky once wrote, “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”
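A deliberately simplified Python toy, with made-up quantities and the hypothetical paperclip objective mentioned above, shows the structure of this worry: if nothing in the objective being maximized represents what humans actually value, the highest-scoring plan is the one that converts every available resource.

```python
# A toy sketch with hypothetical quantities; not a model of any real system.
resources = {"spare steel": 10, "factory stock": 50, "everything else": 10**6}

def paperclips_made(plan):
    """The objective exactly as specified: more material converted, more paperclips."""
    return sum(plan.values())

intended_plan   = {"spare steel": 10, "factory stock": 50}  # what the designers meant
maximizing_plan = dict(resources)                           # the plan that scores highest

best = max([intended_plan, maximizing_plan], key=paperclips_made)
print(best)  # the resource-consuming plan wins, not out of malice, but by construction
```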

But these points alone don’t establish that AI presents a serious existential threat to humanity. Maybe the technical safety challenges will look less daunting after a few more decades of AI research. Maybe, as the risk draws nearer, the world will pull together and solve the problem, as it did with ozone-damaging chlorofluorocarbons under the Montreal Protocol.

The good news is that the debate about AI risk has begun in earnest, and modest but growing resources are now devoted to learning how serious the risk is and what we can do to mitigate it. Perhaps the study of long-term AI risk has reached the stage that chlorofluorocarbon studies had reached by 1980, and we will soon learn that the challenge is not as difficult to overcome as the pessimists have feared. Or perhaps we’ve reached the stage that climate change studies had reached by 1975, when there was no consensus but a few scientists devoted to the issue had gathered the initial case for anthropogenic warming. Time will tell.

Author: Luke Muehlhauser is Executive Director of the Machine Intelligence Research Institute and a member of the World Economic Forum Global Agenda Council on Artificial Intelligence & Robotics.

Image: The hand of humanoid robot AILA (artificial intelligence lightweight android) operates a switchboard during a demonstration by the German research centre for artificial intelligence at the CeBIT computer fair in Hanover, March 5, 2013. REUTERS/Fabrizio Bensch.
