Emerging Technologies

3 ways AI could threaten our world, and what we need to do to stay safe


A new report highlights three ways that AI could be used for harm: physically, digitally and politically. Image: REUTERS/Thomas Peter

Rob Smith
Writer, Forum Agenda

Artificial intelligence (AI) could dramatically improve our lives, positively impacting everything from healthcare to security, governance and the economy. But almost all technologies can be used for ill as well as for good.

The Malicious Use of Artificial Intelligence report, compiled by experts from a number of institutions including the University of Cambridge and the research lab OpenAI, argues that in the wrong hands, AI could be exploited by rogue states, terrorists and criminals.

The report outlines three areas – physical, digital and political – where AI is most likely to be exploited, and describes scenarios of how AI attacks might play out.

1. Remote-controlled car crashes

The biggest concern involves AI being used to carry out physical attacks on humans, such as hacking into self-driving cars to cause major collisions.

“If multiple robots are controlled by a single AI system run on a centralized server, or if multiple robots are controlled by identical AI systems and presented with the same stimuli, then a single attack could also produce simultaneous failures on an otherwise implausible scale,” the report states.

Dr. Peter Stone, a professor at the University of Texas at Austin and part of a team that recently developed a new algorithm for improving the way robots and humans communicate, thinks the report’s warnings should be taken seriously, but notes that the situation isn’t new or unique to autonomous vehicles.

“If someone today were to change all traffic signals in a city to be simultaneously green, disaster would ensue,” Dr. Stone tells us. “And the fact that our electricity grid is fairly centralized makes us vulnerable to large-scale blackouts.” According to Dr. Stone, the proper response would be stronger security measures, as well as redundancy (the provision of backup capacity) and the decentralization of decision-making.

2. Sophisticated phishing

In the future, attempts to access an individual’s sensitive and personal information could be carried out almost entirely by AI.

“These attacks may use AI systems to complete certain tasks more successfully than any human could,” the report says, adding that fraud or identity theft could become more refined and effective as AI evolves.

If AI could handle most of the research and message generation that goes into a typical phishing scam, far more people would be duped.

AI could impersonate people’s real contacts, mimicking their writing style to make the scam harder to spot.

Illah Nourbakhsh, Professor of Robotics at Carnegie Mellon University, says that because AI is evolving so rapidly, we need equally rapid responses to its risks. “The real challenge is considering policy moves to maximize the good and minimize the bad,” Nourbakhsh says. “Just as human scam artists find ever more sophisticated and nuanced ways to trick people out of their money using online scams, so AI-powered malicious actors will continuously find new pathways into our data and into our pocketbooks.”

3. Manipulating public opinion

Fake news and fake videos generated by bots and AI could have a big impact on public opinion, disrupting all layers of society, from politics to the media. The use of social media bots to spread fake news was already a reality during the 2016 US presidential campaign.

Well-trained bots could create a strategic advantage for political parties, functioning almost as artificially intelligent propaganda machines that thrive in low-trust societies, the report claims.

This goes further than just the spread of fake text content. "AI systems can now produce synthetic images that are nearly indistinguishable from photographs, whereas only a few years ago the images they produced were crude and obviously unrealistic," the report says.

Increasingly realistic synthetic faces generated by AI. Image: The Malicious Use of Artificial Intelligence report

Wendell Wallach, Chair of the World Economic Forum Global Future Council on Technology, Values and Policy, and author of "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control", says that social media already combines insights into human psychology with techniques for manipulating opinion, and that these methods will only become more sophisticated over the coming years.

“These tools will not only be used as propaganda by states to confuse and destabilize competing powers, but also as new methods employed by political leaders and political parties for tracking, manipulating and managing citizens within a country.”

Mitigating the risks

In response to these threats, and the myriad others outlined within the report, the experts have outlined four “high-level” recommendations:

1. Policy-makers and technical researchers need to work together now to understand and prepare for the malicious use of AI.

2. Whilst AI has many positive applications, it’s a dual-use technology, and researchers and engineers should be mindful of and proactive about the potential for its misuse.

3. Best practices can and should be learned from disciplines with a longer history of handling dual-use risks, such as computer security.

4. The range of stakeholders engaging with preventing and mitigating the risks of malicious use of AI should be actively expanded.

“A new form of agile and comprehensive governance will be required both internationally and nationally to maximize the benefits of AI, mitigate the risks, and fulfill these four high-level recommendations,” says Wendell Wallach.

At the World Economic Forum’s Center for the Fourth Industrial Revolution, Head of Artificial Intelligence and Machine Learning Kay Firth-Butterfield is working on some of the steps outlined in the report to mitigate the risks: “We have co-designed a project to help researchers and engineers be mindful of the misuse of AI by ensuring that teaching on culturally relevant ethical design of AI is available to any student or postgraduate designing, developing and creating AI,” Firth-Butterfield says.

“We’re also working with governments and Boards of Directors to create best practices for the commissioning and use of AI,” she explains. “We want to imagine a new type of regulator which can address the risks in an agile way, to promote and encourage use of the technology that’s beneficial for the whole of humanity and the planet.”
