Emerging Technologies

To realize the full potential of AI, we must regulate it differently

Image: An artificial intelligence project using a humanoid robot from French company Aldebaran, reprogrammed as a campus assistant for students attending Palomar College in San Marcos, California, U.S., October 10, 2017. REUTERS/Mike Blake

Gordon Morrison
Director of EMEA Government Affairs, Splunk
Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
This article is part of: Annual Meeting of the New Champions

It is becoming clear to governments and organizations across the world that artificial intelligence (AI) will have a positive impact on citizens and on industry. It is an unstoppable force, and it is already delivering benefits.

AI will affect and permeate all levels of society, delivering significant economic opportunity to those who embrace the historic technological revolution it brings. It will transform industries as diverse as the legal profession, medicine and cybersecurity. Algorithms will help humans to be healthier, safer and more productive. They are already allowing us to analyze huge data sets, diagnose complex diseases and respond to cyber threats.

However, there are challenges associated with AI. Industries and governments need to be aware of its potential negative impacts on citizens. They are right to be concerned about the unethical use of data, potential job displacement and, in some use cases, safety implications.

We will see AI deployed in a range of settings, from autonomous systems such as cars and cargo ships to clinical support systems and assistance in employment selection. These are only a few of the potentially transformative ways it will be used, but its introduction will present challenges for us to overcome.

The issues are numerous, but they can all be addressed. Bias unintentionally introduced into an algorithm by a human developer could lead to poor or even unsafe decisions in biometric recognition applications. AI systems that make recommendations for mortgage applications, parking fines and drug prescriptions will need to be ethically sound and systemically safe.
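
To make the bias concern concrete, here is a minimal, hypothetical sketch of the kind of check a development team might run before deployment: it compares a model's approval rates across demographic groups and flags large gaps for review. The data, group labels and 80% threshold below are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical sketch: comparing a model's approval rates across groups.
# The records, group labels and 80% rule-of-thumb threshold are illustrative
# assumptions only, not a regulatory requirement.
from collections import defaultdict

# Each record: (demographic_group, model_approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
totals = defaultdict(int)
for group, was_approved in decisions:
    totals[group] += 1
    approved[group] += int(was_approved)

# Approval rate per group, and each group's ratio to the best-served group.
rates = {group: approved[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    status = "review for bias" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

Checks like this do not prove a system is fair, but they give developers and regulators a shared, measurable starting point.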

AI systems may also face life-and-death decisions. Algorithms deployed in an emergency services scenario could conceivably have to decide whether to save a mature adult or a child. The potential use of AI in critical national infrastructure also introduces complex issues. For example, how are we to ensure that we safely procure, engineer, govern and monitor the use of AI in the power generation sector?

As well as ethical and safety issues, we also have to think about AI’s impact on privacy. For good reason, algorithms may be given access to huge data sets from multiple sources. In many applications, the more diverse the data set being analyzed, the better and more accurate the outcome. However, as we have seen recently, we must be transparent about what we are using data for, and ensure that citizens consent to its use. We also need to ensure that AI applications respect the well-established data privacy rights recognized across the world.

The first priority of most governments is the protection of their citizens. They naturally look for ways to adapt and embrace innovative technology, exploring how their nation can take a leading role. Traditionally, governments use tools such as regulation and standards to ensure conventional technology is developed in a safe and beneficial way. For AI, we need to consider whether a traditional approach is still suitable, and whether it can adapt quickly enough to empower innovation.

Our task is to develop adaptable and flexible ‘virtual’ or ‘soft’ regulation that gives the right level of protection to citizens while making it possible for industry to innovate and deliver the full potential of AI. By developing best-practice guidance, we can ensure that organizations innovate without being stifled by well-meaning but inflexible approaches. Done well, this will help innovators understand the limits and expectations that governments set for the responsible design, development and deployment of AI. We can also seize the opportunity to adopt and promote the concept of ‘responsible AI’, ensuring it is used and deployed consistently, with appropriate ethical standards.

Guidance will have to be dynamic in nature and developed at the same rate at which the technology advances. It must also be specific to the application or sector in which the technology is deployed. Guidance or virtual regulation written for medical systems may not be suitable for applications in the entertainment or retail sectors.

AI is also not easy to define. It is not a single, specific technology, and it is perhaps not well understood by the average citizen. Any guidance must cover all potential implementations of AI, from basic data analytics and personal assistant tools to machine learning and more generalized AI systems. The process may also be an opportunity to help citizens understand what AI is, how it can benefit them and how it is being monitored and controlled.

We think this task is exciting and critically important. With strong guiding principles, we can motivate organizations to innovate in a race to the top. They will be able to deliver the huge benefits and promises of AI within acceptable boundaries. This will reassure citizens across the world that organizations are delivering this positive revolution in a respectful and ethical way.
