
How to regulate AI without stifling innovation

Calls in the AI space to expand the scope of regulation could lead to less innovation and worse product safety.


David Alexandru Timis
Global Communications Manager, Generation

  • With AI at such an early stage of development and application, it is important to focus on the actual risk posed by a specific use of AI.
  • Calls in the AI space to expand the scope of regulation, classifying things like general-purpose AI as ‘high risk’, could lead to less innovation and worse product safety.
  • What the debate on AI regulation often misses is the number of regulations that already apply to AI.

Artificial intelligence (AI) has captured the public imagination. In the past few months, the release of OpenAI’s GPT-4 has demonstrated the rapid progress being made on large language models (LLMs), sparking increased interest in the development of AI and what it can do. It has also heightened fears about the power of the technology at play.

AI has the potential to be transformative for the way we live and work, in the same way that technological leaps in recent decades have placed an ever-greater number of possibilities at our fingertips. The societal shift that AI could precipitate may even be as significant as the birth of the internet. As such, it is an area that is simply too important not to regulate.

AI regulation as a risk to innovation

As AI gains greater prominence and becomes more widely used in our daily lives, regulations will be critical for ensuring the ethical and responsible development and use of what is a genuinely transformative technology. That is why the EU’s AI Act is much needed.

Yet recent calls in the AI space have sought to expand the scope of the regulation, classifying things like general-purpose AI (GPAI) as inherently ‘high risk’. This could cause huge headaches for the innovators trying to ensure that AI technology evolves in a safe way.

To use an example, classifying GPAI as ‘high risk’, or adding a layer of regulation to foundation models without assessing their actual risk, is akin to giving a speeding ticket to a person sitting in a parked car, regardless of whether it is safely parked with the handbrake on, simply because the car could in theory be driven dangerously.
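
To make that use-based logic concrete, here is a minimal sketch in Python of a risk-tier lookup keyed on the application rather than the underlying model. It is purely illustrative: the tier names and example use cases are loosely drawn from the AI Act’s proposed risk categories, and the `classify_use` helper is a hypothetical simplification, not the legal test.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk: conformity assessment required"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no extra obligations"

# Illustrative mapping from *uses* to tiers, loosely based on the
# AI Act's proposed categories. The key point: the tier attaches to
# the application, not to the model underneath it.
USE_CASE_TIERS = {
    "medical diagnosis support": RiskTier.HIGH,
    "critical infrastructure control": RiskTier.HIGH,
    "real-time biometric identification": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify_use(use_case: str) -> RiskTier:
    """Hypothetical helper: risk follows the concrete use case."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

# The same general-purpose model lands in different tiers
# depending on how it is deployed:
for use in ("medical diagnosis support", "spam filtering"):
    print(f"{use}: {classify_use(use).value}")
```

The structural point of the sketch is that the obligation attaches to the deployment, not the model: the same general-purpose system appears in different tiers depending on where it is used.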

It is for this reason that, on 26-28 April 2023, the World Economic Forum’s Centre for the Fourth Industrial Revolution, based in the Presidio in San Francisco, hosted the summit “Responsible AI Leadership: A Global Summit on Generative AI”. There, 30 action-oriented recommendations were developed to guide technical experts and policymakers on the responsible development and governance of generative AI systems.

Existing AI regulations

Furthermore, what the debate on AI regulation often misses is the number of regulations that already apply to AI, and which authorities are already deploying to address concerns about certain features of AI. The best example of this is the Italian Data Protection Authority limiting access to ChatGPT on the basis of the GDPR, before the AI Act has even entered into force. Other examples include the Digital Services Act, the Copyright Directive and the proposed legislation on political ads, all of which will inherently apply to AI.

In the past, new technologies and inventions have largely been unregulated at the time they reached the market. To use the example of the car again: over the course of more than a century, product regulation brought in over time has led to the development of modern vehicles with transformative safety features. In the beginning, these safeguards simply did not exist; seatbelts and airbags had not yet been invented.

AI is not starting from the same position. There are several regulations which cover AI or components of AI already. This is the way that it should be: a foundational, safety-first approach to regulation which covers all components of AI is essential.

Among the existing regulations impacting AI is the General Data Protection Regulation (GDPR). The GDPR is robust and wide-ranging, governing the processing of Europeans’ personal data and imposing stringent requirements on its collection and use. Central to the GDPR are the principles of transparency and consent, and citizens’ rights under it remain the same even when their data is used by AI systems.
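
As a rough illustration of how those principles surface in practice, the sketch below gates an AI data pipeline on recorded consent. It is a toy example: the `PersonalRecord` structure and `has_valid_consent` check are hypothetical placeholders, not a GDPR compliance implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    user_id: str
    text: str
    consented_purposes: set = field(default_factory=set)

def has_valid_consent(record: PersonalRecord, purpose: str) -> bool:
    """Hypothetical check: processing requires a matching, explicit purpose."""
    return purpose in record.consented_purposes

def build_model_inputs(records, purpose="model_training"):
    # The principle applies whether or not the processor is an AI system:
    # only data with consent for this specific purpose goes through.
    return [r.text for r in records if has_valid_consent(r, purpose)]

records = [
    PersonalRecord("u1", "example text", {"model_training"}),
    PersonalRecord("u2", "other text"),  # no consent recorded
]
print(build_model_inputs(records))  # only u1's text passes the gate
```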

Indeed, it is these very concerns about data privacy under the GDPR that allowed the Italian data protection authority to temporarily ban ChatGPT in recent weeks, as it gathers more information on how the model uses data to generate answers to users’ questions. Though the regulator’s concerns to some extent misrepresent the nature of generative AI, this goes to show that even a completely new AI technology can be covered in a responsible way by obligations that are already firmly in place.

That does not, of course, mean that further regulation is not needed. Genuinely sensible regulatory parameters, which balance the need for oversight and safe product development with the ability to innovate, are desperately needed. With the right privacy and safety protections in place, the limits to what AI can achieve to benefit society could be boundless.

But with AI at such an early stage of development and application, it is important to focus on the actual risk of a certain use of AI. If we are speaking of a person’s health, critical infrastructure or real-time biometrics, classifying these uses as high risk is necessary and sound. Classifying GPAI as high risk, on the other hand, while knowing that most of its applications don’t actually pose a high risk, could lead to less innovation, worse product safety and the EU falling behind in a critical AI race in which the US and China have a considerable lead.
