Emerging Technologies

Europe has taken a major step towards regulating AI

EU lawmakers agree to changes in draft AI rules, including a ban on biometric surveillance and disclosure requirements for generative AI systems.

A ban on AI in biometric surveillance is among the changes to EU rules. Image: AFP

Foo Yun Chee
Author, Reuters
Supantha Mukherjee
Author, Reuters
  • EU lawmakers have agreed to changes in draft rules on artificial intelligence.
  • The changes include a ban on the use of AI in biometric surveillance and a requirement for generative AI systems to disclose AI-generated content.
  • The amendments to the draft rules are still subject to negotiations between EU lawmakers and EU countries.
  • The EU is hoping that the draft rules will become law by the end of 2023.

European Union lawmakers on Wednesday agreed to changes to draft artificial intelligence rules, including a ban on the use of the technology in biometric surveillance and a requirement for generative AI systems like ChatGPT to disclose AI-generated content.

The amendments to the EU Commission's proposed landmark law, which aims to protect citizens from the dangers of the technology, could set up a clash with EU countries opposed to a total ban on the use of AI in biometric surveillance.

The rapid adoption of Microsoft-backed OpenAI's ChatGPT and other bots has led top AI scientists and company executives, including Tesla's Elon Musk and OpenAI's Sam Altman, to warn of the potential risks posed to society.

"While big tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose," said Brando Benifei, co-rapporteur of the bill.


Among other changes, European Union lawmakers want any company using generative AI tools to disclose copyrighted material used to train its systems, and want companies working on "high-risk applications" to conduct a fundamental rights impact assessment and evaluate environmental impact.

Systems like ChatGPT would have to disclose that their content is AI-generated, help distinguish so-called deep-fake images from real ones, and ensure safeguards against illegal content.

Microsoft (MSFT.O) and IBM (IBM.N) welcomed the latest move by EU lawmakers but said they looked forward to further refinement of the proposed legislation.

"We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI," a Microsoft spokesperson said.

The lawmakers will now have to thrash out details with EU countries before the draft rules become legislation.

'AI is intrinsically good'

While most big tech companies acknowledge the risks posed by AI, others like Meta (META.O), which owns Facebook and Instagram, have dismissed warnings about the potential dangers.

"AI is intrinsically good, because the effect of AI is to make people smarter," Meta's chief AI scientist Yann LeCun said at a conference in Paris on Wednesday.

The current draft of the EU law adds to the high-risk list AI systems that could be used to influence voters and the outcome of elections, as well as systems used by social media platforms with more than 45 million users.

Meta and Twitter would fall under that classification.

"AI raises a lot of questions – socially, ethically, economically. But now is not the time to hit any 'pause button'. On the contrary, it is about acting fast and taking responsibility," EU industry chief Thierry Breton said.

He said he would travel to the United States next week to meet Meta CEO Mark Zuckerberg and OpenAI's Altman to discuss the draft AI Act.

The Commission announced the draft rules two years ago, aiming to set a global standard for a technology key to almost every industry and business as the EU seeks to catch up to AI leaders the United States and China.



