
It's time we embrace an agile approach to regulating AI

Fragmented regulatory regimes for AI make it harder to both tackle risks and capitalize on the technology's vast potential.


Charmaine Ng
Director of Asia Pacific Digital Policy, Schneider Electric
Edson Prestes
Full Professor, Federal University of Rio Grande do Sul

  • It is commonly said that the speed and scale at which Artificial Intelligence is developing are outpacing our capacity to regulate it.
  • While that may be true, an agile and multi-stakeholder approach can help us close the gap.
  • Globally fragmented regulatory regimes will not mitigate the risks of Artificial Intelligence, nor will they help us capitalize on its vast potential.

Worldwide regulatory activity seeking to rein in Artificial Intelligence (AI) is picking up speed. While the complaint is not specific to AI, the oft-repeated refrain is that technology-related legislation lags behind the technology it seeks to govern, and that policymakers need to do better, faster.

With emerging technologies like AI, traditional methods of policymaking fail us. The rapid pace at which the technology develops outpaces policymakers' ability to properly grasp its potential benefits and risks, and by extension, what and how to regulate, even before we account for often-lengthy legislative processes.

This calls for an alternative method of policymaking: agile governance, meaning adaptive, human-centered policy that is inclusive and sustainable. Policy development is no longer limited to governments; it is increasingly a multi-stakeholder effort.

International and multi-stakeholder efforts are key to addressing limits to regulatory capacity when it comes to AI. In fact, in a rare display of international alignment, 28 jurisdictions, including the world’s leading AI powers such as the US, China and the EU, signed the Bletchley Declaration on 1 November 2023, recognizing that AI risks are international and “best addressed through international cooperation”.


Agile and multi-stakeholder approaches to AI governance

Governments are starting to acknowledge their limitations when it comes to regulating such a rapidly evolving technology. The UK and Singapore, two jurisdictions with traditionally strong policymaking capabilities, are contending with the reality that even their capacities are at their limits.

In an interim report dated March 2023, the UK House of Commons Science, Innovation and Technology Committee expressed uncertainty about whether current levels of regulatory capacity can facilitate AI governance, and recommended a gap analysis of regulators' capacity to implement and enforce the principles outlined in the government’s pro-innovation AI white paper.

In June 2023, Singapore’s Infocomm Media Development Authority (IMDA) highlighted the need to build greater capacity within the AI ecosystem before AI regulation can be effective.

In response, both jurisdictions opted for agile governance, turning to international cooperation and multi-stakeholder efforts for policy solutions to AI governance. The UK launched the world’s first government-funded AI Safety Institute in November 2023, with a mission to bring together international partners across industry, civil society and academia to help inform both international and domestic policymaking for safer AI.

Singapore’s IMDA launched AI Verify, an open-source AI governance testing framework and software toolkit, in May 2022, employing a multi-stakeholder approach to leverage and cross-pollinate existing AI expertise and build up AI capacity prior to regulating.

Given the limited regulatory capacity and limited supply of AI experts in each jurisdiction, combined with the fact that AI knows no jurisdictional boundaries, policymakers should work with experts across borders and sectors to establish a common baseline of principles that AI systems must adhere to. Regulating AI jurisdiction by jurisdiction duplicates scarce resources and will inevitably result in a fragmented global regulatory regime. This will not make AI safer.

The Bletchley Declaration: already breaking new ground

The 28 signatories to the Bletchley Declaration resolved to “sustain an inclusive global dialogue that engages existing international fora…and contributes openly to broader international discussions.” Existing international fora where technical experts have converged to create international risk management frameworks for AI include the US National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE).

Frequent, open and widespread multi-stakeholder consultations are an effective means of garnering maximum expert input via crowdsourcing at a global scale. This is particularly true of frontier uses of AI, some of which the EU terms “high risk” and “unacceptable risk” under its proposed AI Act, and which the US seeks to manage under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

International standardization efforts create the opportunity for improved information sharing through interoperable AI systems, allowing society to better harness the benefits of AI while limiting its risks. Combining policymaking with technical expertise must be the way forward to facilitate governance of increasingly complex technology in a manner that supports innovation while at the same time protecting the rights and interests of individuals.

AI is here to stay; that is not in question. Instead of reactively rushing to develop regulations to rein in this extremely powerful technology, we must work to collectively harness its benefits while tracking and limiting its harms. This work is already underway. For example, in October, IMDA and NIST completed a collaboration jointly mapping AI Verify against the NIST AI Risk Management Framework.

It is through such international and multi-stakeholder collaboration that we can develop a measured and thoughtful approach to regulating a technology that will always be one step ahead of us.
