3 essential features of global generative AI governance
Countries’ generative AI regulatory differences are rooted in their legal and administrative systems and political context. Image: Flickr/Jernej Furman
Ao Kong
Senior Programme Advisor and Chief of Resource Mobilization and Strategic Communications, United Nations Technology Bank for Least Developed Countries

Dr Shaoshan Liu
Director, Embodied Artificial Intelligence, Shenzhen Institute of Artificial Intelligence and Robotics for Society

- The rapid growth of generative AI brings opportunities and risks. A global regulatory mechanism for this emerging tech is now needed.
- To design such a mechanism, countries’ regulatory differences, stakeholders’ incentives and risk appetites, and AI’s open-source and self-generative nature must all be taken into account.
- Here’s what we can learn from approaches in the US, the EU and China, which reflect different AI regulatory guiding principles and priorities.
Given generative AI’s ubiquity across domains and the risks it brings – including job displacement, deepfakes and automated weapons – it’s time to contemplate a global AI regulatory mechanism.
Designing a genuinely “fit for purpose” mechanism is a formidable challenge, however. Countries’ regulatory differences, stakeholders’ incentives and trade-offs, and AI’s open-source and self-generative nature are all vital factors to consider.
To analyse how regulatory differences are rooted in countries’ legal and administrative systems and political context, let’s dissect the current AI regulatory approaches in the US, EU and China.
The US: industry-specific and whole-of-government strategy
As the US forges ahead with a cohesive AI regulatory strategy, the voluntary commitments the Biden Administration secured from seven leading AI companies, together with the recent executive order, indicate that the government relies on industry-specific insights and on industries' willingness to govern their own AI applications effectively. At the same time, they mark the beginning of a whole-of-government approach to foster a uniform understanding and application of AI policies across various sectors.
This approach benefits from the granular knowledge that industry leaders provide, which is vital for developing effective, sector-appropriate AI regulations. Meanwhile, it requires the National Institute of Standards and Technology to bolster the strategy by setting clear AI standards, promoting consistency across industries and developing additional guidance to advance equity and civil rights.
The industry-specific approach is rooted in the US legislative system, which grants each industry the autonomy to propose regulatory laws and trusts experienced practitioners to possess the most in-depth and thorough understanding of their sector. However, the risks of oligopolistic control by a few dominant players and of a fragile foundation of good-faith industry self-governance need to be counterbalanced by proper government intervention.
The EU’s GDPR-aligned strategy
The EU’s AI Act follows the framework established by its predecessor, the General Data Protection Regulation (GDPR). The Act proposes a comprehensive structure for AI regulation, spanning from defining requirements for high-risk AI systems to establishing a dedicated oversight board.
It emphasizes user safety and fundamental rights, mandates transparency of AI systems and enforces strict post-market monitoring rules for AI providers. AI products are divided into distinct categories, each subject to a different level of regulatory requirements. Low-risk AI systems, like spam filters or game algorithms, may face minimal regulation to preserve innovation and usability, while high-risk applications, such as those in biometric identification and critical infrastructure, are bound by extensive obligations, including stringent risk management and user transparency requirements.
To implement the Act, the EU established a centralized regulatory body, the European Artificial Intelligence Board. It is responsible for detailing the legal framework for AI, interpreting and enforcing the Act’s regulations and supervising high-risk AI systems to ensure uniform application across the Union.
This legislative initiative promotes a human-centric and ethical AI environment, but a single, centralized regulatory entity, however comprehensive its remit, may struggle to keep pace with the rapidly changing AI landscape.
China: state control in AI regulation
China embeds its AI regulatory strategy within a framework of stringent state control. AI is viewed not just as a technological advancement, but as an integral part of the country’s economic and social infrastructure that warrants strategic and safe dissemination.
AI regulations feature oversight responsibilities similar to those in China’s Cybersecurity Law, extending them beyond internet service providers and social media platforms to generative AI service providers. This is to ensure a strategic and measured rollout of AI and its applications, while imposing restrictions to curb excessive dominance by industry.
When executed in accordance with China’s national development plan, this state-controlled AI regulatory model could help align the development and deployment of AI with the country’s developmental phase. This is particularly compelling for developing countries. The challenge resides in striking the right balance: implementing a regulatory mechanism robust enough to safeguard the public interest, while still being flexible enough to encourage innovation and allow for industry experimentation.
‘Fit for purpose’ for global AI regulation
A global AI regulatory mechanism needs to harmonize countries’ differing regulatory approaches and draw on their complementary features.
Such a mechanism also requires broad and diverse representation to ensure balanced views. For example, while advanced economies may prioritize risk mitigation and privacy protection, developing nations might seek to harness AI to boost economic growth and address urgent societal challenges. Similarly, while industries’ influence should be curbed, they should be given enough incentives to invest in advancing computing power.
Furthermore, AI’s self-generative nature may necessitate an agile, responsive governance mechanism for digital public goods. An open-source platform resembling GitHub might be a better fit than a conventional governance model that relies on periodically convened sessions for monitoring and reporting.
Based on the above, a fit-for-purpose global governance mechanism for AI should, at a minimum, be capable of:
- upholding safety, human dignity and equity standards;
- bridging regulatory differences;
- ensuring diverse representation across geopolitical, technical and socioeconomic profiles;
- operating on an open-source basis.
Furthermore, international organizations like the UN and public institutions could play a crucial role in aligning AI’s development and deployment with the Sustainable Development Goals to support a shared vision for peace and prosperity for people and the planet, now and into the future.