Generative AI is rapidly evolving: How governments can keep pace

Governments must keep pace with the rapid evolution of generative artificial intelligence (GenAI) while preventing harm. Image: Getty Images

Karla Yee Amezaga
Lead, Data Policy, World Economic Forum
Rafi Lazerson
Project Fellow, AI Governance Alliance, World Economic Forum
Manal Siddiqui
Project Fellow, AI Governance Alliance, World Economic Forum
  • Governments want to secure generative artificial intelligence (GenAI) innovation and economic opportunities responsibly while preventing and mitigating potential risks.
  • Rapidly evolving GenAI, competing resource needs and a complex global policy landscape can hinder an agile response from governments.
  • A report from the AI Governance Alliance equips policymakers and regulators with a practical framework for resilient and forward-looking GenAI governance that can keep pace.

The economic potential, transformative impact and rapid adoption of GenAI have led governments at all levels to invest in examining how to secure AI innovation in their jurisdictions while mitigating the technology’s risks and preventing harm.

Various task forces, committees and multistakeholder initiatives, such as the UN’s High-Level Advisory Body on Artificial Intelligence, have been set up to study and recommend appropriate government action. Additionally, numerous proposed AI-related bills are under consideration globally, and some jurisdictions have already passed regulations, such as in the European Union, China and at the state level in the United States.

The stakes of getting GenAI governance right are high. The task is no small feat, and public-private cooperation is key to achieving it.

Complex challenges undermine effective GenAI governance

Policymakers and regulators contend with several compounding socio-technical and geopolitical complexities that can impede effective government action. For example:

  • Broad and varied impacts: GenAI is a general-purpose technology that can be applied across contexts, making policy efforts challenging as they attempt to account for all its uses. Additionally, GenAI’s opportunities and harms can vary for individuals and communities based on societal conditions (such as the digital divide and labour exploitation), institutional capacity, structural inequalities and systemic marginalization. For example, a study from the International Labour Organization found that automation was twice as likely to affect the employment of women as that of men.
  • Rapid pace of technological advancement: GenAI capabilities are evolving rapidly and converging with other emerging technologies, complicating policymakers’ efforts to create governance that can stand the test of time. For example, in the last few months, image generation has achieved new levels of realism and models have advanced in multimodal capabilities. These advancements open creative opportunities but also amplify harms such as disinformation and non-consensual intimate imagery. Looking ahead, GenAI is on track to converge with emerging technologies such as synthetic biology, neurotechnology and quantum computing.
  • Global fragmentation: Limited international sharing of resources critical to GenAI innovation, such as compute and data, may exacerbate frictions between nations and prevent policymakers from enacting meaningful domestic policies for fear of reduced international competitiveness. Additionally, the international AI governance landscape is unpredictable, complex and fragmented, leaving jurisdictions uncertain about how to confidently align domestic policy with a global industry.

A 360-degree approach

Over the past year, the World Economic Forum’s AI Governance Alliance has brought together industry, government, civil society and academia. This global multistakeholder effort has laid the groundwork for a pioneering framework with implementable strategies for resilient GenAI governance.

As policymakers worldwide continue to learn about GenAI's promises and concerns, the report provides a roadmap for developing agile, holistic and adaptable AI policy that fosters a responsible AI ecosystem, facilitating economic opportunity while respecting human and individual rights.

“The report provides policymakers and regulators with a comprehensive 360-degree framework for the governance of generative AI, examining existing regulatory gaps, governance challenges unique to various stakeholders, and the evolving needs of this dynamic technology,” says Cathy Li, the Forum’s head of AI, data and metaverse.

Arnab Chakraborty, chief responsible AI officer at Accenture, underscored the importance of the report:

“Accenture is proud to serve as knowledge partner to the Forum and the Alliance on this initiative. This report represents a critical milestone in the global conversation around comprehensive generative AI governance, moving beyond calls for responsible governance, alignment of principles and multistakeholder collaboration, and towards concrete mechanisms for action.”

Nita Farahany, a professor at Duke University and an alliance member, highlighted the report's practical format: “The three-pillar structure (Harness Past, Build Present, Plan Future) provides a clear roadmap for addressing the complexities of [GenAI]. Each section is well organized and offers actionable insights, which will be invaluable for policymakers and regulators.”

Another member, Miho Naganuma, a senior executive professional at the NEC Corporation, highlighted how the report will benefit industry members by “enabling their communication with policymakers and regulators.”

1. Harness past

Effective national strategies for promoting responsible AI innovation require timely assessment of the current regulatory levers to tackle the technology’s unique challenges and opportunities. Before developing new AI regulations or authorities, governments should:

  • Assess existing regulations for conflicts or gaps caused by GenAI, aligning them with the goals of different regulatory frameworks.
  • Clarify responsibility allocation through legal and regulatory precedents and reinforce efforts where gaps are found.
  • Evaluate existing regulatory authorities' capacity to tackle GenAI challenges and consider the trade-offs of centralizing authority within a dedicated agency.

2. Build present

Policymakers and regulators cannot ensure resilient GenAI governance alone – industry, civil society and academia must also provide input. Governments should use more than just regulations to cultivate whole-of-society GenAI governance and cross-sector knowledge sharing by:

  • Addressing challenges unique to each stakeholder group.
  • Cultivating multistakeholder knowledge-sharing and encouraging interdisciplinary thinking.
  • Leading by example by adopting responsible AI practices.

3. Plan future

GenAI’s capabilities are rapidly evolving alongside other technologies. Incorporating preparedness and agility into GenAI governance is necessary, as is cultivating international cooperation. Governments must develop national strategies that consider limited resources and global uncertainties and that feature foresight mechanisms to adapt policies and regulations to technological advancements and emerging risks. This necessitates the following key actions:

  • Targeted investments for AI upskilling and recruitment in government.
  • Horizon scanning of GenAI innovation, foreseeable risks associated with emerging capabilities, and convergence with other technologies and human interactions.
  • Foresight exercises to prepare for multiple possible futures.
  • Impact assessment and agile regulations to prepare for the downstream effects of existing regulation and future AI developments.
  • International cooperation to align standards and risk taxonomies and facilitate sharing knowledge and infrastructure.

The 360-degree framework is designed to support policymakers and regulators in developing holistic and durable GenAI governance. However, the specific implementation of the framework will differ between jurisdictions, depending on their national AI strategy, the maturity of AI networks, economic and geopolitical contexts and individuals’ expectations and social norms.

Policymakers, industry leaders, academics and civil society are invited to draw on the recommendations in this framework to drive GenAI that contributes positively to our world and ensures a prosperous, inclusive and sustainable future for all.
