Emerging Technologies

Generative AI risks: How can chief legal officers tackle them?

Leading organizations must learn how to navigate genAI risks and opportunities.


Kenneth White
Manager, Communities and Initiatives; Institutional Governance, World Economic Forum
Nivedita Sen
Initiatives Lead, Institutional Governance, World Economic Forum
This article is part of: World Economic Forum Annual Meeting
  • GenAI is becoming a business priority – helping to optimize costs, improve quality, increase speed and achieve sustainability goals.
  • In a rapidly evolving field, organizations must balance these opportunities with risk mitigation.
  • Agile organizations are developing enterprise-level governance frameworks for deploying genAI, factoring in a risk-assessment matrix.

Generative AI (genAI) is predicted to add $4.4 trillion annually to the global economy, according to McKinsey. As boards and the C-suite urgently try to understand the landscape of opportunities to deploy genAI across their business operations and not lose out on a “once in a generation” technological evolution, risk oversight becomes a key consideration.

Compliance, operational, reputational and regulatory risks: the ramifications of genAI are as intricate as the opportunities. Quality control, accuracy and misinformation are among the key concerns emerging around the use of genAI, since the quality of genAI output depends, among other things, on the quality of the data input. A recent New York case in which fake, AI-generated case law was cited illustrates the challenge of “AI hallucinations”.


Organizations are navigating a complex web of regulatory responses to rapidly evolving technology. In addition to the role of governments in developing AI rules and regulations, there is growing awareness of the important role of industry self-regulation in safeguarding the interests of citizens and society, alongside fiduciary obligations to shareholders. While the balance between government regulation and private-sector investment in self-regulation varies across countries, there is a growing push for regulatory agencies to collaborate with industry in developing genAI governance standards (e.g. NIST’s AI Safety Institute Consortium).

We pooled insights from the World Economic Forum’s Chief Legal Officers (CLO) community on how leading enterprises are navigating genAI risks and opportunities. As the most senior executives responsible for legal strategy and corporate governance, CLOs are uniquely positioned to advise their boards on AI-related risks through risk-assessment frameworks and to develop enterprise-wide mitigation policies while strategically meeting business priorities.
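To make the idea concrete, below is a minimal sketch, in Python, of the kind of risk-assessment matrix such a framework might factor in. The risk categories mirror those discussed in this article; the scoring scales, thresholds and review tiers are illustrative assumptions, not a Forum or CLO-community methodology.

```python
# A minimal, hypothetical sketch of a genAI risk-assessment matrix.
# Scales, thresholds and tiers are illustrative assumptions.
from dataclasses import dataclass

# Risk categories highlighted in this article.
CATEGORIES = ("compliance", "operational", "reputational", "regulatory")

@dataclass
class UseCase:
    name: str
    likelihood: dict  # per-category score, 1 (low) to 5 (high) -- assumed scale
    impact: dict      # per-category score, 1 (low) to 5 (high) -- assumed scale

    def score(self) -> int:
        """Overall score: worst likelihood x impact across categories."""
        return max(self.likelihood[c] * self.impact[c] for c in CATEGORIES)

    def tier(self) -> str:
        """Map the score to a review tier (thresholds are illustrative)."""
        s = self.score()
        if s >= 15:
            return "high: board / AI-council review required"
        if s >= 8:
            return "medium: legal sign-off required"
        return "low: standard controls apply"

chatbot = UseCase(
    name="customer-facing genAI chatbot",
    likelihood={"compliance": 3, "operational": 2, "reputational": 4, "regulatory": 3},
    impact={"compliance": 4, "operational": 3, "reputational": 5, "regulatory": 4},
)
print(chatbot.name, "->", chatbot.tier())
# customer-facing genAI chatbot -> high: board / AI-council review required
```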

'Intellectual property'

Inderpreet Sawhney, Group General Counsel & Chief Compliance Officer, Infosys

Unauthorized use of copyrighted material to train large language models is an emerging intellectual property risk in the development of genAI products. Along similar lines, the licenses of several AI models restrict the use of AI-generated output to train or fine-tune another AI model. Interestingly, this trend appears in the licensing practices of both open- and closed-model providers. User organizations should take a careful look at this new-age risk created by AI.

The other key issue, intellectual property ownership of AI-generated content, requires legal certainty across jurisdictions. The absence of clear ownership will have varying degrees of commercial consequence for different industry sectors. Laws need to be harmonized across countries by engaging all relevant interest groups.


'Lack of harmonized regulation'

Andreas Hoffmann, General Counsel & Head of Legal and Compliance, Siemens

Gaps and a lack of clarity in the applicability of existing regulations, as well as divergence among emerging AI regulations, pose another risk for global enterprises. A patchwork of new rules is emerging, including the EU’s AI Act and the US Executive Order on AI Safety and Security issued on 30 October 2023. The divergence exists not only at the national level but also at the municipal and sectoral levels: in the US alone, over 25 states introduced AI-related legislation in 2023. The polycentric and fragmented nature of the AI regulatory landscape poses a serious risk for enterprises operating in diverse sectors and geographies.

'Compliance as a catalyst'

Ulrike Schwarz-Runer, Managing Director & Senior Partner, General Counsel, BCG

While the current regulatory landscape surrounding AI remains fragmented around the world, it also offers unique opportunities for organizations to actively shape best practices. Companies that embrace responsible AI compliance programmes early on will be able to ensure that they lead with integrity. Due diligence, compliance programmes and documentation, coupled with testing and learning, are necessary efforts on this AI journey. They should be viewed as catalysts for scale and growth and will be integral to ensuring the trust, quality and safety of these solutions.


'Building trust'

Sabastian Niles, President & Chief Legal Officer, Salesforce

Manifesting generative AI’s remarkable economic and humanitarian potential goes beyond technological innovation. It’s all about trust. Within its own enterprises and industries, the private sector can model a trust-first approach to maximizing AI’s benefits. This means taking a “both-and” approach that prioritizes the sustained success of internal and external customers and stakeholders: in anticipating and managing risk, we also apply a lens requiring AI to be an enabler of smart growth, increased productivity and wise decision-making, and embrace the upskilling, reskilling and talent-expansion potential ahead of us.

But harnessing the power of AI in a trusted way will also require regulators, businesses, and civil society to work together and abide by guidelines and guardrails:

Prioritizing transparency: People should know when they’re interacting with AI systems and have access to information about how AI-driven decisions are made.

Protecting privacy: Since AI is based on data, promoting and protecting the quality, integrity and proper collection and use of that data is critical to building trust. Industry standards, harmonized regulation and common-sense legislation should support privacy and customers’ control of their own data.

Developing risk-based frameworks that address the entire value chain: AI is not one-size-fits-all, and effective frameworks should protect citizens while encouraging inclusive innovation.

As noted above, mere compliance with legal mandates is insufficient. Companies need to aim higher, building trust by understanding and working to exceed the expectations of customers and stakeholders. While this pursuit may involve navigating trade-offs, by championing internal initiatives such as AI governance councils to review proposed AI use cases and instituting policies for ethical usage, Chief Legal Officers can play a central role in steering organizations toward prioritizing trust and accountability, and creating a positive impact in this era of AI innovation.


'Enterprise-level policies'

Rishi Varma, Senior Vice-President & General Counsel, Hewlett Packard Enterprise

Companies are developing mitigation strategies to ensure a checks-and-balances approach. Some innovative companies have put in place an AI oversight or ethics committee to develop and monitor their AI governance strategy. Such committees pool the expertise of senior executives across the enterprise, such as the Chief Technology Officer and Chief Data Officer, along with the Chief Legal Officer and others, and their remit can span strategies ranging from data to people, in line with the company’s core values and priorities.

Pierre Gentin, Senior Partner & Chief Legal Officer, McKinsey

An internal policy that is aligned with emerging global regulations and industry standards is a foundational element of an effective approach to AI governance. The policy should identify high-risk AI use cases and establish a set of standards that teams involved in the development of such systems are expected to meet. Ensuring that the policy is effectively communicated and operationalized throughout the organization should be the focus of companies in the near term.
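As an illustration of how such a policy might be operationalized, here is a hypothetical sketch of an intake check that maps a proposed use case’s attributes to the policy standards it triggers. The attributes and standards below are assumptions for illustration, not any company’s actual policy.

```python
# A hypothetical sketch of operationalizing an internal AI policy:
# an intake check mapping use-case attributes to triggered standards.
# Attributes and standards are illustrative assumptions.
HIGH_RISK_TRIGGERS = {
    "processes_personal_data": "privacy / data-protection standards apply",
    "makes_automated_decisions_about_people": "human-in-the-loop review required",
    "is_customer_facing": "disclosure and accuracy testing required",
    "trains_on_third_party_content": "IP / licensing review required",
}

def intake_review(use_case: dict) -> list[str]:
    """Return the policy standards a proposed use case triggers."""
    return [
        standard
        for attribute, standard in HIGH_RISK_TRIGGERS.items()
        if use_case.get(attribute, False)
    ]

proposal = {
    "name": "genAI contract-drafting assistant",
    "processes_personal_data": True,
    "trains_on_third_party_content": True,
}
for standard in intake_review(proposal):
    print("-", standard)
```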

Ryan Taylor, Chief Revenue Officer & Chief Legal Officer, Palantir Technologies

The governance of genAI use requires more than institutional oversight and controls to mitigate risk. It also requires the full digital infrastructure needed to help ensure reliability, accessibility and safety in connecting AI-generated outputs to human-driven and human-impacting outcomes. Designing for compliance and engendering trust and accountability in next-generation AI applications starts with a whole-of-system approach to risk, from data lifecycle management to decision support to human-interaction oversight.

'Perspective from the insurance industry'

Katja Roth Pellanda, Group General Counsel, Zurich Insurance Group

Innovative AI applications empower insurance companies to streamline underwriting processes, detect fraud more effectively, and tailor policies to individual needs, revolutionizing the industry's operations and customer service.

AI Governance Alliance

The Forum’s new AI Governance Alliance is a pioneering global, multi-stakeholder initiative to champion responsible global design and build transparent and inclusive AI systems. While government-driven regulations will set the overarching framework, self-regulation through a risk-mitigation strategy helps corporations respond to industry needs for responsible development and deployment in a rapidly evolving technological landscape. Mitigation strategies to consider include whether to adopt an open-source or closed-source AI model, and whether to build AI applications in-house or procure them from third-party vendors.

Another strategy is moving testing, quality and performance evaluation to earlier stages of development, as sketched below. This can help an organization monitor the dataset on which an AI model is trained and build in transparency- and accountability-based measures. The Forum’s CLO community is working with the AI Governance Alliance in 2024 to shape a common vision of how business can effectively mitigate risks, self-regulate genAI and develop a successful AI strategy.
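As a minimal sketch of what “shifting testing left” can look like in practice, the following pre-deployment gate scores a model against a small golden set and blocks release if accuracy falls below a bar. The golden set, threshold and generate() stub are assumptions for illustration; a real deployment would substitute the organization’s own evaluation data and model API.

```python
# A minimal, hypothetical sketch of a pre-deployment evaluation gate.
# Golden set, threshold and generate() stub are illustrative assumptions.
GOLDEN_SET = [
    {"prompt": "Which EU law governs AI systems?", "expected": "AI Act"},
    {"prompt": "When was the US Executive Order on AI safety issued?",
     "expected": "30 October 2023"},
]
ACCURACY_THRESHOLD = 0.9  # illustrative release bar

def generate(prompt: str) -> str:
    """Stand-in for a real model call; replace with the deployed model."""
    canned = {
        "Which EU law governs AI systems?":
            "The EU AI Act governs AI systems placed on the EU market.",
        "When was the US Executive Order on AI safety issued?":
            "It was issued on 30 October 2023.",
    }
    return canned.get(prompt, "")

def pre_deployment_check(model_fn) -> bool:
    """Run the golden set and report whether the model clears the bar."""
    hits = sum(
        case["expected"].lower() in model_fn(case["prompt"]).lower()
        for case in GOLDEN_SET
    )
    accuracy = hits / len(GOLDEN_SET)
    print(f"golden-set accuracy: {accuracy:.0%}")
    return accuracy >= ACCURACY_THRESHOLD

if not pre_deployment_check(generate):
    raise SystemExit("release blocked: investigate regressions before deploying")
```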
