Why corporate integrity is key to shaping the use of AI

Inderpreet Sawhney
General Counsel and Chief Compliance Officer, Infosys
Houssam Al Wazzan
Lead, Partnering Against Corruption Initiative, World Economic Forum

  • AI's regulatory landscape is complex and inconsistent, with governance struggling to keep up with the rapidly evolving technology.
  • Businesses have a key role to play by applying a strong code of ethics to ensure there is the necessary level of responsibility and accountability.
  • Ethical AI is essential for creating a future where technology serves humanity, and corporate integrity can lead the way by setting high standards.

The regulatory landscape for artificial intelligence (AI) is complex and inconsistent, with approaches ranging from voluntary industry codes of conduct to binding risk-based regulations at the national or supranational level.

Despite global efforts to harmonize AI governance, such as those advanced by the UN, the Organisation for Economic Co-operation and Development (OECD), the Group of 20 and the Group of 7, and platforms like the World Economic Forum’s AI Governance Alliance, governance is struggling to keep pace with the rapidly evolving technology.

This means businesses have a key role to play in applying a strong code of ethics to ensure the necessary level of responsibility and accountability.

Companies can support these efforts by broadening corporate responsibility to integrate AI ethics for the common good, which requires adjustments across their integrity ecosystems, corporate boards and institutional investors.

Reinforcing AI integrity ecosystems

Ensuring the responsible use of AI is a concern across industries, driven both by regulatory and liability risks and by a sense of social responsibility among industry leaders.

Indeed, corporate integrity now tends to extend beyond legal compliance to include the ethical deployment of AI systems, with many companies strengthening due diligence to manage AI risks by adopting ethical rules, guiding principles and internal guidelines.

Meanwhile, AI governance standards like ISO 42001 help firm up due diligence processes, while initiatives like the OECD’s AI Incident Monitor, which has reported more than 600 incidents since January 2024, provide valuable insights into the risks and harms of AI systems, helping companies address issues and strengthen ethical AI deployment.

Yet in the United States, while 73% of C-suite executives believe that ethical AI guidelines are important, only 6% have developed them, according to a recent survey of 500 business leaders.

For those wanting to act, non-binding codes of corporate conduct, promoted by initiatives like the G7 Hiroshima Process, can help guide global business approaches to AI deployment. For instance, Unilever has implemented an artificial intelligence assurance process to vet each new AI application for effectiveness and ethics.

The governance of the AI integrity ecosystem within global companies is still evolving, with the roles of chief legal officers, chief technology officers and chief integrity, ethics, compliance and data officers becoming increasingly important.

This evolving structure is critical for managing the complex risks associated with AI. A combined approach helps prevent blind spots and ensures that AI development practices are thorough and secure.

For instance, Infosys has implemented a responsible AI ecosystem overseen by an AI Governance Council and a Responsible AI Office. This central structure ensures coordinated AI controls across various groups, including cyber defence, privacy, legal and quality.

To effectively mitigate risks, it’s essential that this governance framework is supported by sustained employee awareness programmes and robust AI management systems, along with comprehensive contractual arrangements for every AI application.

Meanwhile, Novartis developed an AI Risk and Compliance Management framework aligned with emerging regulations like the EU AI Act. This approach ensures that the company performs risk impact assessments on AI systems, safeguarding patient information while complying with reporting obligations.

The responsible use of AI requires both clear ethical commitments and a comprehensive risk and compliance framework

—Klaus Moosmayer, Chief Ethics, Risk and Compliance Officer, Novartis

Boards' and investors' role in enabling the responsible development of AI

Governing boards can also help enable the responsible development and deployment of AI by expanding their approach to corporate integrity and responsibility.

But to do so effectively, they need to be knowledgeable about AI’s capabilities and risks and ensure AI ethics are integral to their strategic discussions, oversight responsibilities and risk assessment.

At Novartis, the CEO-chaired environmental, social and governance (ESG) committee approved an AI framework ensuring that AI ethics are part of its top-level strategic decisions. This integration at the highest levels of governance ensures ongoing accountability for AI systems.

Such initiatives are important as recent litigation underscores companies’ liability for artificial intelligence misuse, as in the case of Air Canada, which was held liable after its chatbot gave a passenger inaccurate advice.

Institutional investors – as shareholders in both big-tech and non-tech companies – can also play an important role in steering the path towards responsible AI.

Indeed, shareholder activism for responsible AI is on the rise. For instance, one-fifth of Microsoft’s investors pressed the company to manage financial and reputational risks from AI-generated disinformation.

Several pension funds backed the proposal, including the Norwegian sovereign wealth fund. The $1.4 trillion fund is calling for smarter regulation of AI and has developed its own standards for the companies it invests in, setting out its expectations for responsible and ethical AI use.

Elsewhere, the US Securities and Exchange Commission (SEC) has prompted shareholder votes on AI use at companies like Apple and Disney, highlighting increasing ethical scrutiny.

This growing pressure from investors highlights the importance of ethical AI not just as a social responsibility but as a smart, long-term investment strategy. Responsible AI can contribute to commercial success and greater market share over time.

Supporting this view, a recent paper based on interviews with venture capital investors suggests that a focus on AI ethics could become a competitive advantage. While the global debate on AI governance often centres on mitigating risks, responsible AI should be recognized as a strategic asset.

Governments can also influence market trends by incorporating ethical criteria in contracting AI solutions. This approach aligns with the stated purpose of the 2023 Executive Order on artificial intelligence issued by the US federal government, which underscores the importance of ethical AI in public procurement and beyond.

Together, these efforts from investors, companies and governments can help create a future where AI is developed and deployed in ways that benefit everyone, ensuring that ethical considerations are at the forefront of technological advancement.

Move towards ethical AI

The push towards ethical AI is not just a trend – it's a critical movement for shaping a human-centred digital future that benefits everyone. Ethical AI goes beyond just complying with regulations; it's about ensuring that technology serves the common good and aligns with our shared values. By embracing ethical AI, companies can play a pivotal role in creating a more equitable and responsible digital landscape.

Corporate integrity is key to this effort. When companies commit to ethical AI practices, they help build trust, protect individual rights and ensure that artificial intelligence systems are designed and used in ways that enhance human well-being. This is not just about avoiding harm; it's about proactively contributing to a better future for all.

To achieve this, corporations need to strengthen their AI integrity ecosystems, ensuring that oversight and governing boards take an active role in guiding ethical AI development. This includes not only meeting legal requirements but also prioritizing transparency, fairness and accountability in every aspect of AI deployment.

Moreover, responsible artificial intelligence investing is crucial. By supporting companies that prioritize ethical AI, investors can help drive the broader adoption of practices that promote the common good. The impact of these efforts extends far beyond individual companies – it fosters a global environment where AI is developed and used in ways that are aligned with human values and societal needs.

In summary, ethical AI is essential for creating a future where technology serves humanity, and corporate integrity can lead the way by setting high standards and encouraging responsible practices across the board. This is not just a corporate responsibility but a collective opportunity to shape a digital world that works for everyone.

Special thanks to Carlos Santiso, Head of Division, Digital, Innovative and Open Government, OECD, and Klaus Moosmayer, Chief Ethics, Risk and Compliance Officer, Novartis, from the Forum’s Global Future Council on Good Governance for their contributions to this piece.
