Why every investor should embrace Responsible AI

As the world of AI expands and becomes more complicated, Responsible AI usage is becoming an important pre-condition for investment in any firm. Image: Getty Images/iStockphoto

Judy Wade
Managing Director, Head of Strategy Execution and Relationship Management, CPP Investments
Chris Gillam
Research Fellow, CPP Investments
  • As investors grapple with AI's rapid expansion, Responsible AI is becoming an increasingly important investment consideration.
  • Responsible AI means AI development and deployment that is valid and reliable, safe, fair, secure and resilient, accountable and transparent, explainable and interpretable.
  • The Responsible AI Playbook for Investors gives an insight into how investors can get to grips with Responsible AI.

Artificial intelligence (AI), including machine learning and generative AI, is transforming the investment landscape. From factory floors to financial institutions, this fast-moving technology is being adopted across the economy, creating unprecedented opportunities and risks.

To succeed in this new environment, investors must keep pace. That means not only sharpening their understanding of how AI works, but also ensuring that significant risks are mitigated, particularly those associated with generative AI. Investors understand the importance of adopting principles and policies that ensure AI is developed and deployed in a manner that is valid and reliable, safe, fair, secure and resilient, accountable and transparent, and explainable and interpretable. This is what is known as Responsible AI.

However, for many companies and investors it is hard to know where to start, particularly given generative AI’s seemingly unparalleled speed of development and a rapidly changing regulatory and commercial environment.

The Responsible AI Playbook for Investors, a collaboration between the World Economic Forum and CPP Investments Insights Institute, aims to bridge this gap. It argues investors can and should exercise the influence of their capital to promote Responsible AI in their portfolios of direct investment, in their work with investment partners and in the ecosystem at large. And it offers practical tools and approaches to help them do it.

Are boards and investors ready for AI?

Though AI has been advancing behind the scenes for decades, the excitement surrounding generative AI has sparked a more recent rush to adoption. In last month’s McKinsey Global Survey on AI, 65% of respondents said their organizations were regularly using generative AI, nearly double the percentage from 10 months ago. Three quarters of the survey’s respondents predicted generative AI will lead to significant or disruptive change in their industries in the years ahead.

Yet directors are struggling with oversight. Some 36% of directors in the 2024 National Association of Corporate Directors Governance Outlook identified AI as one of the most challenging areas to govern. Only 15% of large US companies disclosed any board oversight of the technology.

These statistics should raise the eyebrows of investors, who depend on boards for overall corporate governance.

Responsible AI: A forethought, not an afterthought

Appropriate governance is critical to ensuring boards and management balance the competitive deployment of AI against its potential risks. Responsible AI is a powerful tool for achieving that balance. By setting clear expectations for boards based on Responsible AI principles, investors can ensure foundational concerns are addressed.

Under Responsible AI, AI technologies are developed and used in ways that avoid social risks and respect ethical standards and legal requirements, reducing potential liabilities. For example, a proactive Responsible AI framework can prevent costly lawsuits and fines resulting from failures to comply with emerging global regulations like the European Union's AI Act.

Moreover, robust AI governance can safeguard against technological failures. A study by Boston Consulting Group (BCG) found companies that prioritize scaling their Responsible AI programmes over simply scaling their AI capabilities experience nearly 30% fewer AI failures — or instances when AI systems function in an unintended way that impacts the company, employees, customers or society.

Boosting engagement, trust and value

Responsible AI’s advantages extend beyond this. AI systems designed with responsibility in mind can significantly enhance customer trust and brand reputation. Research from the Economist Intelligence Unit suggests that when customers know a company uses AI ethically, they are more likely to engage with the brand. They’re also more likely to become repeat customers, driving both top-line growth and sustained profitability.

Finally, research from Bain & Company finds that firms with a comprehensive, responsible approach to AI earn twice as much profit from their AI efforts. Leaders in these firms aren’t afraid of possible risks. And they gain value from AI by more rapidly implementing use cases and adopting sophisticated applications.

Through proactive Responsible AI engagement and leadership, investors can drive the responsible development and deployment of AI technologies, ensuring that these innovations contribute positively to corporate performance and market dynamics.

Where can investors begin? 3 quick steps

Step 1: Develop Responsible AI commitments and apply its principles and practices to internal operations.

Investors looking to integrate Responsible AI across their portfolios should first become knowledgeable about AI and Responsible AI and apply these principles to their own operations. This includes defining their own Responsible AI principles and priorities, including what they will not invest in.

Step 2: Conduct Responsible AI due diligence on the portfolio.

Investors should perform proper due diligence to assess how companies and investment partners are positioned to meet Responsible AI principles.

Step 3: Engage with companies, external managers, and the broader ecosystem.

Working with companies, external managers and other players can extend investors' influence and help them maximize the value derived from their AI-enabled investments.
