
Investing in AI? How can you help ensure it's safe?


Responsible AI deployment translates to a competitive advantage. Image: Unsplash/Solen Feyissa

Robin Pomeroy
Podcast Editor, World Economic Forum
Sophia Akram
Writer, Forum Agenda
  • As investors pour money into companies developing or deploying artificial intelligence (AI), what steps should they take to ensure the technology is safe and responsible?
  • In this episode of Radio Davos, we explore the questions investors should ask about AI, speaking to the managing director at a billion-dollar investment fund who co-authored a “playbook for investors” on navigating the technology.
  • Listen to the podcast on any podcast app or on YouTube.

“AI is going to have, indeed it already has, enormous benefits, but there are significant risks. And if you want to take advantage of those enormous benefits and mitigate those risks, then you really have to deploy this responsibly. And it is not as difficult as you may think it is to do so.”

So says Judy Wade, managing director and head of strategy execution and relationship management at CPP Investments (the Canada Pension Plan Investment Board), who co-authored The Responsible AI Playbook for Investors, a report published by the World Economic Forum.

With plenty of money being poured into the technology, Wade explains in this week’s Radio Davos why investors, alongside policymakers and big tech, have a role to play in ensuring its responsible deployment. What’s more, it’s in their best interests.

Here are some of the highlights of Wade’s interview on Radio Davos.


Identifying risks

Judy Wade: [There] is a lot of focus on the large language [model] developers, such as OpenAI and Meta. But there hasn’t been a lot of focus on the actual deployment of AI in the real economy. And that’s, quite frankly, where a lot of both the benefits and the potential risks are going to occur.

And as investors, we really felt that there was an outsized role for investors to play in helping accelerate responsible AI, both of those together, and that with WEF we had a fabulous platform to learn and develop what we thought responsible AI should look like for investors.

And that means adoption of AI, again within the real economy, that’s helpful, harmless and honest. I think those are three kind of easy pillars you can hang a lot underneath. And while we think that there are these extraordinary benefits of deploying AI in the real economy, doing so without appropriate guardrails creates significant risk, while doing so with them maximises the benefits.


What’s in it for investors

Judy Wade: We have to be thinking about what we want this output to look like long-term, and it’s very clear from much of the research that if you’re looking at long-term returns, you need to be looking at sustainable returns. You need to be looking at your products. For example, one of the biggest benefits of genAI is in marketing.

If your data is biased and you’re therefore excluding potentially a whole set of customers from your target customer set, that’s a long-term consequence for you. That means you’ve cut out of your total available market a whole set of customers that could be terrific.

So I think for us as investors, we’re looking for companies and [general partners] that are thinking about sustainable – I mean, I’m using that word, I think broadly – returns. And so that’s why it matters to us. And I think it was quite resonant for all of the investors that participated in this playbook that just rushing in [...] without thinking about where the values are and where the risks are, would actually also potentially affect near-term returns, not just long-term returns.

Beyond box ticking

Judy Wade: [Our] role as an investor is not to make policy or regulatory recommendations but we really think that they should be run side by side. Because if you think about it, one, the regulations are still quite fluid and they are going to evolve. I think there are 27 states here in the US developing regulations. You obviously have the EU and so it’s absolutely critical that we are not ahead of but in line with those.

But there’s much more to responsible AI than checking a legal box. Back to risk mitigation, implementing responsible AI can help companies identify and mitigate potential risks. I come back to competitive advantage. Doing so responsibly with data that isn’t biased, with the right models, is actually a competitive advantage. Innovation is really critical but even outside of the regulatory environment, you want to have that innovation done with safeguards so you’re not releasing models that are flawed.

And from a brand and reputation perspective, having a set of principles, being clear about those with your consumers is actually, again, a competitive advantage.


Check out all our podcasts on wef.ch/podcasts.


