Emerging Technologies

AI: These are the biggest risks to businesses and how to manage them

Over half of the chief risk officers surveyed say they understand how regulating AI development could affect their organizations.


Douglas Broom
Senior Writer, Forum Agenda


  • Artificial intelligence has the potential to bring “significant benefits” to sectors like agriculture, education and healthcare.
  • But a new report by the World Economic Forum highlights the potential dangers of AI to business and society.
  • Risk professionals say better regulation is needed to allow us to reap the benefits safely.

Artificial intelligence (AI) promises to deliver significant benefits to businesses and society – but it also has the potential to cause significant harm if we fail to understand the risks that the technology poses.

That’s the view of Chief Risk Officers (CROs) from major corporations and international organizations who participated in the World Economic Forum's Global Risks Outlook Survey.

The survey is detailed in the Forum’s mid-year Chief Risk Officers Outlook, which warns that risk management is not keeping up with the rapid advances in AI technologies.

Three-quarters of the CROs surveyed said that the use of AI poses a reputational risk to their organization, while nine out of ten said more needed to be done to regulate the development and use of AI.

Almost half were in favour of pausing or slowing down the development of new AI technologies until the risks are better understood. AI “creates a complex and uncertain environment in which organizations must operate,” the report says.

“Recent months have seen a sharp increase in discussion of technology-related risks, particularly in the context of a surge of interest in the exponential advances being made by generative AI technologies.”

Significant benefits and significant harms

Although AI technologies have the potential to provide “significant benefits” in sectors like agriculture, education and healthcare, “they also have the power to cause significant harms,” the report says.

Among the biggest risks identified by the CROs is malicious use of AI. Because they are easy to use, generative AI technologies can be open to abuse by people seeking to spread misinformation, facilitate cyber attacks or access sensitive personal data.

What makes AI a serious risk is its “opaque inner workings”. No one, the report says, fully understands how AI content is created. This increases the risks highlighted by the CROs of inadvertent sharing of personal data and bias in decision-making based on AI algorithms.

The risks of the rapid rise of AI – this is what CROs think. Image: World Economic Forum

The lack of clarity about how AI works also makes it hard to anticipate future risks, the report adds, but the CROs say the areas of business most at risk from AI are operations, business models and strategies.

All of those surveyed agreed that the development of AI was outpacing their ability to manage its potential ethical and societal risks – and 43% said the development of new AI technologies should be paused or slowed until their potential impact was better understood.

Regulating AI development

Over half of the CROs said they understood how regulation might affect their organization and 90% said that efforts to regulate the development of AI needed to be accelerated.

More than half are planning to conduct an audit of the AI already in use in their organizations to assess its safety, legality and “ethical soundness”, although some said senior management were unwilling to view AI as a business risk.

We need to move faster to regulate AI, say CROs. Image: World Economic Forum

Peter Giger, Group Chief Risk Officer at Zurich Insurance Group and one of the CROs who contributed to the Outlook report, says it’s wrong to ignore the risks, but businesses need to take a wider, longer-term approach.


“Too much and too narrow a focus on the risks that are likely to dominate over the next six months or so can lead us into being easily distracted from dealing with the big risks that will determine the future,” he said.

“AI offers a good example, favouring the long-term-thinking approach. Will AI disrupt everyone’s lives today? Probably not. It’s, for many of us, not an immediate threat. But ignoring the implications and trends that AI is going to bring with it over time would be a massive mistake.”


Guidance for responsible AI use

In June, the Forum published recommendations for the responsible development of AI which urged developers to be more open and transparent and to use more precise and shared terminology.

The guidance also called for the tech industry to do more to increase public understanding of AI capabilities and limitations and to build trust by taking greater account of society’s concerns and user feedback.

Learn more about the Forum's AI Governance Alliance here.

More on the Global Risks Outlook Survey

The Global Risks Outlook Survey was conducted among the Forum’s CRO community, which includes risk professionals from a wide range of multinational companies covering fields including technology, financial services, healthcare, professional services and industrial manufacturing.

Aside from AI, respondents identified macroeconomic conditions, pricing and supply disruptions of key raw materials, armed conflicts and regulatory changes as top concerns for organizations.


