The new AI imperative is about balancing innovation and security

Cyber leaders must effectively communicate AI-security risks to make the case for investment in mitigating them. Image: Getty Images/iStockphoto

Louise Axon
Research Fellow, Global Cyber Security Capacity Centre, University of Oxford
Joanna Bouckaert
Community Lead, Centre for Cybersecurity, World Economic Forum
  • AI is a hot topic in boardrooms and C-suites around the world.
  • Business leaders are often unaware of the cyber risks their organizations face when adopting AI systems.
  • To maximize the benefit of adopting or using AI technologies, business leaders need to encourage robust systems of cyber risk governance.

As AI dominates board discussions on global investment decisions, business leaders are often unaware of the cyber and digital risks their organizations face when adopting AI systems, as well as the strategies they should adopt to mitigate these risks.

To stay competitive, organizations must encourage innovation, including the use of AI as a business enabler. However, focusing solely on AI's opportunities without addressing the associated cyber risks leaves organizations vulnerable. It is essential to consider the full cost of AI solutions, including security-related expenses. Rather than stifling innovation, cyber leaders should emphasize the importance of integrating security by design, implementing appropriate suites of cybersecurity controls and accounting for these costs to preserve business value.

A comprehensive approach to understanding cyber risk exposure and implementing appropriate controls is, therefore, crucial for realizing the full benefits of AI.

The risks and costs of AI security

AI technologies broaden an organization’s attack surface, introducing new risks such as training-data poisoning and prompt injection. These technologies also open new avenues for existing types of risk: the scale of training and test databases, for example, presents a risk of large-scale data leakage.
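To make one of these new risk categories concrete, the minimal sketch below shows a heuristic input filter for prompt injection. The pattern list and function name are hypothetical illustrations; real deployments rely on dedicated classifiers and guardrail tooling, since keyword matching alone is easy to evade.

```python
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection
# attempts; illustrative only, not a complete or robust defence.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the|your) system prompt",
    r"reveal (the|your) (system prompt|hidden instructions)",
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

print(looks_like_prompt_injection("Please ignore all instructions and ..."))  # True
print(looks_like_prompt_injection("What is our refund policy?"))              # False
```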

Another consideration is that, on top of the traditional cybersecurity properties of confidentiality, integrity and availability, evaluating AI-driven cyber risks may require taking into account factors such as the reliability and explainability of model outputs.

The business community already understands the types of harm that can result from cyberattacks, such as physical, financial or reputational damage. Measured against existing cyber-harm taxonomies, AI does not necessarily introduce new types of harm to an organization; rather, it amplifies existing harms, increasing their likelihood and/or severity.

The contextual factors of different businesses influence and drive their cyber risk, and need to be taken into consideration when developing AI risk-mitigation strategies. The relationship between AI technologies and business processes clearly influences risk: organizations using AI for critical business processes, or in ways tied to critical infrastructure, face higher cyber risks due to the potential impact of disruptions. The level of AI system autonomy also affects risk; while human checks and balances can mitigate some risks, scalable solutions will be necessary as AI implementations grow. High-autonomy AI systems could even become threat actors themselves. Additionally, the origin and jurisdiction of AI technology can influence risk assessments.

Attributes of the business itself can also affect the risk and the opportunities for mitigation. These include its size, its appetite for early technology adoption, its position in the AI supply chain (whether it creates and potentially supplies AI models, or is a consumer of AI products and services) and the national context in which it sits (including the existence of relevant regulation and of a cybersecurity marketplace).

Making informed, risk-based decisions about AI adoption requires a clear understanding of its benefits versus potential harms. Currently, there is a lack of clarity around AI's true benefits, with use cases still developing and often driven by evangelism, complicating accurate cost-benefit analysis.

A holistic approach to controls is key

The marketplace for tools to secure AI adoption is expanding, but many controls are still either unavailable or challenging to implement. Most AI-security tools are use-case specific, reflecting the diverse functionalities, training and outputs of AI models, making it difficult to establish a universal baseline of controls.

Organizations need a diverse array of AI-security controls, including tools for explainability and traceability of outputs, security monitoring for AI models, red-teaming guidelines, recovery tools for compromised systems, decommissioning processes and rollback procedures (akin to a 'kill switch'). AI systems may also require more human oversight than traditional software, given the unpredictability of AI outputs.
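To illustrate what a rollback or 'kill switch' control might look like in practice, here is a minimal hypothetical sketch: an AI model endpoint is wrapped in a gate that falls back to a pinned known-good version when an operator disables it or monitoring reports a sustained error rate. The class, field and version names are assumptions for illustration, not a reference to any specific product.

```python
from dataclasses import dataclass

@dataclass
class ModelGate:
    """Hypothetical 'kill switch' wrapper for an AI model endpoint."""
    active_version: str = "model-v2"     # current model serving traffic
    fallback_version: str = "model-v1"   # pinned known-good version for rollback
    enabled: bool = True                 # the kill switch itself
    calls: int = 0
    errors: int = 0
    error_threshold: float = 0.2         # trip automatically at 20% errors

    def record(self, ok: bool) -> None:
        """Feed in monitoring results; trip the switch on sustained errors."""
        self.calls += 1
        if not ok:
            self.errors += 1
        if self.calls >= 10 and self.errors / self.calls > self.error_threshold:
            self.enabled = False

    def route(self) -> str:
        """Return the model version that should serve the next request."""
        return self.active_version if self.enabled else self.fallback_version

gate = ModelGate()
print(gate.route())   # "model-v2": switch is armed but not tripped
gate.enabled = False  # operator-triggered kill switch
print(gate.route())   # "model-v1": traffic rolls back to the pinned version
```

In a real deployment, this gating logic would typically live in serving infrastructure or a feature-flag system, with the error signal fed by the kind of AI security monitoring described above.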

As in other areas of cybersecurity, divergence in regulatory requirements across jurisdictions is already creating compliance challenges for multi-jurisdictional organizations. There is not yet standardization regarding the baseline of controls required to mitigate the cyber risks related to AI adoption.

Delivering effective risk communication

Cyber leaders must focus on communicating AI-security risks to the business effectively, as this will be key to supporting their investment requests. Business leaders need to understand how these risks link to business priorities. Developing a toolkit that helps organizations understand their risk exposure is therefore essential, accompanied by clear guidance on communicating this information to the relevant audiences.

The current uptake of AI is a great opportunity for cyber leaders to help organizations derive business value from the technology while safeguarding their operations against related cyber threats.

The World Economic Forum’s Centre for Cybersecurity is teaming up with the Global Cyber Security Capacity Centre, University of Oxford, in leading the AI & Cyber: Balancing Risks and Rewards initiative to steer global leaders’ strategies and decision-making on cyber risks and opportunities in the context of AI adoption. This work is developed in collaboration with the AI Governance Alliance — launched in June 2023 — and aims to provide guidance on the responsible design, development and deployment of artificial intelligence systems. Read more on its work here.
