How Chief Information Security Officers can help organizations unlock the potential of generative AI
The Chief Information Security Officer can help companies guard against risks while unlocking the transformative potential of generative AI. Image: Unsplash/Charles Deluvio
Sabrina Feng
Chief Risk Officer, Technology, Cyber and Resilience, London Stock Exchange Group (LSEG)

- Generative AI offers organizations enormous potential but comes with risks, too.
- Chief Information Security Officers will play a vital role in shaping the concrete business case for the technology, articulating its risks and challenges, and defining the key performance indicators that will drive targeted solutions.
- Generative AI provides a significant opportunity for CISOs to position themselves as business enablers throughout the organization.
As organizations across the world race to adopt generative AI at an unprecedented pace, it is crucial to evaluate the technology's risks to the enterprise. The Chief Information Security Officer (CISO) plays a key role here, helping to unlock the transformative potential of generative AI while ensuring protection against cyber risks.
The newly deployed generative AI systems in which organizations ingest, store and process their data are vulnerable to attack, creating an avenue for malicious actors to steal substantial amounts of data, manipulate outputs or use the systems as a stepping stone to attack connected ones. Those actors may even employ their own generative AI systems, reducing the cost of devising and launching attacks and increasing their breadth, depth and sophistication. It is imperative for organizations to anticipate these risks and be prepared to respond swiftly.
CISOs need to actively contribute to key activities across their organizations to become business enablers while ensuring the secure adoption of generative AI.
Engaging in the governance of generative AI
Many organizations have already established a multidisciplinary centre of excellence for generative AI, encompassing teams such as legal, cybersecurity, compliance, technology, risk management and human resources. These centres typically manifest as steering committees or working groups that convene on a regular basis, actively fostering collaboration among various departments and assuming responsibility for several aspects, including:
- Governance of generative AI: This encompasses tasks such as formulating new policies, updating the IT code of conduct, defining fundamental responsible AI principles, and articulating the value of generative AI for the organization. It also entails the creation of a robust risk management framework, conducting associated risk assessments, and implementing necessary controls to manage these risks.
- Education: This involves managing internal awareness campaigns, building an AI-literate workforce and designing communications regarding the use of generative AI tools.
- Resource allocation for projects: This encompasses determining staffing needs for new projects, ensuring the technology is developed with diverse perspectives in mind.
- Alignment: This entails ensuring that local and group technology and product teams are informed about the new principles that are developed.
It is key that the CISO engages in these established generative AI-focused centres to substantially contribute to the ongoing discussions across all major aspects of the technology.
Being a catalyst for innovation
It is vital for organizations to maintain an entrepreneurial spirit in their approach to generative AI. Leaders should be inspired by early adopters and evolve based on the lessons learned from deploying the technology across their own operations.
This means embedding flexibility into the governance of the technology to allow for safe adaptation. Imposing blanket restrictions on access to generative AI technologies, or overly stringent policies, is counterproductive, especially given high adoption rates – 45% in the US and 73% in India – which suggest employees are already using these tools.
New technologies will continually emerge, so organizations should prioritize developing a governance structure that accounts for those evolutions. The CISO can and should play a pivotal role in this endeavour.
Promoting training and awareness
Conducting comprehensive training and awareness initiatives that educate employees is crucial to ensure secure usage. By fostering a better understanding of issues such as biases in algorithms, privacy concerns and security vulnerabilities of AI systems, organizations can empower their workforce to make informed decisions when deploying or interacting with AI technologies.
It is also vital to provide guidance and clearly outline the obligations associated with the use of publicly accessible generative AI tools. Moreover, training and awareness efforts should be aligned with the ongoing monitoring of generative AI tool usage, facilitating faster responses to instances of misuse and associated threats.
When developing training programmes, it is paramount to tailor them to different target audiences to meet their needs. Cybersecurity teams, for example, should be equipped with a deep understanding of how generative AI models operate, enabling them to conduct accurate security reviews and enhance their credibility when advising IT teams.
Contributing to risk management activities
Although AI is not a novel technology for organizations, its democratization for diverse purposes presents new challenges. In the past, organizations often centralized the utilization of AI technologies, making it easier for the cybersecurity team to track and control them. However, the widespread adoption of generative AI across organizations has made this task more complex. A critical challenge for organizations is to rapidly expand and streamline governance and risk management in tandem with the growing use of generative AI technologies throughout the organization.
To achieve scalability in governance and risk management, it is essential to establish a risk and control framework that aligns with the specific use cases and service models of the technology. Organizations should also clearly define their risk tolerance and make well-informed decisions, considering associated trade-offs.
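To make this concrete, the sketch below shows one way a risk and control register for generative AI use cases might be expressed. It is a minimal illustration only: the use-case fields, risk ratings and control names are assumptions chosen for the example, not a standard framework.

```python
# Minimal, illustrative risk-and-control register for generative AI use cases.
# Field values, risk ratings and control names are assumptions for this example.

from dataclasses import dataclass, field

@dataclass
class GenAIUseCase:
    name: str                # e.g. "customer-support chatbot"
    service_model: str       # "public SaaS", "private API" or "self-hosted"
    data_sensitivity: str    # "public", "internal" or "confidential"
    risk_tolerance: str      # organization-defined: "low", "medium" or "high"
    required_controls: list[str] = field(default_factory=list)

def baseline_controls(use_case: GenAIUseCase) -> list[str]:
    """Map a use case to an illustrative baseline set of controls."""
    controls = ["acceptable-use policy sign-off", "human review of outputs"]
    if use_case.data_sensitivity != "public":
        controls += ["prompt and response logging", "data loss prevention scanning"]
    if use_case.service_model == "public SaaS":
        controls += ["block uploads of confidential data", "vendor security assessment"]
    if use_case.risk_tolerance == "low":
        controls += ["pre-deployment security review"]
    return controls

chatbot = GenAIUseCase(
    name="customer-support chatbot",
    service_model="public SaaS",
    data_sensitivity="internal",
    risk_tolerance="medium",
)
chatbot.required_controls = baseline_controls(chatbot)
print(chatbot.required_controls)
```

In practice such a register would live in the organization's governance, risk and compliance tooling rather than in code, but mapping each use case and service model to a baseline set of controls in this way makes the risk-tolerance trade-offs explicit and auditable.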
Maintaining a balance among different risk types (model-based, data-related and legal) is crucial, and the CISO offers an important perspective for achieving a well-rounded approach to generative AI risk management. Given that the risks posed by generative AI systems cut across multiple risk disciplines, it is imperative to analyze and address them from an interdisciplinary perspective.
The role of the CISO
The CISO assumes a critical role in shaping the concrete business case for generative AI, articulating its risks and challenges, and defining the key performance indicators (KPIs) that will drive targeted solutions. The CISO should strike a balance between harnessing the advantages of generative AI for the organization and its customers while ensuring the proper protection of systems. The responsibilities of the CISO with regard to generative AI technologies could encompass:
- Understanding the organization's business objectives: This allows the CISO to anticipate emerging risks and develop corresponding controls.
- Establishing governance structures: Creating governance structures for data usage and privacy, and giving users ownership of and responsibility for their interactions with generative AI technologies.
- Advising on the integration of generative AI into products: Collaborating with product management, risk and legal teams to ensure secure generative AI is an integral part of business cases from the outset.
- Participating in internal AI initiatives: Taking an active part in the design and monitoring of AI initiatives, with a focus on enhancing processes with security controls.
- Educating and communicating: Informing and educating the business community on the secure use of AI systems and clearly delineating instances where generative AI should not be employed.
- Exploring opportunities: Identifying and adopting opportunities presented by generative AI systems for the cybersecurity team, such as supporting Security Operations Centre (SOC) activities on trend detection, attack response and incident handling (see the sketch after this list).
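As a simple illustration of the trend-detection idea in the last point, the sketch below counts hypothetical SOC alert categories per day and flags those whose volume spikes above a rolling baseline. The alert format, category names and threshold are assumptions made for the example; a real SOC would work from its own telemetry and detection logic.

```python
# Illustrative only: flag alert categories whose daily volume spikes above a
# simple rolling baseline. Alert format, categories and threshold are assumptions.

from collections import Counter
from statistics import mean

def spiking_categories(daily_alerts: dict[str, list[str]], factor: float = 2.0) -> list[str]:
    """daily_alerts maps an ISO date string to the alert categories seen that day."""
    days = sorted(daily_alerts)
    if len(days) < 2:
        return []
    today, history = days[-1], days[:-1]
    categories = {c for day in history for c in daily_alerts[day]}
    baseline = {c: [] for c in categories}
    for day in history:
        counts = Counter(daily_alerts[day])
        for c in categories:
            baseline[c].append(counts.get(c, 0))
    today_counts = Counter(daily_alerts[today])
    # Flag categories well above their historical average (floor of 1 avoids noise).
    return [c for c, n in today_counts.items()
            if n > factor * max(mean(baseline.get(c, [0])), 1)]

alerts = {
    "2024-06-01": ["phishing", "malware"],
    "2024-06-02": ["phishing"],
    "2024-06-03": ["phishing"] * 6 + ["malware"],
}
print(spiking_categories(alerts))  # ['phishing']
```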
Generative AI provides a significant opportunity for CISOs to position themselves as business enablers throughout the organization. While many have taken initial steps, CISOs need to become key contributors to internal AI governance, risk management, and training and awareness initiatives to successfully contribute to the secure transformation of their organizations.
The World Economic Forum launched the AI Governance Alliance in June 2023. It aims to provide guidance on the responsible design, development and deployment of artificial intelligence systems. Read more on its work here.