AI is the latest weapon cybercriminals are exploiting
Cyberattacks are becoming more potent thanks to AI – but it's helping those defending against them, too.
In March 2019, the CEO of a large energy firm authorized the urgent transfer of €220,000 to what he believed was the account of a new Eastern European supplier, after a phone call from someone he took to be the CEO of his parent company. Within hours, the money had passed through a network of accounts in Latin America to suspected criminals, who had used artificial intelligence (AI) to convincingly mimic the parent-company CEO's voice.
With one AI-enabled conversation, the criminals had bypassed layers of cybersecurity controls. Their success illustrates how emerging technologies such as AI, 5G, biometrics and new forms of encryption will reshape the cybercrime landscape for attackers and defenders alike. Organizations that must defend against cyberattacks need to work with their partners across the public and private sectors to understand how these technologies will change the risk and threat landscape, and to prepare a collective, adequate response.
That's why Equifax and the World Economic Forum convened the inaugural workshop of the Future Series: Cybercrime 2025 initiative. Global cybersecurity experts from academia, government, law enforcement and the private sector are meeting in the US city of Atlanta to explore how the key technologies of the Fourth Industrial Revolution – AI foremost among them – could change the dynamics of cybersecurity threats and defenses in the near future.
Cybercriminals are adept at adopting any technique or innovation that gives them an edge over cybersecurity defenses. Early case studies and research already point to where defenders will feel the impact first: defending against ‘strong’ AI – systems that can operate, think and act like humans – and against ‘weak’ or ‘narrow’ AI – systems modelled on human behaviour to execute specific tasks. Given its potential uses, AI is expected to drive systemic change in the cybersecurity landscape, affecting four key challenges in the near future.
Challenge 1: Increasing sophistication of attackers
Attackers of varying degrees of sophistication – from social activists to nation states – invest their efforts where they expect the highest return on investment. Organizations can make risk-based control investments to reduce their appeal to attackers: as their cybersecurity programmes mature, they become less attractive targets, because the cost of a successful attack rises relative to its expected payoff.
AI has the potential to increase both the volume and the pace of attacks, as the automation of tasks and the enhancement of malicious services further lower the barriers to mounting an attack. AI-enabled technology may also help attackers preserve their anonymity and distance from their victims, in an environment where attributing and investigating crimes is already challenging.
Challenge 2: Asymmetry
As defenders, we must stop attacks 100% of the time, whereas attackers need to succeed only once. Organizations must therefore focus on building the right capabilities – and a team that can implement the processes and technology – to reduce this asymmetry.
While AI and automation help defenders reduce variability and cost, improve scale and limit errors, attackers may use the same tools to tip the balance, automating the most resource-intensive elements of their attacks and bypassing the controls deployed against them. Near-term predictions for ‘narrow’ AI suggest that AI-enabled vulnerability scanners, which speed up the discovery and exploitation of vulnerabilities, could overwhelm current vulnerability management and cybersecurity operations capabilities.
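To make the defensive side of this concrete: one way teams keep pace with faster vulnerability discovery is to automate triage, ranking findings so the riskiest are patched first. The sketch below is purely illustrative – the field names, weights and sample findings are hypothetical, not drawn from any real scanner or from this article – but it shows the kind of prioritization step an automated pipeline might perform.

```python
# Illustrative sketch only: a toy vulnerability-triage step.
# Fields ("cvss", "exploit_public", "internet_facing") and the
# weights below are hypothetical, not a real scanner's schema.

def prioritize(findings):
    """Rank findings by a naive risk score (highest first)."""
    def score(f):
        s = f["cvss"]                  # base severity, 0-10
        if f.get("exploit_public"):    # public exploit code exists
            s += 3.0
        if f.get("internet_facing"):   # reachable from the internet
            s += 2.0
        return s
    return sorted(findings, key=score, reverse=True)

findings = [
    {"id": "VULN-1", "cvss": 9.8, "exploit_public": False, "internet_facing": False},
    {"id": "VULN-2", "cvss": 6.5, "exploit_public": True,  "internet_facing": True},
]
ranked = prioritize(findings)
```

Note how the exploitability context outranks raw severity here: the medium-severity but internet-facing, actively exploited finding is triaged ahead of the critical but unexposed one – exactly the kind of judgment that is slow by hand and fast when automated.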
Challenge 3: Increasing the attack surface / Digitalizing operations
As organizations continue to grow, so do the size and complexity of their technology and data estates, meaning attackers have more surfaces to explore and exploit. To stay ahead of attackers, organizations can deploy advanced technologies such as AI and automation to help create defensible ‘choke points’ rather than spreading efforts equally across the entire environment.
Additionally, the use of AI in business processes has the potential to change the nature of cyber-risks and assets that need to be defended. Increasing reliance on AI-enabled technology may create new opportunities for attackers to interfere with critical business processes, affecting both internal decision-making and relationships with customers.
Challenge 4: Balancing risk and operational enablement
Organizations aim to run their operations successfully and securely. A tempting reaction to a shifting risk and threat landscape is to adopt a heavy-handed security culture, which ultimately reduces competitiveness and suppresses workforce morale. Instead, security teams can take a risk-based approach: establishing governance processes and materiality thresholds, informing operational leaders of their cybersecurity posture, and identifying initiatives to improve it continuously.
Operational enablement can be further improved by using technologies like AI to change how operational and technology teams engage with security. For example, with technology available today, the time required to complete routine security processes can be cut significantly by using AI to automate their most resource- or time-intensive steps. For operational teams, more efficient security processes mean less friction in following security requirements. Further developments in AI are expected to unlock more opportunities to improve cybersecurity operations and support the balance between risk and return.
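A small example of the kind of routine, time-intensive step that automation removes: collapsing duplicate security alerts so analysts review each unique event once instead of wading through repeats. The sketch below is illustrative only – the alert fields are hypothetical, not a real SOC tool's format.

```python
# Illustrative sketch only: automating one routine security task,
# de-duplicating alerts. The "source"/"rule" fields are hypothetical.

from collections import Counter

def dedupe_alerts(alerts):
    """Collapse alerts sharing (source, rule) into one entry with a count."""
    counts = Counter((a["source"], a["rule"]) for a in alerts)
    return [
        {"source": src, "rule": rule, "count": n}
        for (src, rule), n in counts.items()
    ]

alerts = [
    {"source": "10.0.0.5", "rule": "failed-login"},
    {"source": "10.0.0.5", "rule": "failed-login"},
    {"source": "10.0.0.7", "rule": "port-scan"},
]
summary = dedupe_alerts(alerts)
```

Three raw alerts become two reviewable items; at the scale of a real operations queue, that compression is where the time savings described above come from.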
In the ever-changing cyberthreat landscape, organizations will have to defend against increasingly complex and interconnected risks. The key technologies of the Fourth Industrial Revolution, including AI, have the potential to effect a series of systemic shifts in that landscape. In the near future, AI will shape the risks to organizations, and through the Future Series: Cybercrime 2025 program, the World Economic Forum and its partners seek to identify the effective actions needed to mitigate and overcome these risks.