Deepfakes proved a different threat than expected. Here's how to defend against them
Deepfakes are here to stay, and we need to learn how to defend against them. Image: Freepik
- Deepfakes were widely anticipated to disrupt global elections and create a misinformation and disinformation apocalypse.
- Deepfakes failed to turn the tide in any candidate's favour, although their ineffectiveness doesn't mean they're harmless.
- As AI technology improves, organizations must be vigilant and maintain awareness to protect their people and systems.
What is the biggest threat from deepfakes? If you had asked a year ago, many people would have pointed to their ability to disrupt global elections and create a misinformation and disinformation apocalypse. But that’s not what happened.
During the 2024 election cycles, we saw memes, propaganda and poor-quality ‘AI slop’, none of which turned the tide in any candidate’s favour.
But deepfakes are still with us, and their ineffectiveness in affecting elections doesn’t mean they are harmless.
Many got deepfakes wrong
I’ve been writing about deepfakes since 2020, warning that they posed a different threat than we expected, not the misinformation/disinformation threat to elections that many predicted.
A recent report from Meta supports this, finding that less than 1% of all fact-checked misinformation during the 2024 election cycles was AI-generated content.
Despite major elections across the globe, including the world’s biggest in India, concluding without significant AI incidents, experts continued trumpeting the dangers right up until the 2024 US presidential election.
Many overestimated the impact of deepfakes because they couldn’t fathom how a highly convincing image or video wouldn’t fool people. After all, we had images like the Pope in a Puffer Jacket and Katy Perry’s dress at the Met Gala, which did indeed fool people. However, these images had no stakes and didn’t conflict with people’s biases.
Elections can be highly polarizing and deeply divisive. Information consumption and sharing under these circumstances tend to align with people’s biases. In many cases, the truth itself doesn’t change people’s minds in these scenarios.
Many ignored the fact that fake information was a large part of the internet long before the emergence of generative AI (GenAI). Despite this, we haven’t entered a misinformation/disinformation apocalypse. Misinformation has become a form of entertainment, with people sharing memes and propaganda supporting their perspective to signal solidarity and irritate the other side.
The real risks of deepfakes
As the technology powering deepfakes has become both more capable and more accessible, two immediate categories of harm have emerged: harassment and social engineering.
Deepfakes can be used to target and harass individuals. The most extreme form of harassment is the creation of non-consensual pornography. These images can be posted outright or used as leverage in blackmail and sextortion scams.
This form of harassment has wider societal impacts, and new laws are being proposed to address it specifically. One example is the TAKE IT DOWN Act in the United States, which the Senate passed unanimously.
The second set of risks involves social engineering attacks. Deepfakes can be used to exploit both people and technology. For example, an attacker can use AI to clone a family member’s voice and use it to try to scam a relative out of money. This type of scam is on the rise, prompting organizations such as the US Federal Trade Commission to issue a consumer alert.
Beyond scams, attackers can target individuals in an organization with more advanced social engineering. These attacks can use cloned voices or escalate to more elaborate schemes incorporating video deepfakes.
In February 2024, it was reported that a finance worker at a multinational firm in Hong Kong was tricked into paying out $25 million after a video call in which all of the other participants, including the company’s chief financial officer, were deepfakes.
Deepfakes can also be used to attack systems, typically those that provide authentication or verification. Used this way, they can expose brittle verification strategies, such as a bank relying on a simple voiceprint check.
However, while deepfake technology can enhance social engineering attacks, of the numerous incident response engagements the Kudelski Security IR team handles each year, none to date has involved advanced deepfake techniques. Basic social engineering methods continue to dominate.
How to defend against deepfakes
The accessibility of the tools and the low friction involved in creating content mean deepfakes are here to stay. This, unfortunately, puts much of the onus on individuals to defend themselves.
For defence against scams, discuss this scenario with your family and make them aware of the dangers. Families should set up a shared ‘secret’: a word or phrase that can be requested to validate that a caller really is the family member they claim to be.
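For the technically minded, here is a minimal Python sketch of the underlying idea: checking a response against a stored secret without keeping the secret in plain text. The passphrase, salt and function names are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
import hmac

# Illustrative only: the salt and passphrase below are made up, and the
# secret itself would be agreed in person, never over the channel being
# verified.
SALT = b"example-salt-change-me"

def normalize(phrase: str) -> bytes:
    # Lowercase and collapse whitespace so minor wording slips still match.
    return " ".join(phrase.lower().split()).encode("utf-8")

def phrase_digest(phrase: str) -> bytes:
    # Store a slow, salted hash rather than the secret itself.
    return hashlib.pbkdf2_hmac("sha256", normalize(phrase), SALT, 100_000)

STORED_DIGEST = phrase_digest("blue heron at the lake")  # agreed in advance

def verify_caller(response: str) -> bool:
    # compare_digest performs a constant-time comparison, avoiding
    # information leaks through timing.
    return hmac.compare_digest(phrase_digest(response), STORED_DIGEST)

print(verify_caller("Blue heron at the lake"))   # True
print(verify_caller("grey heron at the lake"))   # False
```

For a family, of course, the check is simply spoken aloud; the sketch just shows the same challenge-response logic an organization might automate.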
For organizations and workplaces, responsibility is shared between employees and the technology they deploy. User education and awareness programmes need to be augmented with examples of deepfake attacks so users understand the scenarios and the potential damage.
Deepfake detection technology is an ongoing area of research. Despite this, certain steps can be taken to lower the risk and raise the bar for attackers.
Strong authentication and authorization should be in place. Identify areas of weak authentication or verification, such as a simple voiceprint check or a flat-image document check, and augment them with additional layers of security.
Voice-based authentication can be augmented with an additional secret that an attacker wouldn’t know. For facial-based verification, liveness checks can be implemented, requiring the person to rotate and turn their head (a sketch of the idea appears below).
The same concept can be applied to documents such as photo IDs, requiring them to be rotated and turned. None of these techniques is foolproof, and attackers are already finding ways around them, which means organizations will have to layer security measures to make things more difficult for attackers.
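To illustrate the challenge-response idea behind a liveness check, here is a minimal Python sketch. The estimate_head_pose helper is an assumed placeholder for a real pose-estimation model; the flow is illustrative, not any vendor’s implementation.

```python
import secrets

POSES = ["left", "right", "up", "down"]

def issue_challenge(length: int = 3) -> list[str]:
    # A fresh random sequence each session, so a pre-recorded or replayed
    # deepfake video is unlikely to match the requested movements.
    return [secrets.choice(POSES) for _ in range(length)]

def estimate_head_pose(frame) -> str:
    # Assumption: a real system would run a pose-estimation model here
    # and return one of POSES for the captured video frame.
    raise NotImplementedError("plug in a pose-estimation model")

def verify_liveness(challenge: list[str], frames: list) -> bool:
    # The captured frames must reproduce the challenged poses, in order.
    if len(frames) != len(challenge):
        return False
    return all(estimate_head_pose(frame) == pose
               for frame, pose in zip(frames, challenge))
```

The important design choice is the per-session randomness: because the sequence cannot be predicted, an attacker cannot prepare a matching video in advance, and a live deepfake has to render the requested movements convincingly in real time.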
Some of the issues with deepfakes are out of our control. It may take government intervention to create laws against certain uses of the technology, such as non-consensual pornography. In the meantime, people who are being harassed can work with technology platforms like Meta, X and others to report the behaviour.
Most of all, organizations need to be prepared to evolve strategies and adapt. As AI technology improves and new techniques emerge, organizations must be vigilant and maintain awareness of attacker techniques to protect their people and systems.