How do you spot a deepfake? This is what the experts say
Deepfake technology is being used by some to commit crimes. Image: REUTERS
- Deepfakes – realistic but fake videos, images or audio clips of people – are being used to commit financial and sexual crimes.
- Experts are sharing tips on how to spot a deepfake, but they are not foolproof.
- The World Economic Forum’s Typology of Online Harms aims to provide a common, universal language to advance digital safety.
When Taylor Swift is giving away cookware or Donald Trump is dancing with Elon Musk, our deepfake antennae start to twitch.
But deepfakes – in which AI is used to create hyper-realistic but fake videos, images or audio clips of people – are not always light-hearted or easy to spot.
In late 2023, deepfake audio recordings derailed a top candidate in Slovakia’s elections. In early 2024, an employee in Hong Kong was duped into paying out $25 million to fraudsters after they faked a video conference call with him.
In a record year for elections around the world, political candidates are understandably nervous – and with good reason. New research reveals that deepfakes impersonating politicians and celebrities “to influence public opinion” are the most common misuse of the technology, ahead of cybercrime.
How can you spot a deepfake?
With 60% of US adults concerned about deepfakes, being able to identify them is becoming a crucial skill.
The MIT Media Lab has offered some tell-tale signs to look out for in deepfake visuals, including:
1. Blinking and lip movements: do they follow natural rhythms?
2. Reflections in eyes and glasses: are they consistent, and do they make visual sense?
3. Skin age: does the apparent age of the skin match that of the eyes and hair?
It’s all in the eyes, according to new research from the University of Hull in the UK. If the subject has a matching reflection in each eye, it’s likely a real image, but if there is inconsistency in the two reflections, it’s probably a fake.
"There are false positives and false negatives; it's not going to get everything,” cautions Professor Kevin Pimbblet, one of the researchers. “But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes."
Deepfakes and the law
It’s not just politicians and celebrities who can suffer as a result of deepfakes.
Deepfake pornography – in which a person’s image is manipulated so that it appears they are engaging in a sexual act – is becoming a big issue, particularly in South Korea, which is experiencing “a digital sex crime epidemic”, the BBC reports.
Police in South Korea have dealt with 297 deepfake sex crimes so far this year, says Reuters, with the majority of both victims and perpetrators being teenagers.
In the UK, over half of under-18s are concerned about becoming a victim of deepfake pornography and the government is taking steps to tackle the issue, making it a criminal offence to create a sexually explicit deepfake image.
There is currently no federal legislation in the US that bans, or even regulates, deepfakes – despite a reported 1,740% increase in deepfake fraud in 2023. The Federal Trade Commission is, however, drafting new rules that would ban the production and distribution of deepfakes that impersonate individuals.
How can we tackle deepfakes?
While the law scrambles to keep up, here are some other ways businesses and society are trying to tackle the deepfake problem:
Stanford University is using AI to spot AI misuse, for example. The same tool it originally developed to help editors seamlessly insert or delete spoken words from videos was co-opted by people making deepfakes. To combat this, Stanford researchers have now created a tool that can detect the lip-synch technology in 80% of fake cases.
Blockchain technology can help to combat deepfakes, says Scott Doughman, Chief Business Officer at Seal Storage Technology, as it can “tighten the vulnerabilities associated with single points of failure”.
Educating the public to have a zero-trust mindset when it comes to online content is another key deterrent, argues Anna Maria Collard, Senior Vice-President at KnowBe4 Africa.
“Liveness verification” is becoming an essential extra element in biometric identity systems, says Ricardo Amper, Founder and CEO at Incode Technologies. “For instance, if a security check requires a selfie for facial recognition, a criminal could try to present a photo or video instead of a real-time, live selfie. Liveness detection can combat these efforts by helping to determine if it’s a real, live person in the selfie.”
The World Economic Forum’s Typology of Online Harms, which classifies deepfakes under a ‘deceptive synthetic media’ banner, aims to provide a common, universal language to advance digital safety. While it recognizes that AI is an ever-evolving technology that “may give rise to new forms of harm or exacerbate existing ones”, the intention is to provide a comprehensive framework from which stakeholders can create a safer digital ecosystem.