Emerging Technologies

How do you spot a deepfake? This is what the experts say

A green wireframe model covers an actor's lower face during the creation of a synthetic facial reanimation video, known alternatively as a deepfake, in London.

Deepfake technology is being used by some to commit crimes. Image: REUTERS

Madeleine North
Senior Writer, Forum Stories
This article is part of: Centre for Cybersecurity
  • Deepfakes – realistic, but fake, videos, images or audio clips of people – are being used to commit financial and sexual crimes.
  • Experts are sharing tips on how to spot a deepfake, but they are not foolproof.
  • The World Economic Forum’s Typology of Online Harms aims to provide a common, universal language to advance digital safety.

When Taylor Swift is giving away cookware or Donald Trump is dancing with Elon Musk, our deepfake antennae start to twitch.

But deepfakes – in which AI is used to create hyper-realistic, but fake, videos, images or audio clips of people – are not always light-hearted or easy to spot.

In late 2023, deepfake audio recordings derailed a top candidate in Slovakia’s elections. In early 2024, an employee in Hong Kong was duped into paying out $25 million to fraudsters after they faked a video conference call with him.


In a record year for elections around the world, political candidates are understandably nervous about the situation. And they’re right to be – new research reveals that deepfakes that impersonate politicians and celebrities “to influence public opinion” are the most common misuse of the technology – ahead of cybercrime.

Chart: concerns among US adults about the spread of artificial intelligence (AI) video and audio deepfakes as of August 2023, by gender.
At least 6 in 10 people in the US are concerned about deepfakes. Image: Statista

How can you spot a deepfake?

With 60% of US adults concerned about deepfakes, being able to identify them is becoming a crucial skill.

The MIT Media Lab has offered some tell-tale signs to look out for in deepfake visuals, including:

1. Blinking and lip movements: study them to see if they are following natural rhythms.

2. Reflections in eyes and glasses: are they consistent and do they make visual sense?

3. Skin age: does the age of the skin match that of the eyes and hair?
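The blinking check in point 1 can be sketched as a simple heuristic. Below is a hedged sketch that assumes you already have a per-frame eye-openness score (0 = fully closed, 1 = fully open) from a face-landmark model; the 0.2 closed threshold and the 8–30 blinks-per-minute "natural" range are illustrative assumptions, not values from the article.

```python
def blink_count(eye_openness, closed_below=0.2):
    # Count open-to-closed transitions in a per-frame eye-openness
    # signal (0 = fully closed, 1 = fully open).
    blinks, was_closed = 0, False
    for score in eye_openness:
        closed = score < closed_below
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

def blink_rate_plausible(eye_openness, fps, per_minute=(8, 30)):
    # Flag footage whose blink rate falls outside a rough human range;
    # early deepfakes were notorious for subjects who rarely blinked.
    minutes = len(eye_openness) / fps / 60
    rate = blink_count(eye_openness) / minutes
    return per_minute[0] <= rate <= per_minute[1]
```

A video where the subject never blinks, or blinks far too often, would fail this check — though, as the experts note, such heuristics are not foolproof.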

A series of deepfake eyes showing inconsistent reflections in each eye.
Deepfake eyes often have an inconsistency in their reflections, as indicated by the green and red marks, above right. Image: Adejumoke Owolabi/University of Hull

It’s all in the eyes, according to new research from the University of Hull in the UK. If the subject has a matching reflection in each eye, it’s likely a real image, but if there is inconsistency in the two reflections, it’s probably a fake.

“There are false positives and false negatives; it’s not going to get everything,” cautions Professor Kevin Pimbblet, one of the researchers. “But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”
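The Hull researchers reportedly borrowed the Gini coefficient, a measure astronomers use to describe how light is distributed across a galaxy, to compare the two corneal reflections. A minimal illustrative sketch of that idea, assuming the eyes have already been located and cropped to grayscale NumPy arrays — the 0.15 threshold and the function names are assumptions, not the researchers' actual pipeline:

```python
import numpy as np

def gini(eye_crop):
    # Gini coefficient of pixel intensities: 0 when light is spread
    # evenly across the crop, approaching 1 when it is concentrated
    # in a few bright pixels (e.g. a point-like reflection).
    v = np.sort(eye_crop.astype(np.float64).ravel())
    n = v.size
    total = v.sum()
    if total == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return float(((2 * index - n - 1) * v).sum() / (n * total))

def reflections_inconsistent(left_eye, right_eye, threshold=0.15):
    # A real scene lights both corneas the same way, so the two Gini
    # values should be close; a large gap hints the reflections were
    # generated independently, as in many deepfakes.
    return abs(gini(left_eye) - gini(right_eye)) > threshold
```

As Pimbblet's caveat above makes clear, a threshold test like this produces both false positives and false negatives; it is a screening signal, not a verdict.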

Deepfakes and the law

It’s not just politicians and celebrities who can suffer as a result of deepfakes.

Deepfake pornography – in which a person’s image is manipulated so that it appears they are engaging in a sexual act – is becoming a big issue, particularly in South Korea, which is experiencing “a digital sex crime epidemic”, the BBC reports.

Police in South Korea have dealt with 297 deepfake sex crimes so far this year, says Reuters, with the majority of both victims and perpetrators being teenagers.


In the UK, over half of under-18s are concerned about becoming a victim of deepfake pornography and the government is taking steps to tackle the issue, making it a criminal offence to create a sexually explicit deepfake image.

There is currently no federal legislation in the US that bans, or even regulates, deepfakes – despite a reported 1,740% increase in deepfake fraud in 2023. The Federal Trade Commission is, however, drafting new rules that would ban the production and distribution of deepfakes that impersonate individuals.

How can we tackle deepfakes?

While the law scrambles to keep up, here are some other ways businesses and society are trying to tackle the deepfake problem:

Stanford University, for example, is using AI to spot AI misuse. The same tool it originally developed to help editors seamlessly insert or delete spoken words in videos was co-opted by people making deepfakes. To combat this, Stanford researchers have now created a tool that can detect the lip-synch technology in 80% of fake cases.

Blockchain technology can help to combat deepfakes, says Scott Doughman, Chief Business Officer at Seal Storage Technology, as it can “tighten the vulnerabilities associated with single points of failure”.

Educating the public to have a zero-trust mindset when it comes to online content is another key deterrent, argues Anna Maria Collard, Senior Vice-President at KnowBe4 Africa.

“Liveness verification” is becoming an essential extra element in biometric identity systems, says Ricardo Amper, Founder and CEO at Incode Technologies. “For instance, if a security check requires a selfie for facial recognition, a criminal could try to present a photo or video instead of a real-time live selfie. Liveness detection can combat these efforts by helping to determine if it’s a real, live person in the selfie.”

The World Economic Forum’s Typology of Online Harms, which classifies deepfakes under a ‘deceptive synthetic media’ banner, aims to provide a common, universal language to advance digital safety. While it recognizes that AI is an ever-evolving technology that “may give rise to new forms of harm or exacerbate existing ones”, the intention is to provide a comprehensive framework from which stakeholders can create a safer digital ecosystem.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
