
New technologies are helping to identify sophisticated AI deepfakes. Here’s how

Biometric technology includes iris scans, but new developments now confirm ‘liveness’. Image: Unsplash/Arteum.ro

Ricardo Amper
Founder and Chief Executive Officer, Incode Technologies
  • Deepfakes and other AI-based technology are helping identity thieves to develop increasingly sophisticated scams.
  • Both people and organizations are at risk from identity-based fraud, whether it’s a company hiring remotely or a person making a bank transfer.
  • Biometric technology provides a way to verify identities, and it is becoming more sophisticated – moving beyond fingerprints and facial recognition to help people and organizations confirm that the identity being verified belongs to a real, live person.

How do you know if someone really is who they say they are? What was once a simple question has become far more complicated recently, given the rise of deepfake and other AI-based technology. Today, it’s not even just a question of whether someone is who they say they are, but whether there’s even a real person behind the “identity”.

Celebrities and other public figures aren’t the only ones this technology can be used against; fake identities can be used to scam anyone. More than half of the respondents (52%) to a recent study by Northeastern University said they couldn’t tell the difference between text and images created by humans and those created by AI.

We should all be aware of the potential for this kind of fraud as the world becomes increasingly digital and more of our daily interactions move online. Since proving identity is now more important – but also harder – than ever, industry leaders must work out how their organizations can address this challenge. Biometric technology offers a great deal of hope.

The tricksters get trickier with deepfakes

From financial transactions to dating, hiring to healthcare, digital identity has become the cornerstone of many of our daily interactions. Organizations need to be able to prove someone really is who they say they are, whether they’re interviewing a potential job candidate or authorizing a money transfer.

People also need assurance that the organizations they’re entrusting their sensitive information to will safeguard it. You want to know they are doing their due diligence to ensure that when someone requests a bank transfer from your account, it’s a legitimate request from a real person.

Examples of what can go wrong are rife. Deepfake audio or video of a trusted person within a company can be used to trick victims into handing over credentials or sensitive business information. Fake identities can be used to help scammers apply for and obtain remote jobs and then steal sensitive customer or company information.

Criminals have also devised social engineering scams, which exploit a person’s trust to obtain money or confidential information. For example, you might get an email requesting money that looks like it's coming from a legitimate organization that you regularly conduct transactions with, such as your bank or insurance company.

Such fraud can also be conducted via social media. Recent scams on Discord and Twitter aimed to provoke strong emotions like shame or fear to get users to hand over the login details for their social media accounts. Bad actors can then use those credentials to pose as those individuals and gain access to the credit cards, bank accounts and other services attached to those profiles.

And that’s just what attackers have come up with so far.

Industry leaders have to get a better handle on this situation. This means creating strong identity-verification strategies that both ensure there’s a real person behind a process and prove that the person is who they claim to be. Identity verification isn’t just the right thing to do; it’s also often a legal requirement. The EU’s General Data Protection Regulation (GDPR), for example, includes identity verification among its requirements.

This isn’t just a private sector matter either; it also applies to public services. Citizens have the right to access the essential services of their governments, but to do so, these providers must be able to verify users’ identities.

Legacy authentication methods aren’t enough

More than half of all cyber attacks on government agencies, critical infrastructure and state-level government bodies involved valid accounts, including the use of credentials of former employees whose accounts hadn’t been disabled. So where do criminals get all this login information? One report revealed 775 million credentials for sale on the dark web, along with thousands of ads for “access-as-a-service” (a fee-based service that grants access to compromised systems).
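
To make the first of those risks concrete, here is a minimal sketch, in Python, of the kind of routine audit that catches still-enabled accounts belonging to former employees. The account records, field names and 90-day threshold are hypothetical stand-ins for what a real directory service or identity provider would supply.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical account records; a real audit would pull these from a
# directory service or cloud identity provider.
accounts = [
    {"user": "a.lee", "enabled": True, "employed": False, "last_login": "2023-01-10"},
    {"user": "b.kim", "enabled": True, "employed": True, "last_login": "2024-11-02"},
]

STALE_AFTER = timedelta(days=90)
NOW = datetime(2024, 12, 1, tzinfo=timezone.utc)  # fixed "now" so the example is reproducible

def audit(account: dict) -> Optional[str]:
    """Return a recommended action for a risky account, or None if it looks fine."""
    last = datetime.strptime(account["last_login"], "%Y-%m-%d").replace(tzinfo=timezone.utc)
    if account["enabled"] and not account["employed"]:
        return "disable immediately: former employee"
    if account["enabled"] and NOW - last > STALE_AFTER:
        return "review: no login in 90+ days"
    return None

for acct in accounts:
    action = audit(acct)
    if action:
        print(f"{acct['user']} -> {action}")  # a.lee -> disable immediately: former employee
```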

Clearly, as attackers and fraudsters become more sophisticated, legacy identity verification methods are no longer up to the job of protecting us. Biometric technology offers a better way to verify identity, and it is becoming both easier to use and more capable.

You’ve heard many times that the strongest password is a lengthy string of upper- and lowercase letters, numbers and symbols. In other words, the key to safety is a password that’s long and, inevitably, hard to remember. That trade-off is one reason biometric ID verification is so attractive: attributes like your fingerprints or face become your password, and they’re unique to you. Plus, you don’t have to worry about forgetting your first pet’s name or the favourite movie you picked when you set up the account five years ago.
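
A quick back-of-the-envelope calculation shows the bind that advice puts users in: the entropy of a uniformly random password is its length multiplied by log2 of the character-set size, so the strongest passwords are precisely the least memorable ones. The lengths and alphabet sizes below are illustrative.

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy in a uniformly random password: length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

print(entropy_bits(8, 26))   # ~37.6 bits: 8 lowercase letters, easily cracked
print(entropy_bits(16, 94))  # ~104.9 bits: 16 printable ASCII characters, strong but unmemorable
```

Biometric attributes sidestep that trade-off: there is nothing for the user to memorize in the first place.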


Biometric technology also offers a quick, seamless and affordable way for organizations to practice advanced identity verification. Their customers don’t need to remember complicated strings of numbers and letters, and verification can happen passively – think of how some people can unlock their cellphone with their face, for instance.
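
Under the hood, passive face verification of this kind typically reduces to comparing compact feature vectors (“embeddings”) produced by a face-recognition model. The sketch below assumes such embeddings are already available; the cosine-similarity measure and the 0.6 threshold are common but illustrative choices, not any particular vendor’s implementation.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative; real systems tune this against false accept/reject targets

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(enrolled: np.ndarray, probe: np.ndarray) -> bool:
    """Accept the new capture if its embedding sits close enough to the one stored at enrollment."""
    return cosine_similarity(enrolled, probe) >= MATCH_THRESHOLD
```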

Plus, biometric technology is now advancing beyond facial, fingerprint and even iris recognition. This is a good thing, given those methods’ dependence on images: deepfakes have become so sophisticated that an image alone can’t confirm whether a person is really present or just a photo on a screen, someone wearing a mask, or a print on paper.

New developments like liveness verification are becoming an essential element of biometric identity systems. For instance, if a security check requires a selfie for facial recognition, a criminal could try to present a photo or video instead of a real-time, live selfie. Liveness detection can combat these efforts by helping to determine whether there’s a real, live person in the selfie.
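
One simple form of liveness detection is an active challenge-response check: the system asks for randomly chosen actions and verifies they happen on cue, which a static photo or pre-recorded video cannot do. The sketch below is schematic; detect_action stands in for a hypothetical computer-vision routine, and production systems typically layer passive checks (texture, depth and reflection analysis) on top.

```python
import random

CHALLENGES = ["blink", "turn your head left", "turn your head right", "smile"]

def run_liveness_check(detect_action, rounds: int = 2) -> bool:
    """Issue random challenges and fail closed if any action isn't observed live in time."""
    for challenge in random.sample(CHALLENGES, k=rounds):
        print(f"Please {challenge} now")
        if not detect_action(challenge, timeout_s=5):
            return False  # replayed footage can't react to an unpredictable prompt
    return True
```

Because the prompts are random and time-boxed, an attacker would have to fabricate a convincing response in real time rather than replay captured footage.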

Fraudsters work tirelessly to take what isn’t theirs, including identities. As AI technology continues to advance, organizations have to work harder to ensure their customers’ and employees’ data and identities are safe – and that they are remotely hiring actual people who match their IDs. Today’s organizations can fight fire with fire, defeating deepfake and AI technologies with rapidly advancing biometric technology.
