In a world of deepfakes, we must build a case for trustworthy synthetic AI content

AI has made synthetic content such as deepfakes easier to create, but the use cases aren't all bad.

Beena Ammanath
Board Member, Centre for Trustworthy Technology
  • The rise of artificial intelligence (AI) technology has fuelled the use of deepfake content, including artificial video, audio and images.
  • While there are many negative examples of deepfakes, such synthetic content can also bring benefits.
  • Working together to understand synthetic content will help identify dangerous deepfakes while also fostering positive use cases for this technology.

It’s often been said that seeing is believing, but then deepfake technology came along. While photos, video and audio recordings were once the gold standard for proof of something real, new artificial intelligence (AI) models that can create realistic (but fake) media signal a paradigm shift in how people decide whether to trust content.

The term deepfake refers to manufactured text, sounds or images that exist only as a digital fabrication: “deep” learning plus “fake” outputs. This subset of synthetic content has been developing for years, but the release of large language models (LLMs) and other kinds of generative AI has fuelled free and low-cost applications that require little technical skill to use. Everyone with an internet connection now has the keys to synthetic data generators.

Of course, this genie is out of the bottle and we can’t put it back, even if we wanted to. But knee-jerk reactions to the negative uses of this technology, such as outright bans or intentional “pauses” in innovation, will not solve the problem. As with so much of AI, synthetic content is in a transitional period in which ethics and trust are in flux.

Right now, we are on one side of a turbulent river of synthetic content and the future lies on the other bank. We must build a bridge of minimal harm and maximum benefit without being swept away in the process. Construction has already begun and the challenge for policymakers, enterprise leaders, technologists and others is to help society make the crossing safely. This means getting to a place where this technology is common, familiar and trustworthy.

The question is how?

Looking beyond deepfakes

Prominent examples of deepfakes have already led to bad outcomes: high school students creating explicit images of their classmates, artificial video and audio of prominent journalists designed to mislead audiences, and a fake image of an attack on the US Pentagon that caused the stock market to briefly drop. The public is aware of these risks. A 2023 survey asked Americans about their AI concerns and 60% said they were “very concerned” about deepfakes, a larger share than for any other AI risk to society.

But advocates and innovators who see valid and beneficial applications for synthetic media are pushing back against the dimmest forecasts. Some companies are offering a kind of “deepfake-as-a-service” business model. Filmmakers and content creators could benefit, but there are also more imaginative applications such as language translation – musician FKA Twigs has created a deepfake of herself that can use her tone of voice to speak in different languages. While not as exciting, synthetic data is also being explored as a substitute for human inputs in AI training.
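
To make that last idea concrete, here is a minimal sketch of one common pattern behind synthetic training data, assuming a Python environment with numpy and scikit-learn: fit a simple generative model to a small real dataset, then sample artificial examples from it. The dataset, model choice and numbers below are hypothetical illustrations, not details drawn from this article.

    # Illustrative sketch only: learn the distribution of a small "real"
    # dataset, then sample synthetic training examples from it.
    # The data and model choice here are hypothetical assumptions.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(seed=0)

    # Stand-in for a small set of real, human-generated measurements.
    real_data = rng.normal(loc=[0.0, 5.0], scale=[1.0, 2.0], size=(200, 2))

    # Fit a simple generative model to the real data...
    generator = GaussianMixture(n_components=3, random_state=0).fit(real_data)

    # ...then draw as many synthetic examples as a training pipeline needs.
    synthetic_data, _ = generator.sample(n_samples=10_000)
    print(f"real: {len(real_data)} examples, synthetic: {len(synthetic_data)}")

If a generator like this can be trusted to reflect the real distribution, cheap synthetic samples can stand in for costly or sensitive human inputs, which is the appeal researchers are exploring.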

Current approaches to synthetic content such as these show both the opportunities and the risks. But we also need to think bigger, and longer term, to prepare for a future in which synthetic content generators are even more realistic, cheaper and more accessible. There is no single way to strip synthetic content of its risks and magnify its benefits. As with other technologies before it, gaining societal trust will hinge on how synthetic content is used and the guardrails that surround its development.

Domains of responsibility around synthetic content

The responsibility to shape and use synthetic content in a trustworthy way must be shared among multiple groups.

Firstly, technology companies determine which tools to build and release, so their decisions matter. Is it a responsible decision to deploy a voice-cloning tool knowing the potential for misuse, for example? This is a particularly important question in a year when half the world’s population will vote in democratic elections.

The private sector more generally also has a role to play. Synthetic content has a range of uses for private companies, from personalized customer engagement to supporting remote collaboration among employees. If customers feel deceived by how this technology is used, however, or if employees perceive a threat to their work or wellbeing, it will erode trust.

NGOs, nonprofits, charities and other pro-social organizations can also explore how commercial applications for synthetic content can help advance equity, access to opportunity, technology trust and literacy. The challenge for this group is to reimagine uses for synthetic content that can offer societal value for all people equally.

Lawmakers and regulators are also in an important position when it comes to synthetic content. They must create rules around a technology that is changing faster than they can deliberate. Government bodies at multiple levels are attempting to enact laws (or reapply existing laws) to govern the use of synthetic content. But a whack-a-mole legislative approach may come up short if it is perpetually reactive and disjointed. Policymakers must take a coordinated, strategic approach to governing this fast-moving technology.

Finally, the general public has a twofold responsibility. First, people must develop AI literacy to understand what this technology can do. This is not so different from how people have learned to participate in cybersecurity by watching for email scams and malicious downloads. Second, the public needs to appreciate the responsibility to use this technology productively, which requires education. This means parents teaching children, employers educating their workforces and public figures informing the public.

Each group faces a knot of decisions that, in the aggregate, will determine whether or not synthetic content deserves our trust in the future. By crossing the deepfake bridge together, we can create a trustworthy future for synthetic content.
