
Does the UN need a watchdog to fight deepfakes and other AI threats?


Dr. Jean-Marc Rickli
Head of Global and Emerging Risks, Geneva Centre for Security Policy (GCSP)
  • Deepfakes are synthetic media that have been manipulated to falsely portray individuals saying or doing things they never actually did.
  • Deepfakes can be used to spread misinformation, damage reputations and even influence elections.
  • There is currently no global regulatory framework to govern the use of deepfakes.

Less than a month after Russia's invasion of Ukraine, a video surfaced on social media that purportedly showed Ukrainian President Volodymyr Zelenskyy urging his soldiers to surrender their arms and abandon the fight against Russia. While the lip-sync in the video appeared somewhat convincing, discrepancies in Mr Zelenskyy's accent, as well as his facial movements and voice, raised suspicions about its authenticity.

Upon closer examination, even a simple screenshot was enough to reveal that the video was indeed a fake – a deepfake. This marked the first known instance of a deepfake video being used in the context of warfare.

Deepfakes are synthetic media, including audio, images or videos, that have been manipulated to falsely portray individuals saying or doing things they never actually did.


On June 5, 2023, Russian President Vladimir Putin appeared to declare martial law and military mobilisation in the regions bordering Ukraine, with the announcement carried on various Russian radio and television networks. But it was soon discovered that Mr Putin's speech was also a fabrication – a deepfake broadcast through hacked TV and radio channels. The deepfake was so convincing that it prompted Russian officials in the Belgorod region to issue warnings, cautioning the population against falling prey to a fabrication intended to “sow panic among peaceful Belgorod residents”.

The rise of deepfakes serves as a vivid illustration of the exponential growth of artificial intelligence and the challenges it poses to both national and international governance. Deepfake technology is fuelled by the 2014 invention of generative adversarial networks (GANs) – a machine learning framework that creates new content by pitting two neural networks against each other in a competitive fashion: a generator fabricates samples while a discriminator tries to tell them apart from real data, and each network improves by trying to beat the other.
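
To make this competition concrete, here is a minimal, illustrative sketch of a GAN training loop in Python. The choice of PyTorch, the tiny network sizes and the toy one-dimensional “real” data are all assumptions made for illustration – the article names no specific toolkit, and real deepfake systems train far larger networks on images – but the generator-versus-discriminator loop is the core idea.

```python
# Minimal GAN sketch (illustrative): a generator learns to mimic a target
# distribution while a discriminator learns to separate real from fake.
import torch
import torch.nn as nn

# Generator: maps 8-dimensional random noise to a single value.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: maps a value to a real-vs-fake logit.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # toy "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))             # generated samples from noise

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(D(real), ones) + loss_fn(D(fake.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(D(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster around 3.0.
print(G(torch.randn(5, 8)).detach().squeeze())
```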


The first deepfake videos were uploaded to Reddit in the autumn of 2017; these early examples merged the faces of Hollywood actresses onto the bodies of performers in adult videos. By 2018, GANs had advanced to the point where they could generate, for instance, highly realistic images of individuals who have never actually existed. In less than two years, almost 15,000 deepfake videos had been identified online, an alarming 96 per cent of them adult content – and 100 per cent of the victims depicted in these videos were women.

Disturbingly, it was reported earlier this year that paedophiles are now employing deepfakes to create explicit images of child abuse. One paedophile in Quebec, Canada, was recently convicted after police discovered 545,000 pictures and videos of children on his computer, 86,000 of which were deepfakes generated from real children's images collected from social media, particularly Facebook.

Deepfake technology has also demonstrated its potential for other nefarious purposes beyond exploiting individuals. It can be employed to alter medical scans, creating fake tumours or removing real ones, or to manipulate satellite images to fabricate entire geographical features – so-called deepfake geography. The implications are profound, posing risks not only to personal privacy but also to various sectors, including healthcare and national security.

On November 30, 2022, OpenAI, an American artificial intelligence laboratory, released ChatGPT, an AI chatbot. Within five days, ChatGPT had garnered one million users – a milestone that took Netflix three-and-a-half years to reach. After just two months, the application boasted 100 million users, making it the fastest-growing consumer application in history until it was overtaken by Meta's Threads app this month. While the first iteration of ChatGPT (based on GPT-3.5) achieved a mediocre score (10th percentile) on the US Uniform Bar Exam, GPT-4, released on March 14, 2023, outperformed 90 per cent of aspiring lawyers attempting to pass the bar.

In a recent experiment, MIT associate professor and GCSP polymath fellow Kevin Esvelt and his students used freely accessible large language models such as GPT-4 to devise a detailed roadmap for obtaining exceptionally dangerous viruses. In just one hour, the chatbot suggested four potential pandemic pathogens, provided instructions for generating them from synthetic DNA, and even recommended DNA synthesis companies unlikely to screen orders. Their conclusion was alarming: easy access to AI chatbots will cause “the number of individuals capable of killing tens of millions to dramatically increase”.

The growing accessibility of generative AI presents not only opportunities but also immense risks, including targeted manipulation at the individual level. A recent study revealed that AI-generated responses to patient queries outperformed physicians' responses in terms of quality and empathy. Empathy, the intrinsically human ability to understand another person's feelings from their perspective rather than our own, is now being surpassed by chatbots. This should serve as a wake-up call for governments, as it opens the door to potential large-scale subversion campaigns and gives rise to a new form of warfare – cognitive warfare – where public opinion is weaponised to influence policy and destabilise public institutions. Generative AI and tools such as ChatGPT could soon be considered weapons of mass deception.

These examples underscore the exponential pace at which AI is advancing. The challenge lies in the fact that humans and organisations tend to think in a linear fashion when considering future developments. Faced with exponential growth, such as the rapid spread of the Covid-19 pandemic, many governments demonstrated slow and ill-suited responses.


In an era defined by emerging exponential technologies, global and national governance must adapt to become more reactive and anticipatory. Strategic foresight, the ability to envision and act upon potential futures, should become a standard procedure for any organisation engaged in national and global governance. This necessitates the inclusion of diverse skills and profiles among those working within these institutions. Furthermore, effectively addressing the consequences of exponential technological transformations requires the ability to identify weak signals, highlighting the need to promote polymaths – individuals with knowledge spanning various subjects – to break free from silo thinking and groupthink.


On July 18, 2023, the UN Security Council will convene its first-ever meeting to discuss the potential threats posed by artificial intelligence to international peace and security. The UN already addresses certain aspects of this issue through, for instance, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), which examines the potential impact of autonomous weapons on international humanitarian law and possible regulations or bans. However, autonomous weapons also have profound implications for strategic stability, an area hardly discussed by the GGE.

AI represents a dual-use technology even more transformative than electricity, and therefore has profound international security implications. The UN Secretary-General recently expressed support for the establishment of a UN agency on AI, similar to the International Atomic Energy Agency. Such an agency, focused on knowledge and endowed with regulatory powers, could enhance co-ordination among burgeoning AI initiatives worldwide and promote global governance on AI.

To succeed, however, the UN must transcend its traditional intergovernmental DNA and incorporate the scientific community, the private sector (the primary source of AI innovation) and civil society into new governance frameworks, including public-private partnerships. As was mentioned at the recent UN AI for Good Summit, Geneva, well endowed with a governance ecosystem conducive to such initiatives, presents an ideal venue for materialising this vision.

The deepfake and generative AI quandary serves as a sobering reminder of the immense power and multifaceted security challenges posed by artificial intelligence. In the pursuit of responsible AI governance, we must prioritise protection against malevolent exploitation while nurturing an environment that encourages ethical innovation and societal progress.

Embracing strategic foresight, unshackling ourselves from linear thinking, fostering diverse collaborations and building in security by design are crucial steps towards collectively shaping an AI-powered future that upholds ethical principles, preserves democratic values and secures the well-being of humanity in a rapidly transforming technological landscape. By forging this path, we can pave the way for a more equitable, secure and prosperous society in the age of AI.


