Heard about deepfakes? Don’t panic. Prepare
You may have noticed the panic around the threat of 'deepfakes'. They are one form of so-called 'synthetic media', which draws on advances in AI and machine learning to change elements of a photo, video or audio track, or to recreate a person's voice or face with life-like subtlety. The insidious idea that anyone's face or voice could be faked, and that any video or photo might be untrustworthy, challenges our trust in reality.
To date, deepfakes have been used in a few instances to attack the credibility of journalists. They have also been widely used to co-opt celebrities non-consensually into pornographic content. But broader malicious uses to disrupt political debate, undermine national security, confuse human rights investigations and attack businesses and civil society groups are not yet widespread. We presently have an opportunity, before we enter the 'eye of the storm', to be proactive and non-alarmist in addressing threats. Now is the time to prepare, not panic, about deepfakes.
This is why I have been leading efforts at WITNESS, the human rights organization focused on the power of video and technology for good, to bring together key people to develop pragmatic approaches. We recently held the first-ever expert meeting connecting technologists, industry insiders, researchers, human rights investigators and journalists to shape the solutions shared here.
Why do deepfakes matter? Our culture is increasingly structured around images. We chronicle our lives on Snapchat and Instagram and use YouTube as our encyclopaedia. At WITNESS, we’ve seen how this matters for exposing injustice and impunity, from the civic journalists and activists documenting Syria’s civil war to how the Movement for Black Lives coalesced around visual evidence of police violence. Yet we are cognitively ill-equipped to discern fake images from real ones, and to separate fabrication from what we actually remember. Meanwhile the institutions of fact-checking and verification at scale are only just starting to catch up with fake audio, images and video.
Firstly, we must build on what already exists. In human rights and journalism, there is a strong practice of what is known as open-source intelligence (OSINT), used to test the veracity of images and the credibility of sources found online. It is built on the last decade’s experience of what we might call shallowfakes - an issue that WITNESS’ Media Lab has frequently researched. Examples of ‘shallowfakes’ include audio manipulations on video to impugn refugees in Europe, mis-contextualized videos shared in messaging apps to incite communal violence and create ‘digital wildfire’, and deliberately altered videos to undermine war crimes investigations.
We must connect the researchers who are building new detection tools for deepfakes to the journalists who use OSINT techniques in newsrooms around the world. If we fear the use of these manipulation tools in upcoming elections, we need to support cross-company misinformation prevention efforts, such as the current Comprova project in Brazil. Newsrooms must also continue to support each other in pushing back on the pernicious threat of plausible deniability, which uses the existence of deepfakes as ammunition for the claim that you cannot believe anything you see or hear.
Secondly, we should ensure that we ask the right questions about the technology infrastructure we want in order to combat deepfakes. One common response to deepfakes is to demand that every image have a clear provenance, including the ability to trace it back to an initial source and to check any edits made along the way. The use of blockchain or another form of distributed media ledger is a frequent element in such proposals.
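To make the idea concrete, here is a minimal sketch of one building block of media provenance - recording a cryptographic fingerprint of a file at capture time so later copies can be checked against it. It assumes a hypothetical local append-only log rather than any particular blockchain or product, and the file and source names are illustrative only.

```python
# A hedged sketch of provenance recording, not any specific system's implementation.
import hashlib
import json
import time


def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a media file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def record_provenance(path: str, source: str, log_path: str = "provenance_log.jsonl") -> None:
    """Append a timestamped provenance entry to a local JSON-lines log."""
    entry = {
        "file": path,
        "sha256": fingerprint(path),
        "source": source,
        "recorded_at": time.time(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")


# Example (hypothetical file name and source identifier):
# record_provenance("clip.mp4", source="field-camera-01")
```

Anyone holding a later copy of the clip could recompute its hash and compare it to the logged entry; any mismatch signals that the bytes have changed, though not what was changed or why.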
In principle this sounds good, but in practice it raises serious questions about how to do it - not only technically, but also in a way that does not create its own harms. For example, how will people in high-risk situations be able to protect their identity, or revoke a piece of media authentication that might create danger for them later? How will we avoid a disproportionate ratchet effect, by which the introduction of a new technology excludes individuals without the latest tools? In our current information environment, such individuals would be the human rights defenders and civic journalists in Myanmar and Brazil who already face 'fake news' claims. We need a concentrated effort that brings together companies, from chipmakers to platforms, in a meaningful dialogue about how to pursue this responsibly.
We must also address the role of moderation on social media and video-sharing platforms. As the UN Special Rapporteur on Freedom of Expression has recently noted, platforms should base their policies on international human rights law, including freedom of expression, and introduce transparency around decision-making. Many deepfakes will be creative content, personal communication or satire. Malicious uses will hopefully be a small minority, and we should not take decisions that risk even a small rate of false positives - the incorrect removal of legitimate content.
This raises the question of what actually counts as a synthetic media item. Are we looking for face-swaps, subtle changes in the background of an image, or even the blurring effect of portrait mode on your smartphone? Platforms will need to do better at signalling the presence of AI-facilitated image manipulation.
There is an incentive here for platforms to collaborate (and, where necessary, to make decisions on malicious content that might, for example, drive imminent harm), as they have done on issues such as countering violent extremism, spam and child exploitation imagery. The incentive to collaborate is also technical: in the process of creating these AI manipulations, the forger holds the advantage until there are enough examples of a particular technique to build them into a detection approach.
Recent advances in detection, including the FaceForensics database, rely on giving AI systems training data, such as images created using the latest forgery approaches. By sharing what they encounter, platforms will have the best information with which to detect manipulations. They could also share insights on the signals of bad-actor activity that accompany the use of deepfakes - for example, the use of bots or other attempts at manipulation. Otherwise, bad actors will exploit the weakest link: the platform with the poorest detection tools.
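To illustrate why shared examples matter, the sketch below shows one common pattern behind this kind of detector: fine-tuning a generic pretrained image classifier on folders of 'real' and 'manipulated' examples. It is an assumption-laden illustration, not the FaceForensics code or any platform's production system, and the directory layout and training settings are hypothetical.

```python
# A minimal, illustrative sketch of training a real-vs-manipulated classifier.
# Assumes a hypothetical layout: data/train/real/*.jpg and data/train/fake/*.jpg
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a generic pretrained backbone and replace the final layer
# with a two-class head (real vs. manipulated).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a short, illustrative training run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is that such a model is only as good as the forgery examples it has seen: each new manipulation technique that platforms share becomes another set of 'fake' training images, which is exactly the collaborative advantage described above.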
Finally, we need to think about what new literacies the broader public needs in order to grapple with more readily faked and individualized audio, video and photos. As with the solutions above, these must connect to broader approaches to understanding how people engage with and share misinformation and disinformation. To start with, we need research on how to help people better assess credibly faked visual content. And we need to research how to better guide the user experience in the places where people will encounter this content: search engines, social networks, messaging apps and video platforms.
At a recent threat modelling workshop convened by WITNESS, one participant suggested that we need to 'build people’s confidence' that they can discern true from false in images. Part of this is making visible what a machine can detect in synthesized content, when we know the human eye will fail - for example, by showing a digital heatmap of manipulated areas.
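As a simplified illustration of that kind of heatmap, the sketch below compares a suspect frame against a reference copy and brightens the pixels that differ. Real detectors rarely have the original to hand and rely on subtler forensic cues, so this is only a hedged toy example; the file names are hypothetical.

```python
# A toy 'manipulation heatmap': brighter pixels mark larger differences
# between a reference frame and a suspect copy (hypothetical file names).
import numpy as np
from PIL import Image

original = np.asarray(Image.open("original_frame.png").convert("L"), dtype=float)
suspect = np.asarray(Image.open("suspect_frame.png").convert("L"), dtype=float)

diff = np.abs(original - suspect)                              # per-pixel difference
heat = (255 * diff / max(diff.max(), 1e-9)).astype(np.uint8)   # normalize to 0-255

Image.fromarray(heat).save("manipulation_heatmap.png")         # brighter = more changed
```

Even a crude visual cue like this can help make a machine's judgement legible to a human viewer, which is the confidence-building the workshop participant was pointing to.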
The first wave of deepfakes and maliciously synthesized media is likely to be upon us in 2019. We have the opportunity to be prepared.