Year of elections: Lessons from India's fight against AI-generated misinformation
- The recently concluded general election in India delivered numerous lessons for those monitoring AI-generated misinformation.
- Misinformation and disinformation are the biggest short-term risks, according to Global Risks Report 2024.
- The Deepfakes Analysis Unit (DAU) is a unique resource in India that allows the public to share content via a WhatsApp tipline to verify its authenticity.
The fervor that enveloped India during its recent general election has subsided, but it left valuable lessons for those of us monitoring the rise of AI-generated misinformation. The nexus between misinformation or disinformation and societal unrest will take centre stage amid elections in several major economies over the next two years, as outlined in the World Economic Forum’s Global Risks Report 2024.
The Deepfakes Analysis Unit (DAU) – set up under the aegis of the Misinformation Combat Alliance – is a unique resource that engages with the public in India through a WhatsApp tipline. It allows users to share misleading or harmful audio and video content suspected to have been generated by artificial intelligence, in whole or in part.
The project was launched in March 2024, less than a month before the first phase of polling in the Indian election. Since then, the tipline has received and reviewed hundreds of unique audio and video files; images fall outside its scope. Videos form the bulk of the content we've received, with generative AI detected to varying degrees.
What are the different types of synthetic media?
The DAU's work is guided by a set of evolving definitions, which help us classify audio and video content as 'deepfake' (created using AI), 'cheapfake' (created using basic editing software), 'manipulated' or 'AI-generated', with each category signifying how generative AI has been used in a piece of content.
Not everything AI-generated is a deepfake: the content may not be harmful or misleading, and it may have been produced with the consent of the subject featured. The production quality of a cheapfake is poor compared with that of a deepfake, which can easily be mistaken for real content.
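To make the distinctions above concrete, here is a minimal sketch of how such a typology might be modelled in code. This is purely illustrative: the category names follow the article, but the decision rule and all function names are hypothetical, not the DAU's actual classification scheme.

```python
from enum import Enum, auto

class MediaLabel(Enum):
    DEEPFAKE = auto()      # generative AI used; harmful or misleading
    AI_GENERATED = auto()  # generative AI used, but not necessarily harmful
    CHEAPFAKE = auto()     # produced with basic editing software
    MANIPULATED = auto()   # altered by other means

def classify(uses_generative_ai: bool, basic_edit_only: bool,
             harmful_or_misleading: bool) -> MediaLabel:
    """Toy decision rule reflecting the article's definitions (hypothetical)."""
    if basic_edit_only:
        return MediaLabel.CHEAPFAKE
    if uses_generative_ai:
        if harmful_or_misleading:
            return MediaLabel.DEEPFAKE
        return MediaLabel.AI_GENERATED
    return MediaLabel.MANIPULATED
```

The key point the sketch encodes is that "AI-generated" and "deepfake" are not synonyms: intent and harm, not just the tooling, determine the label.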
Who is being targeted by synthetic media?
Our analysis showed that politicians, prominent businesspeople, actors and news anchors are the main targets of bad actors who use AI to manipulate or produce video and audio. Packaging misinformation around well-known faces may increase the likelihood of people falling for it; for example, featuring business leaders in videos promoting financial scams.
In some cases, the lip movements of subjects were altered using AI tools so that they matched a synthetic speech track, even if not perfectly. This technique is referred to as a "lip-sync deepfake"; sometimes it involves recreating the subject's mouth entirely using AI. Blurred lips or altered teeth are among the signs to watch for when analysing a video's authenticity.
We received several videos via the tipline that used less sophisticated AI tools to make politicians appear to utter dialogue from Bollywood movies. Other examples included blending or swapping the faces of Bollywood actors with those of politicians while retaining the other elements of the original video.
How can you spot a deepfake?
A bot guides users to share dubious audio or video for analysis each time they connect to the tipline; in the process, they can also learn some tips and tricks to identify synthetic media through short videos in four languages (English, Hindi, Tamil, and Telugu).
The DAU carefully reviews each audio and video file sent to the tipline to verify its authenticity. If any element of AI is suspected, the DAU runs the content through multiple AI detection tools available to it through existing and exploratory partnerships, as well as some free tools.
If at least three tools indicate AI manipulation in a piece of content, the DAU escalates it to its forensic and detection partners for expert analysis. Once that analysis is received, the DAU produces an assessment report, publishes it on its website and sends it to the user who submitted the audio or video. These detailed reports correct the public record by identifying content as synthetic or authentic; baked into their methodology is a crash course in how to spot synthetic media.
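The escalation step described above amounts to a simple threshold rule over per-tool verdicts. The sketch below illustrates that logic only; the function name, return strings and workflow details are hypothetical, not the DAU's actual pipeline.

```python
from typing import List

# From the article: escalation requires at least three tools flagging AI.
ESCALATION_THRESHOLD = 3

def review(detector_flags: List[bool]) -> str:
    """Return the next step for a tipline submission.

    detector_flags holds one boolean per detection tool, True if that
    tool indicates AI manipulation in the content.
    """
    if sum(detector_flags) >= ESCALATION_THRESHOLD:
        return "escalate to forensic partners for expert analysis"
    return "no escalation; continue routine verification"
```

For instance, `review([True, True, True, False])` triggers escalation, while `review([True, False, False])` does not. Requiring agreement across several detectors hedges against any single tool's false positives.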
What can be done to stop deepfakes?
Collaboration is the foundation of the DAU's work. We share our analysis with our partner fact-checkers (12 media organizations working across multiple languages in India) and, because the DAU's mandate is to verify rather than fact-check, we include their fact-checks in the assessments we send to tipline users.
We rely on the expertise of our partners for content translations and to gauge regional misinformation trends.
Our mission is to grow our roster of partners so that we can contribute to the fact-checking ecosystem in India and globally. We are beginning to see the impact of our collaborative approach – for example, our escalations to ElevenLabs have led to them banning a series of users who misused their tools to generate synthetic voices to spread misinformation.
What have we learnt so far?
During the election cycle, we saw fewer deepfakes; instead, more videos were manipulated using synthetic audio tracks. There were also many cheapfakes that used AI voices but no generative AI for the visual elements. These cheapfakes are a pervasive strain of misinformation that needs to be combated.
Audio generated using AI, in whole or in part, is more difficult to identify because detection tools are sometimes limited in their capacity to accurately flag AI-generated speech, especially when background noise or music has been mixed into the audio track. This holds both for stand-alone audio and for tracks that are part of a video.
We aim to educate tipline users and contribute to the public discourse on best practices for identifying and labelling AI-generated content. We invite academics, detection experts and social media platforms to share their feedback on how to standardize definitions. Perhaps the "Typology of Online Harms", developed by the Global Coalition for Digital Safety, could serve as a reference for building a global industry standard for labelling such content, enabling swift action against harmful and misleading AI-generated media.
The lessons we are learning at the DAU will be vital during the upcoming state elections in India, where the threat of AI manipulation only magnifies the misinformation challenges posed by language diversity and local context.
During this "super year" for elections, the lessons will also benefit many countries around the world that are interested in understanding how AI can be misused during an election cycle. The playbook remains the same: bad actors are using generative AI to harm and mislead, with the only differentiator being their access to sophisticated AI technology.