Curbing misinformation in India: How does a fact-checking WhatsApp helpline work?
- Meta is launching a helpline in India to help detect deepfake content on WhatsApp.
- The Indian prime minister has warned of the potential damage AI could cause.
- The Forum’s Global Coalition for Digital Safety is working to enhance online literacy.
Meta has announced the launch of a new helpline and fact-checking service in India, aimed at curbing the spread of AI-generated “deepfake” content on its WhatsApp messaging service.
This intervention is timely. In 2024, half the world’s population will vote in elections. The biggest election of all will take place in India, where almost 987 million people are registered to vote in a ballot to be held in April and May.
Indian Prime Minister Narendra Modi has warned that deepfake videos and other manipulated content could pose a serious risk to society in his country, the world’s biggest democracy.
Speaking to students in December 2023, he said: “These videos look very real and, therefore, we need to be very careful before believing the authenticity of a video or an image.”
Earlier, in September 2023, Modi used an address at the G20 Summit to call for global regulations for AI. India has also issued direct warnings to tech companies, reminding them of their responsibilities to prevent the posting and distribution of deepfake content.
How will Meta’s deepfake helpline work?
Meta has partnered with India’s Misinformation Combat Alliance (MCA) to operate the helpline.
WhatsApp users will be able to report suspicious content via a chatbot in the app. The helpline will provide multilingual support in English, Hindi, Tamil and Telugu, ensuring a wide cross-section of the Indian population can access the service.
At the heart of the service is the Deepfakes Analysis Unit, a specialized MCA team that will handle all messages flagged through the WhatsApp helpline. This unit will work with 11 fact-checking organizations, industry partners and digital labs to scrutinize and debunk misinformation in deepfake content.
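The reporting flow described above — a user flags content via the chatbot, the report is handled in one of four supported languages, and the item is queued for review by the analysis unit — can be sketched roughly as below. This is a minimal illustration only: the class and field names (`Report`, `AnalysisUnit`, `triage`) are hypothetical and do not reflect Meta’s or the MCA’s actual systems.

```python
from dataclasses import dataclass, field

# The four languages the helpline supports, per the article:
# English, Hindi, Tamil and Telugu (ISO 639-1 codes).
SUPPORTED_LANGUAGES = {"en", "hi", "ta", "te"}


@dataclass
class Report:
    """A user-flagged piece of suspicious media (hypothetical shape)."""
    media_id: str
    language: str


@dataclass
class AnalysisUnit:
    """Illustrative stand-in for the Deepfakes Analysis Unit's intake queue."""
    queue: list = field(default_factory=list)

    def triage(self, report: Report) -> str:
        # Handle the report in a supported language, falling back to
        # English when the requested language is not offered.
        lang = report.language if report.language in SUPPORTED_LANGUAGES else "en"
        self.queue.append((report.media_id, lang))
        return lang
```

For example, a report flagged in Tamil would be queued in Tamil, while one in an unsupported language would fall back to English before reaching reviewers.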
Meta and the MCA have built the system with a commitment to four outcomes: detection, prevention, reporting and awareness. This strategy is designed to not only combat the spread of deepfakes but also to educate the public about the dangers of AI-generated misinformation.
Bharat Gupta, President of the Misinformation Combat Alliance, says the service will play a crucial role in defending Indian democracy.
“The Deepfakes Analysis Unit (DAU) will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation. With Meta’s support … we hope the DAU will become a trusted resource for the public to discern between real and AI-generated media.”
How to spot a deepfake
To raise the alert about potential deepfakes, the Meta helpline relies on WhatsApp users being able to distinguish synthetic content from the real thing. That’s a challenge given how realistic deepfakes can look and sound.
But there are clues to look out for when trying to judge if a piece of content is genuine – or a deepfake. Certain features in deepfakes can give the game away, according to cybersecurity company Norton. These include unnatural eye movement and facial expressions, a lack of emotion and abnormal skin tone.
Enhancing digital media literacy
The growth of deepfake content highlights a need to enhance digital media literacy. Raising awareness of the existence of deepfakes and the risks they pose will help ensure a safer online environment for all.
The Forum’s Global Coalition for Digital Safety (GCDS) aims to accelerate public-private cooperation to tackle harmful content online and will serve to exchange best practices for new online safety regulations.
Under four central pillars, the GCDS is developing a set of principles on digital safety, a toolkit for safe product design and innovation, a digital safety risk assessment framework and best practice for developing media literacy programmes.
Meta’s deepfake helpline in India demonstrates that there are technical solutions for improving online safety and protecting democracy from those who would use AI to distort reality.
But we will all need to educate ourselves as we enter an era in which AI has the potential to blur the line between fact and fiction, in ways we have never seen before.