How timing could be the key to combatting misinformation online
Researchers looked at the effectiveness of presenting participants with fact-checking labels as they read headlines. Image: Unsplash/Glenn Carstens-Peters
Peter Dizikes
Writer, MIT News Office
- An MIT study presented participants with fact-checking labels when reading headlines.
- Researchers found that labels shown after a false headline were more effective at reducing people's misclassification of headlines.
- The research helps inform tools that social media platforms could use, as they look for better methods to label and limit the flow of misinformation online.
The battle to stop false news and online misinformation is not going to end any time soon, but a new finding from MIT scholars may help ease the problem.
In an experiment, the researchers discovered that fact-checking labels attached to online news headlines actually work better after people read false headlines than when the labels precede or accompany the headlines.
“We found that whether a false claim was corrected before people read it, while they read it, or after they read it influenced the effectiveness of the correction,” says David Rand, an MIT professor and co-author of a new paper detailing the study’s results.
Specifically, the researchers found, when “true” and “false” labels were shown immediately after participants in the experiment read headlines, it reduced people’s misclassification of those headlines by 25.3 percent. By contrast, there was an 8.6 percent reduction when labels appeared along with the headlines, and a 5.7 percent decrease in misclassification when the correct label appeared beforehand.
“Timing does matter when delivering fact-checks,” says Nadia M. Brashier, a cognitive neuroscientist and postdoc at Harvard University, and lead author of the paper.
The paper, “Timing Matters When Correcting Fake News,” appears this week in Proceedings of the National Academy of Sciences. The authors are Brashier; Rand; Gordon Pennycook, an assistant professor of behavioral science at the University of Regina’s Hill/Levene Schools of Business; and Adam Berinsky, the Mitsui Professor of Political Science at MIT and the director of the MIT Political Experiments Research Lab.
To conduct the study, the scholars ran experiments with a total of 2,683 people, who looked at 18 true news headlines from major media sources and 18 false headlines that had been debunked by the fact-checking website snopes.com. Treatment groups of participants saw “true” and “false” tags before, during, or after reading the 36 headlines; a control group did not. All participants rated the headlines for accuracy. One week later, everyone looked at the same headlines, without any fact-check information at all, and again rated the headlines for accuracy.
The findings confounded the researchers’ expectations.
“Going into the project, I had anticipated it would work best to give the correction beforehand, so that people already knew to disbelieve the false claim when they came into contact with it,” Rand says. “To my surprise, we actually found the opposite. Debunking the claim after they were exposed to it was the most effective.”
But why might this approach — “debunking” rather than “prebunking,” as the researchers call it — get the best results?
The scholars write that the results are consistent with a “concurrent storage hypothesis” of cognition, which proposes that people can retain both false information and corrections in their minds at the same time. It may not be possible to get people to ignore false headlines, but people are willing to update their beliefs about them.
“Allowing people to form their own impressions of news headlines, then providing ‘true’ or ‘false’ tags afterward, might act as feedback,” Brashier says. “And other research shows that feedback makes correct information ‘stick.’” Importantly, this suggests that the results might be different if participants did not explicitly rate the accuracy of the headlines when being exposed to them — for example, if they were just scrolling through their news feeds.
Overall, Berinsky suggests, the research helps inform tools that social media platforms and other content providers could use, as they look for better methods to label and limit the flow of misinformation online.
“There is no single magic bullet that can cure the problem of misinformation,” says Berinsky, who has long studied political rumors and misinformation. “Studying basic questions in a systematic way is a critical step toward a portfolio of effective solutions. Like David, I was somewhat surprised by our findings, but this finding is an important step forward in helping us combat misinformation.”
The study was made possible through support to the researchers provided by the National Science Foundation, the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the William and Flora Hewlett Foundation, the Reset Project of Luminate, the Social Sciences and Humanities Research Council of Canada, and Google.