This is how AI can help identify biases in news media
Artificial intelligence can help identify biases in news reporting. Image: Unsplash/tetrakiss
- Artificial intelligence can help identify biases in news reporting, according to researchers.
- A study compared simulated news coverage with actual reporting on COVID-19 and found a marked difference.
- Researchers say the approach opens up new avenues of study in which AI could be used to model US Supreme Court decision-making, for instance.
Artificial intelligence can help identify biases in news reporting that we wouldn’t otherwise see, researchers report.
For a new study, researchers used a computer program to generate news coverage of COVID-19, using headlines from Canadian Broadcasting Corporation (CBC) articles as prompts. They then compared the simulated news coverage to the actual reporting at the time.
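The article doesn't detail the study's exact pipeline, but the core idea, prompting a language model with real headlines and collecting its continuations as "simulated" coverage, can be sketched with off-the-shelf tools. Here is a minimal sketch assuming a Hugging Face text-generation model; the model choice, generation settings, and example headline are illustrative, not the study's configuration:

```python
# Minimal sketch: prompt a pretrained language model with real headlines
# and collect its continuations as "simulated" coverage.
# Assumptions: gpt2 as the model and a made-up headline; the study's
# actual model and settings are not specified in this article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

headlines = [
    "COVID-19 cases climb as provinces weigh new restrictions",  # hypothetical headline
]

simulated_articles = []
for headline in headlines:
    # Sample a continuation of the headline as a stand-in article body
    output = generator(headline, max_new_tokens=150, do_sample=True)
    simulated_articles.append(output[0]["generated_text"])
```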
The findings show that CBC coverage was less focused on the medical emergency and more positively focused on personalities and geo-politics.
“Reporting on real-world events requires complex choices, including decisions about which events and players take center stage,” says Andrew Piper, professor of languages, literatures, and cultures at McGill University. “By comparing what was reported with what could have been reported, our study provides perspective on the editorial choices made by news agencies.”
Evaluating these alternatives is critical given the close relationship between media framing, public opinion, and government policy, according to the researchers.
“The AI saw COVID-19 primarily as a health emergency and interpreted the events in more bio-medical terms, whereas the CBC coverage tended to focus on person- rather than disease-centered reporting,” Piper says.
“The CBC coverage was also more positive than expected given that it was a major health crisis, producing a sort of rally-around-the-flag effect. This positivity works to downplay public fear.”
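One way to make such a positivity gap concrete is to score both corpora with a sentiment classifier and compare the averages. The sketch below assumes Hugging Face's default sentiment-analysis pipeline and two made-up snippets standing in for the real and simulated corpora; it is an illustration, not the study's actual measure:

```python
# Minimal sketch: compare average sentiment of real vs. simulated coverage.
# The classifier and the placeholder texts are assumptions for illustration;
# the study's actual positivity measure is not described in this article.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

# Hypothetical stand-ins for the two corpora
real_articles = ["Neighbours rally to deliver groceries to isolated seniors."]
simulated_articles = ["Hospitals report rising case counts and strained ICU capacity."]

def mean_signed_score(texts):
    # A positive label contributes +score, a negative label -score
    scores = [
        r["score"] if r["label"] == "POSITIVE" else -r["score"]
        for r in sentiment(texts, truncation=True)
    ]
    return sum(scores) / len(scores)

gap = mean_signed_score(real_articles) - mean_signed_score(simulated_articles)
print(f"Positivity gap (real minus simulated): {gap:+.3f}")
```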
While many studies seek to understand the biases inherent in AI, there is also an opportunity to harness it as a tool to reveal the biases of human expression, the researchers say. “The goal is to help us see things we might otherwise miss,” Piper says.
“We’re not suggesting that the AI itself is unbiased. But rather than eliminating bias, as many researchers try to do, we want to understand how and why the bias comes to be,” says Sil Hamilton, a research assistant and student working under Piper’s supervision.
For the researchers, this work is just the tip of the iceberg, opening new avenues of study in which AI can be used not only to examine past human behavior but also to anticipate future actions, such as forecasting potential political or judicial outcomes.
Hamilton is currently leading a team on a project that uses AI to model US Supreme Court decision-making.
“Given past judicial behavior, how might justices respond to future pivotal cases or older cases that are being re-litigated? We hope new developments in AI can help,” he says.