Emerging Technologies

This AI can spot fake robbery reports


No hiding place for liars. Image: REUTERS/Max Rossi

Olivia Goldhill
Weekend Writer, Quartz

There’s no foolproof way to know if someone is lying when they speak, but scientists have developed a tool that seems remarkably accurate at judging written falsehoods. Using machine learning and text analysis, they’ve been able to identify false robbery reports with such accuracy that the tool is now being rolled out to police stations across Spain.

Computer scientists from Cardiff University and Charles III University of Madrid developed the tool, called VeriPol, specifically to focus on robbery reports. In their paper, published in the journal Knowledge-Based Systems earlier this year, they describe how they trained a machine-learning model on more than 1,000 robbery reports from the Spanish National Police, including reports that were known to be false. A pilot study in Murcia and Malaga in June 2017 found that, once VeriPol identified a report as having a high probability of being false, 83% of those cases were closed after the claimants faced further questioning. In total, 64 false reports were detected in one week.

VeriPol works by using algorithms to identify various features in a statement, including adjectives, verbs, and punctuation marks, and then picking up on the patterns that recur in false reports. According to a Cardiff University statement, false robbery reports are more likely to be shorter, to focus on the stolen property rather than the robbery itself, to give few details about the attacker or the incident, and to lack witnesses.
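
To make the approach concrete, here is a minimal sketch of this kind of text classifier: report texts are converted into word and punctuation counts and a linear model is trained to separate true from false statements. This is not VeriPol's actual pipeline; the scikit-learn components, the toy reports, and the labels below are illustrative assumptions, and the simple bag-of-words counts stand in for the richer linguistic features (such as adjective and verb frequencies) described in the paper.

```python
# Hypothetical sketch of a false-report classifier, not the VeriPol code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example reports with labels (1 = false report, 0 = genuine).
reports = [
    "My phone was stolen, it was a new model, very expensive.",
    "Two men approached me on Calle Mayor, one grabbed my bag and they ran "
    "towards the metro; a shopkeeper saw everything and called the police.",
    "They took my wallet from behind, I did not see them.",
    "A man in a red jacket pushed me near the market at 6pm, took my watch "
    "and fled on a blue scooter; my friend was with me and chased him.",
]
labels = [1, 0, 1, 0]

# Count words and punctuation marks in each statement, then fit a
# logistic-regression classifier on those counts.
model = make_pipeline(
    CountVectorizer(token_pattern=r"[\w']+|[.,!?;]"),
    LogisticRegression(),
)
model.fit(reports, labels)

# Score a new statement: the model outputs the probability it is false.
new_report = ["My laptop was stolen, it was very expensive, I saw nothing."]
print(model.predict_proba(new_report)[0][1])
```

In a real system the training set would be the hand-labelled police reports and the features would capture the cues the researchers highlight, such as statement length, vocabulary about stolen goods, and the presence or absence of details about the attacker and witnesses.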


Taken together, these sound like common-sense characteristics that humans could recognize. But the AI proved more effective at unemotionally scanning reports and spotting patterns, at least compared with historical data: in a typical June week, police detect just 12.14 false reports in Malaga and 3.33 in Murcia.

Of course, that doesn’t mean the tool is perfect. “[O]ur model began to identify false statements where it was reported that incidents happened from behind or where the aggressors were wearing helmets,” study co-author Dr Jose Camacho-Collados, from Cardiff University’s School of Computer Science and Informatics, said in a statement. Bad luck for those who really were robbed from behind, or by someone wearing a helmet.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
