Emerging Technologies

How one AI model is tackling clickbait

A study has revealed differences in how people and machines approached the creation of headlines. Image: REUTERS/Thomas Peter

Matthew Swayne

With training from humans and machines, an artificial intelligence model can outperform other clickbait detectors, according to new research.

The new AI-based solution was also able to tell the difference between headlines that machines—or bots—generated and ones people wrote, the researchers say.

In a study, the researchers asked people to write their own clickbait—an interesting, but misleading, news headline designed to attract readers to click on links to other online stories. The researchers also programmed machines to generate artificial clickbait. Then, researchers used the headlines from people and machines as data to train a clickbait-detection algorithm.
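The article doesn't include the researchers' code, but the general shape of such a pipeline is straightforward: pool human-written and machine-generated clickbait as positive examples, add ordinary headlines as negatives, and fit a text classifier. The sketch below is a minimal illustration under those assumptions; the toy headlines, TF-IDF features, and logistic regression classifier are placeholders, not the models the team actually trained.

```python
# Minimal sketch (not the authors' code): train a clickbait classifier
# on a mix of human-written and machine-generated headlines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data; in the study these were large labeled collections.
human_clickbait = [
    "You won't believe what this study found",
    "This one trick fools every clickbait detector",
]
machine_clickbait = [
    "10 secrets scientists don't want you to know",
    "What happened next will shock you",
]
regular_headlines = [
    "Researchers release clickbait detection study",
    "University announces new computing institute",
]

headlines = human_clickbait + machine_clickbait + regular_headlines
labels = [1] * 4 + [0] * 2  # 1 = clickbait, 0 = not clickbait

# Word and word-pair counts feed a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Nine things nobody tells you about AI"]))
```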

The resulting algorithm predicted clickbait headlines about 14.5% better than other systems, according to the researchers, who presented their findings at the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM).

Feeding the algorithm

Beyond its use in clickbait detection, the team’s approach may help improve machine learning performance in general, says Dongwon Lee, the principal investigator of the project and an associate professor in the College of Information Sciences and Technology and an affiliate of the Institute for CyberScience at Penn State.

“This result is quite interesting as we successfully demonstrated that machine-generated clickbait training data can be fed back into the training pipeline to train a wide variety of machine learning models to have improved performance,” says Lee.

“This is the step toward addressing the fundamental bottleneck of supervised machine learning that requires a large amount of high-quality training data.”

According to Thai Le, a doctoral student in the College of Information Sciences and Technology, one of the challenges confronting the development of clickbait detection is the lack of labeled data. Just like people need teachers and study guides to help them learn, AI models need data that are labeled to help them learn to make the correct connections and associations.

“One of the things we realized when we started this project is that we don’t have many positive data points,” says Le. “In order to identify clickbait, we need to have humans label that training data. There is a need to increase the amount of positive data points so that, later on, we can train better models.”

Hunting for clickbait

While finding clickbait on the internet can be easy, its many variations add another layer of difficulty, according to S. Shyam Sundar, professor of media effects and codirector of the Media Effects Research Laboratory.

“There are clickbaits that are lists, or listicles; there are clickbaits that are phrased as questions; there are ones that start with who-what-where-when; and all kinds of other variations of clickbait that we have identified in our research over the years,” says Sundar. “So, finding sufficient samples of all these types of clickbait is a challenge. Even though we all moan about the number of clickbaits around, when you get around to obtaining them and labeling them, there aren’t many of those datasets.”

According to the researchers, the study reveals differences in how people and machines approach the creation of headlines. Compared with machine-generated clickbait, headlines written by people tended to contain more determiners—words such as “which” and “that.”

Training also seemed to prompt differences in clickbait creation. For example, trained writers, such as journalists, tended to use longer words and more pronouns than other participants, and also tended to start their headlines with numbers.
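These are the kinds of surface cues a detector can measure directly. The sketch below shows one rough way to count them; the small determiner and pronoun word lists are approximations for illustration, not the feature set the researchers used.

```python
# Rough sketch of stylistic headline features like those the study discusses:
# determiners, pronouns, average word length, and leading numbers.
# The word lists below are small approximations, not the researchers' features.
DETERMINERS = {"the", "a", "an", "this", "that", "these", "those", "which"}
PRONOUNS = {"you", "your", "we", "they", "he", "she", "it", "i", "them", "us"}

def headline_features(headline: str) -> dict:
    words = headline.lower().split()
    return {
        "determiners": sum(w in DETERMINERS for w in words),
        "pronouns": sum(w in PRONOUNS for w in words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "starts_with_number": headline.strip()[:1].isdigit(),
    }

print(headline_features("7 things that you never knew about clickbait"))
# {'determiners': 1, 'pronouns': 1, 'avg_word_length': 4.625, 'starts_with_number': True}
```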

The researchers plan to use these findings to guide their investigations into a more robust fake-news detection system, among other applications, according to Sundar.

“For us, clickbait is just one of many elements that make up fake news, but this research is a useful preparatory step to make sure we have a good clickbait detection system set up,” says Sundar.

To find human clickbait writers for the study, the researchers recruited 125 journalism students and 85 workers from Amazon Mechanical Turk, an online crowdsourcing site. The participants first read a definition of clickbait, then read short articles of about 500 words and wrote a clickbait headline for each one.

The machine-generated clickbait headlines came from a Variational Autoencoder (VAE), a generative model that relies on probabilities to find patterns in data.
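The article doesn't describe the model's architecture, so the sketch below is only a rough outline of what a sequence VAE for headlines can look like: an encoder compresses a headline into a probability distribution over a latent code, and a decoder reconstructs (or, at generation time, samples) headlines from that code. The LSTM layers, dimensions, and loss weighting here are assumptions for illustration, not details from the paper.

```python
# Bare-bones sketch of a headline VAE in PyTorch; architecture details
# are assumed for illustration and are not taken from the paper.
import torch
import torch.nn as nn

class HeadlineVAE(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embed(tokens)                      # (batch, seq, embed)
        _, (h, _) = self.encoder(emb)                 # final encoder hidden state
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        h0 = self.latent_to_hidden(z).unsqueeze(0)    # seed the decoder with the latent code
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(emb, (h0, c0))      # teacher-forced reconstruction
        return self.out(dec_out), mu, logvar

def vae_loss(logits, targets, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon = nn.functional.cross_entropy(logits.transpose(1, 2), targets)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

At generation time, sampling a latent vector from the prior and decoding it token by token yields new, synthetic clickbait headlines of the kind used to augment the training data.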

The researchers tested their algorithm against top-performing systems from Clickbait Challenge 2017, an online clickbait detection competition.

Additional researchers from Penn State and Arizona State University contributed to the work. The National Science Foundation, Oak Ridge Associated Universities, and the Office of Naval Research supported this work.

