How we banished bad bots from digital Davos
This year, custom-built software tracked suspicious patterns in real time to weed out digital propaganda. Image: Reuters/Kacper Pempel
The World Economic Forum publishes a lot of content, and never more so than during the Annual Meeting in Davos, the one week of the year when our reach multiplies substantially. At the same time, we become a magnet for all kinds of anti-globalization sentiment, misinformation campaigns and digital smears. Digital propaganda and bots are a real and significant part of that threat: an unpredictable, fast-moving risk with a major impact.
For Davos this January, we custom-built software to track malicious bots in real time.
The software uses a number of techniques to identify malicious bots, developed through studies of social-media botnets carried out since 2013.
An account that looks suspicious is analysed along several dimensions: how many followers does it have? How many accounts is it following? What are the username, the user's display name, first name and surname? What email address is attached (if it's public)? What can be gleaned from a small sample of posts?
Where it finds unusual patterns, the software flags the account as a possible malicious bot.
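A simple version of this multi-dimensional check can be sketched as a scoring function. The field names, thresholds and signals below are illustrative assumptions, not the Forum's actual rules:

```python
import re

def score_account(account: dict) -> int:
    """Return a suspicion score for a social-media account.

    `account` is assumed to be a dict with keys such as 'followers',
    'following', 'username' and 'posts' (a list of recent texts);
    the fields and thresholds are hypothetical, for illustration only.
    """
    score = 0

    # A skewed follower/following ratio is a common bot signal.
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 1000 and followers < following / 10:
        score += 1

    # Username made of a name plus a long run of digits, e.g. mike282811.
    if re.search(r"[A-Za-z]+\d{6,}$", account.get("username", "")):
        score += 1

    # A sample of posts dominated by near-duplicates suggests automation.
    posts = account.get("posts", [])
    if posts and len(set(posts)) <= len(posts) // 2:
        score += 1

    return score

bot = {"username": "mike282811", "followers": 3, "following": 5000,
      "posts": ["Buy now!", "Buy now!", "Buy now!", "Buy now!"]}
print(score_account(bot))  # a high score flags the account for human review
```

In practice each signal would be weighted and tuned against labelled data rather than counted equally, but the shape of the check is the same.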
For example, some botnets are recognizable by usernames that contain a name plus a clutch of numbers (six or more digits), e.g. mike282811. Alternatively, the software can compare the authors of the tweets with a database of malicious bots, populated using open-source intelligence techniques. The database comprises malicious bots divided by category (such as bots that share malware, bots that talk about politics or bots that share links to counterfeit goods).
Suspicious accounts are also compared with other social networks, because the creators of malicious botnets have often been shown to build the same fake profiles on different platforms.
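A minimal cross-platform match might look like the sketch below. The fields compared (handle, display name, avatar hash) and the matching rules are assumptions for illustration, not the Forum's actual criteria:

```python
def same_profile(a: dict, b: dict) -> bool:
    """Rough cross-platform match between two profile records.

    Treats profiles as the same if the handles are identical
    (case-insensitive), or if the display names match and the
    avatar image hashes agree. Fields are hypothetical.
    """
    if a["handle"].lower() == b["handle"].lower():
        return True
    return (a["display_name"].casefold() == b["display_name"].casefold()
            and a.get("avatar_hash") == b.get("avatar_hash"))

# The same invented bot reusing its handle on a second platform:
twitter = {"handle": "mike282811", "display_name": "Mike"}
other = {"handle": "Mike282811", "display_name": "Mike Smith"}
print(same_profile(twitter, other))  # True
```

Real systems would add fuzzier signals (bio text similarity, creation dates, shared links), but exact reuse of handles and avatars across platforms is itself a strong tell.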
A team of technology and communications experts does a risk assessment on each flagged message, then decides on appropriate action.
During the Annual Meeting, we detected hundreds of bots sharing propaganda and disinformation. In some cases, bots shared web addresses leading to malicious code that mined cryptocurrency on visitors' personal computers. In other cases, thanks to the databases of suspicious users, we identified accounts that had already been reported as bots by third parties.
Thanks to a spirit of collaboration between businesses, institutions and professionals with different specialities, we were able to protect people from all over the world who wanted to read about and join in the meeting in Davos in a transparent and constructive way.
It feels like a victory over misinformation: even in a world plagued by fake news, a global online event can still be pulled off without a major digital propaganda incident.