How can we prevent online harm without a common language for it? These 6 definitions will help make the internet safer

Digital safety requires a complex range of decisions. Image: Unsplash/Ales Nesetril

Douglas Broom
Senior Writer, Forum Agenda
This article is part of: Annual Meeting on Cybersecurity

  • Stopping online harm needs coordinated global action by regulators and tech firms.
  • But to work together, all the players need a common language to describe the problems.
  • Now the World Economic Forum’s Global Coalition for Digital Safety has produced a common foundational language for defining online harms.

How often have you experienced something harmful online? And, if you have, do you know how to describe what you’ve encountered in a way that would allow internet companies and regulators to take action to protect others?

People encounter online harms more frequently than you might think. Research by the UK regulator Ofcom found that 62% of adults and an alarming 81% of 12-15-year-old internet users had at least one harmful online experience per year.

Adults mostly reported fraud, scams and fake news, but more than a quarter of younger users had experienced online bullying and unwanted friend requests. Almost two-fifths of teenagers said they had seen offensive language and just under a quarter had been the victims of trolling.

Over 80% of 12-15-year-old internet users have at least one harmful online experience per year. Image: Ofcom

The essence of the online world is that it is global, which creates the potential for miscommunication and misunderstanding between people in different nations and cultures. What’s needed are clear definitions of what constitutes harmful content.

The European Union’s Digital Services Act (DSA), which comes into full effect in February 2024, goes some of the way with its stated aim of creating a safer digital space where the fundamental rights of users are protected.

But, globally, the absence of agreed definitions makes it hard to take multi-agency and cross-border action. That is why the World Economic Forum’s Global Coalition for Digital Safety has produced a Typology of Online Harms, which defines harmful content in six distinct categories.

“The DSA stands out as one of the recent regulations worldwide that aim to shape a safer digital ecosystem,” said Agustina Callegari, the Coalition’s Project Lead. “By holding platforms accountable and promoting proactive measures, it sets the standard for online safety.

“The Global Coalition for Digital Safety has recently launched the high-level framework for understanding and assessing digital safety risks that can help stakeholders navigate new regulatory requirements.”

The “common foundational language” that the Coalition has defined recognizes that “while the internet is global in nature, harms can be highly local or context-specific: unique risks may arise in different countries or regions or in different communities.

“Within this context, it is important to acknowledge that digital safety requires a complex range of deliberations, balancing legal, policy, ethical, social and technological considerations … decisions must be rooted in international human rights frameworks.”

Here are the Typology’s six categories and summaries of the definitions that sit within them:

1. Threats to personal and community safety

Child sexual abuse material

The Typology suggests this definition is clearer and less open to confusion than the term ‘child pornography’, which is widely used in many countries’ laws.

Child sexual exploitation material

Defined as content that sexualizes children, whether or not it involves actual sexual abuse.

Pro-terror material

Content that advocates, promotes or encourages terrorist acts or provides instruction on how to carry them out. Praising terrorism also falls within this definition where it might have the effect of leading a person to engage in a terrorist act.

Content that praises, promotes, glorifies or supports extremist organizations or individuals

Content intended to recruit people to violent extremist or criminal organizations including the use of symbols, logos, flags, slogans, uniforms, salutes and music.

Violent graphic content

Content that promotes, provides instruction in or depicts acts such as murder, torture, rape and kidnapping. The Typology qualifies this definition by exempting material that documents human rights abuses or condemns such acts.

Content that incites, promotes or facilitates violence

Includes content that contains direct and indirect threats of violence and intimidation.

Content that promotes, incites or instructs in dangerous physical behaviour

Activities which, if conducted in a non-professional context, may lead to serious injury or death for the user or members of the public.

2. Harm to health and well-being

Material that promotes suicide, self-harm and disordered eating – includes anything that encourages or depicts self-harm or suicide, or that coaches people to engage in disordered eating or unhealthy body-measurement challenges.

Developmentally inappropriate content – includes giving children access to pornography and violent material.


3. Hate and discrimination

Hate speech

Any kind of communication or behaviour that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of their inherent protected characteristics, such as religion, ethnicity, nationality, race, colour, ancestry or gender.

Algorithmic discrimination

A restriction that results in users being excluded from services such as lending, housing, insurance, education or employment opportunities. Algorithms can even be tailored to deny people basic necessities such as food and water. But the definition recognizes that certain practices, such as age restrictions, may be appropriate as protective measures.

4. Violation of dignity

Online bullying and harassment

The use of technology to bully someone, hurting them socially, emotionally, psychologically or even physically. This can include abusive texts and emails; hurtful messages, images or videos; spreading damaging gossip; or creating fake accounts to trick or humiliate someone.

Sexual extortion

Also called “sextortion”: blackmailing a person with self-generated images of them in order to extort sexual favours, money or other benefits, under threat of sharing the material more widely.

5. Invasion of privacy

Doxxing

The intentional online exposure of an individual’s identity, personal details or sensitive information without their consent and with the intention of placing them at risk of harm.

Image-based abuse

Sharing, or threatening to share, an intimate image or video without the consent of the person shown. An “intimate image/video” is one that shows nudity, sexual poses or private activity, but can also include showing someone without the religious or cultural clothing they would normally wear in public.

6. Deception and manipulation

Disinformation and misinformation

The Typology distinguishes between misinformation – spreading incorrect facts – and disinformation, which it defines as the deliberate spreading of false information designed to mislead others. The definition includes gendered disinformation, which specifically targets women political leaders, journalists and other public figures to perpetuate stereotypes and misogyny.

Deceptive synthetic media

Content that has been generated or manipulated via algorithmic processes (such as artificial intelligence or machine learning) to appear as though it were based on reality.

Impersonation

Posing as an existing person, group or organization in a confusing or deceptive manner.

Scams

Dishonest schemes that seek to manipulate and take advantage of people to gain benefits such as money or access to personal details.

Phishing

The sending of fraudulent messages, pretending to be from organizations or people the receiver trusts, to try to steal details such as online banking logins, credit card details and passwords.

Catfishing

The use of social media to create a false identity, usually to defraud or scam someone. People who catfish often make up fake backgrounds, jobs or friends to appear as another person. Catfishing is sometimes used to trick someone into an online romance before asking for money, gifts or nude images.
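
To see how such a common language might be put to work, here is a minimal sketch, assuming a Python-based reporting tool: it encodes the six categories as a shared enumeration so that a harm reported on one platform or in one jurisdiction carries the same label on another. The names used (HarmCategory, HARM_TYPES, categorize) are hypothetical illustrations, not part of the Forum's publication.

```python
# A minimal sketch of how a platform or regulator might encode the
# Typology's shared vocabulary in software. All names here (HarmCategory,
# HARM_TYPES, categorize) are illustrative assumptions, not part of the
# Forum's publication.
from enum import Enum


class HarmCategory(Enum):
    """The Typology's six top-level categories of online harm."""
    THREATS_TO_SAFETY = "Threats to personal and community safety"
    HEALTH_AND_WELLBEING = "Harm to health and well-being"
    HATE_AND_DISCRIMINATION = "Hate and discrimination"
    VIOLATION_OF_DIGNITY = "Violation of dignity"
    INVASION_OF_PRIVACY = "Invasion of privacy"
    DECEPTION_AND_MANIPULATION = "Deception and manipulation"


# A partial mapping from specific harm types summarized in this article
# to their top-level category, so that a report filed anywhere carries
# the same label everywhere.
HARM_TYPES: dict[str, HarmCategory] = {
    "child sexual abuse material": HarmCategory.THREATS_TO_SAFETY,
    "pro-terror material": HarmCategory.THREATS_TO_SAFETY,
    "hate speech": HarmCategory.HATE_AND_DISCRIMINATION,
    "algorithmic discrimination": HarmCategory.HATE_AND_DISCRIMINATION,
    "online bullying and harassment": HarmCategory.VIOLATION_OF_DIGNITY,
    "sexual extortion": HarmCategory.VIOLATION_OF_DIGNITY,
    "doxxing": HarmCategory.INVASION_OF_PRIVACY,
    "image-based abuse": HarmCategory.INVASION_OF_PRIVACY,
    "disinformation": HarmCategory.DECEPTION_AND_MANIPULATION,
    "phishing": HarmCategory.DECEPTION_AND_MANIPULATION,
    "catfishing": HarmCategory.DECEPTION_AND_MANIPULATION,
}


def categorize(harm_type: str) -> HarmCategory | None:
    """Return the Typology category for a reported harm type, if known."""
    return HARM_TYPES.get(harm_type.strip().lower())


if __name__ == "__main__":
    report = "Phishing"
    category = categorize(report)
    label = category.value if category else "uncategorized"
    print(f"'{report}' falls under: {label}")
```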


