
Why so much harmful content has proliferated online - and what we can do about it

How do we start to clean up the internet?


Farah Lalani
Global Vice President, Trust and Safety Policy, Teleperformance
Cathy Li
Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum
This article is part of: World Economic Forum Annual Meeting
  • The amount of harmful content online is increasing rapidly.
  • Obfuscation and lack of a common approach are barriers to progress.
  • A new initiative aims to foster collaboration across the media sector.

The internet has made our lives easier in many ways. We can purchase items online and have them delivered almost immediately. We can find people who like the same rare dog breed as we do, and share an endless number of photos with them on Instagram. We can react to content – be it funny memes or breaking news – in real time.

But the same frictionless experience most of us enjoy for education, entertainment or connection with others has also been leveraged by those looking to do harm - and the internet does not discriminate in the speed, reach, access and efficiency it provides to all users.

Content moves freely and abundantly online. Every minute, 500 hours of video are posted to YouTube and 243,000 photos are uploaded to Facebook. Unfortunately, this proliferation extends to harmful content. In the past year, the number of reports of child exploitation images circulating online has doubled to 45 million. On Facebook alone, 11.6 million pieces of content involving child nudity and the sexual exploitation of children were removed in Q3 of 2019, a substantial increase over the previous quarter. Harassment and bullying, terrorist propaganda and the use of fake accounts to spam or defraud are also spreading across many sites.


It is hard to determine how much of the increase in harmful content reflects greater circulation of this material versus improvements in detecting it and enforcing action against it. Regardless, spaces online are being used by predators and other bad actors to accelerate illegal and harmful activity in an unprecedented way. Many have argued that this type of activity has always existed, and that the open web is simply uncovering it. However, digital disruption, which has created a frictionless user experience, and the shift toward advertising-based business models built on maximizing engagement have made it quicker and easier for all types of content to reach a massive scale. But with so much technology and knowledge at our fingertips, why haven’t we been more successful in ‘cleaning up’ spaces online?

One reason is that the problem itself has been obfuscated. Some tech executives juxtapose freedom of speech with a censored internet in a way that leads people to take an absolutist stance on the topic. Accepting the argument that noxious content online must be endured as a test of our willingness to uphold free speech means that those responsible can avoid taking action and continue with business as usual. As Berkeley law professor John A. Powell put it in the New York Times: “We need to protect the rights of speakers, but what about the rights of everyone else?”


While private companies are not bound by the First Amendment, most of us still agree on the importance of upholding free expression in public digital spheres. But this is where the conversation should begin, not end. To advance this important dialogue, we also need to recognize that free speech does not entitle anyone to infringe on another person's human rights. As just one example of the real-world implications of digital harm, a report by the US Department of Justice identified sextortion as the fastest-growing threat to children, and cited an FBI study which found that more than a quarter of sextortion cases led to suicide or attempted suicide. The way the problem has been framed so far presents a misleading and dangerous false choice: this is not about threatening free speech, but about upholding all rights with appropriate safeguards so that people are not subjected to harm in the name of free speech. That is what will ensure everyone can feel safe and have a voice in the long run.

The other major difficulty in addressing this problem is the lack of a common approach, terminology and understanding of the trade-offs involved in tackling harmful content online. Terms like ‘hate speech’ and ‘fake news’ lump together content problems with stark differences. The responsibilities of the public and private sectors, the expected timeline for taking action, and the risk posed to the public all vary substantially by type of content. Each platform has its own categories and transparency metrics; each consumer brand that advertises on a platform has its own risk settings governing which content its products may appear alongside. While consistent terminology and reporting across the media ecosystem will be challenging to achieve, areas of greater consensus, such as child exploitation and extremist content, can be starting points from which to tackle harmful content more collaboratively.

The problem at the user level is also complex. Given that the vast majority of adults are concerned about how companies use the data collected about them, most would likely prefer services with greater privacy and encryption. However, estimates suggest that the number of child sexual abuse reports made through the CyberTipline would halve under end-to-end encryption. This trade-off between privacy and the detection of harmful content has not been made explicit to consumers. In any case, users lack the spending power to influence decisions on platforms whose revenue is driven mostly by advertisers.


The current media business models are not inherently bad from a market perspective. When all goes well, everyone wins: consumers get a free service, brand advertisers get reach, platforms earn revenue and content creators get funded. However, when this engagement is exploited by those looking to create or share harmful content, the risk to society far outweighs the market-efficiency benefits. An initiative called the Global Alliance for Responsible Media has been created to drive uncommon collaboration across the media industry, recognizing that brands’ advertising dollars are the primary funders of content creators on platforms – and that with this comes both the responsibility and the power to help drive change.

Whether it is the horrific terror attacks livestreamed in Christchurch or the depraved murder of a student uploaded online a decade ago, the implications of harmful content for society are significant. As our digital and physical worlds continue to collide, our online safety - based on the content we create, see and share - will become our personal safety, full stop.
