Cybersecurity

Digital harm is on the rise – here's how we can give victims a pathway to justice

New forms of digital harm require new forms of redress. Image: REUTERS/Kacper Pempel

Sheila Warren
Chief Executive Officer, Crypto Council for Innovation
Kirstine Stewart
Founder, Media Mughals Inc.
Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
  • Disinformation in the digital sphere is no longer just a political phenomenon.
  • As more people – especially women – face new forms of digital harm and abuse, we need new forms of redress for victims.
  • The World Economic Forum is exploring cohesive, systemic mechanisms that consider the rights, responsibilities and duties of various actors in the data ecosystem.

From doing business to staying informed of the news, we rely on access to information 24/7. But what happens when that information is incorrect – or worse, targets us in an unjust manner with real-world consequences?

We often talk about disinformation as a political phenomenon, but it is becoming increasingly direct. Malicious actors who weaponize hate, distrust and political division through disinformation have colonized our digital spaces to assert their own societal values, creating new forms of harm. Whether this harm stems from deepfake videos, defamatory content or inaccurate information spread by administrative error, it is becoming increasingly difficult to remedy, since at present there are limited ways to seek justice in a globally digitized world.

There are many reasons why justice is hard to come by – technical architecture, confusion over jurisdictions and market interests, to name a few. But if you need to seek recourse for a digital harm, do you know where to start? When a crime takes place, one would normally report it to law enforcement. However, given how opaque responsibility is for any given data service – let alone for the creation of defamatory content online – does law enforcement really have the capability to uncover the perpetrator? What if you experience the harm in a different jurisdiction from your home country? Would the same rules apply? Fixing the problem, if that is even possible, is a daunting and often expensive process, let alone seeking redress.

No longer just a political phenomenon

It’s not surprising that disinformation and misinformation have shifted beyond the political. The infodemic surrounding COVID vaccinations is a case in point.

Compounding this issue are limited access to vaccines and structural barriers, which also depress vaccination rates in communities of color. According to the Centers for Disease Control and Prevention, across 43 U.S. states, the percentage of White people who had received at least one COVID-19 vaccine dose (38%) was 1.6 times as high as the rate for Black people (24%), and 1.5 times as high as the rate for Hispanic people (25%), as of April 26, 2021. Why is this the case?

The default narrative is that Black people, and communities of color more widely, are more vaccine hesitant. Such narratives cite the Tuskegee Syphilis Study, which began in the 1930s and during which healthcare practitioners withheld medical treatment from Black men with syphilis, as evidence of an enduring mistrust that might suppress COVID vaccination rates in the Black community.

Despite the low numbers of Black and Hispanic people getting vaccinated, however, an April ABC News/Washington Post poll found that intent to get vaccinated had actually risen across communities of color since January, with the steepest rise among Hispanic respondents (+16 points), followed by Black respondents (+11 points) and White respondents (+5 points). Hispanic respondents (81%) were the most likely to say they were already vaccinated or inclined to get vaccinated, followed by Black (75%) and White (72%) respondents.

A new report by the Center for Countering Digital Hate identified 12 key individuals – the “disinformation dozen” – whose accounts across all major social media platforms exploit the default narrative that lower vaccination rates in communities of color are due to vaccine hesitancy in order to spread conspiracies and lies about the safety of COVID vaccines. Individuals who circulate this kind of disinformation are, of course, assisted by powerful algorithms that prioritize posts more likely to go viral. In fact, one study found that false content is 70% more likely to be reshared than true content.

Growth in anti-vaccination social media accounts, 2018-2021
Vaccine disinformation and misinformation are on the rise. Image: BBC/Crowdtangle

The previous example, however, suggests that accurate information can be as harmful as inaccurate information, depending on the context, person or service responsible for its spread. The default vaccine hesitancy narrative is, to some extent, historically grounded. But it obscures the magnitude of unequal access to healthcare for communities of color. This one-sided narrative is thus being weaponized by malicious actors to cast blame and shame on these communities.

By the numbers

We need clear pathways to justice for individuals and groups being harmed online and via technology. With inadequately regulated data services and products, the laws and rules that have been established in the physical world are not reflected in the digital world. Consequently, the public has little to no visibility into who is behind intentionally harmful digital behaviors.

This has enabled familiar forms of physical abuse – bullying, gender-based violence, stalking, sexual assault, elder abuse, human trafficking – to multiply at uncontrollable scales. For example:

  • 2 in 5 women experience online sexual harassment.
  • 1 in 12 U.S. adults are victims of nonconsensual pornography (aka revenge porn).
  • 96% of deepfakes are nonconsensual pornography targeting women.
  • There has been an 86% increase in image-based abuse in Australia since COVID lockdown measures were introduced.
  • 59% of teens report being harassed or bullied online.
  • 48.7% of LGBTQ students experienced cyberbullying in a given year.
  • People aged 60 and above are the age group most targeted by cybercriminals.

Sources: EndTAB and UN Women

The current system rewards impunity, and bad actors continue to thrive in the lucrative slander-profiteering industry. One reputation management website generated $2 million a year in revenue; its business model is to aid the spread of harmful content about any given individual and then sell expensive services to help remove it. On reputation management website RepZe, post removals from “cheater websites” start at $1,000, and negative press article removals start at $3,500.

Without action, these technology-enabled abuses will only grow more severe. We can either act now – or wait until they inevitably target us and the people we care about.

The current policy landscape

Insufficient data protections, limited transparency across the data value chain, unchecked algorithms and scant resources for victims have produced only partial legislative responses at the federal level and disjointed state laws. Most developed countries have adopted broad legal protections for personal data, but the United States – home to some of the largest information technology companies in the world – continues to act haphazardly, with fractured, sector-specific regulations that fail to effectively protect victims. Individuals suffer for lack of clear pathways to restoration, while perpetrators of digital harms are free to act with impunity.

The last thing we need is a convoluted recourse system that creates more barriers for victims, particularly those without access to expensive legal representation. We need cohesive, systemic mechanisms that consider the rights, responsibilities and duties of the various actors in the data ecosystem. Failing to do so will perpetuate a revictimization pattern wherein the victim is forced to relive their trauma through the process of providing evidence and proving their victimhood at various junctures of seeking redress. The normalization of placing the burden of work on victims needs to stop. Solutions must hold the perpetrators accountable and quash victim-blaming narratives.

Left unaddressed, this problem will continue to grow more severe and at uncontrollable, unprecedented scales. We need to act now.

The World Economic Forum’s Global Future Council on Data Policy is leading a multistakeholder initiative aimed at exploring these issues, Pathways to Digital Justice, in collaboration with the Global Future Council on Media, Entertainment and Sport and the Global Future Council on AI for Humanity. To learn more, contact Evîn Cheikosman at evin.cheikosman@weforum.org.
