This is why you need to know more about digital trust and safety
A range of tech companies have come together to create the Digital Trust & Safety Partnership. Image: REUTERS/Lucy Nicholson
- As crucial as trust and safety are to our experience on digital platforms, this work is largely unknown outside a narrow network of experts.
- The teams, processes, and technologies that companies use to address harms receive relatively little public attention.
- If we document and share the what, how and why of the trust and safety practices companies adopt, we can make the internet a safer place.
Everyone agrees we need a safer internet, but few agree on what that means. Tech companies face the competing pressures of removing or reducing the spread of harmful content without suppressing online innovation and expression.
Even a cursory look at the many examples of content moderation dilemmas shows there is no single solution to digital trust and safety. While large technology companies are currently in the global spotlight, no digital product or service is immune from harmful content or conduct. As the content moderation scholar Evelyn Douek says: “If you’re going to have users generating content, you’re going to have users generating harmful content.”
Yet that’s no reason for complacency; there are other means of improving the quality of our online experience. In other areas, from risk management to cybersecurity, best practices and assessments have been crucial to maturing and organizing fields within the tech industry and boosting safety for consumers. We must do the same for trust and safety.
This is why a wide range of tech companies have come together to create the organization I lead, the Digital Trust & Safety Partnership (DTSP). Formed to pinpoint best practices for trust and safety, the partnership enables companies to address content and conduct-related risks across an array of services and business models.
This approach has been a long time coming. Many companies have built trust and safety teams to address harmful online content and conduct. These professionals design, build and oversee both digital tools and human-led processes to make tough decisions about what is allowed on their products and services.
As crucial as trust and safety are to our experience on digital platforms, this work is largely unknown outside a narrow network of experts. The teams, processes and technologies that companies use to address harms receive relatively little public attention.
The advent of organizations like DTSP and the Trust & Safety Professional Association, which bring together the people working in this field, shows that companies are pulling back the curtain on this previously obscure part of their business.
Initiatives such as the World Economic Forum’s Global Coalition for Digital Safety serve as a crucial mechanism for advancing public-private cooperation on tackling harmful activity online by involving experts from across governments, civil society, academia and business.
Together, these organizations are working to outline the underlying principles and responsibilities for governing digital safety between the public and private sectors, and to develop new measures and international standards that promote a safe and trusted online environment for all. But what does elevating digital trust and safety entail?
1. Focus on the process to be ready for tomorrow’s content crisis today
Before the rise of ISIS, only a small number of experts were focused on terrorist use of the internet. The same was true of medical misinformation before COVID-19 and of election interference before the 2016 US and other elections. Not every company can dedicate specialists to these issues, but it is possible for companies of all sizes to develop processes robust enough to address new and evolving challenges.
This involves having processes to anticipate risks, adopting appropriate and understandable policies to address them, enforcing those policies, adjusting and improving in response to developing challenges, and being transparent.
Getting ahead of the curve by developing processes and controls that can respond to the myriad ways a product could be misused to cause harm will help create more inclusive, safe and trustworthy online communities.
2. Shift away from one-size-fits-all approaches
The digital economy is diverse, encompassing some of the world’s largest social media platforms and small startups that may become the next tech giants. We need trust and safety practices that can accommodate companies of different sizes with their own values and products and services with safety risks of varying severity and likelihood.
As DTSP companies conduct internal reviews of their practices, we have published our approach to tailoring these assessments to each company's size and risk profile. Put simply, companies should consider their size and staffing, as well as their user volume and the risks raised by their products and services, to determine the right level of scrutiny.
3. To achieve trust, verify
There is no substitute for independent, objective and measurable assessments. Industry must take the lead because trust and safety practitioners inside companies confront these challenges daily and have visibility into areas that others lack. At the same time, the public and policy-makers would be right to be sceptical of a company-only process; hence the need for external assessors, which is exactly what DTSP is working toward.
The latest Freedom on the Net report states that at least 48 countries have pursued new rules for tech companies in the past year, half of which look at how platforms treat content.
The reality is that this wave of regulations will prompt a generational shift in how content risks are handled. This could take the form of a cacophony of conflicting requirements that inadequately address real risks while giving rise to unintended consequences for human rights and economic opportunities alike.
The alternative is a world in which a global architecture of industry standards informs and aligns with national-level regulations. This approach would be backed by an objective accountability mechanism developed by practitioners in close consultation with governments and experts from academia and civil society.
Just as cybersecurity threats constantly change, so too will harmful content and conduct, driven by human behaviour. Despite this, when companies document and share much more of the what, how and why of their adopted practices, the result will be a safer and more trustworthy internet for consumers.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.