
Closing the AI equity gap: Trust and safety for sustainable development


Illustrative: Distributed register technology global communications, background made of colour lines, circles and particles. Image: Getty Images/iStockphoto

Keyzom Ngodup Massally
Head of Digital Programmes, Chief Digital Office, United Nations Development Programme (UNDP)
Jennifer Louie
AI Trust and Safety Expert, United Nations Development Programme (UNDP)
  • Artificial intelligence can benefit humanity and sustainable development, but empowering local AI ecosystems must be prioritized to build robust safety measures.
  • Trust and safety rules that classify certain regions or user profiles as posing a ‘risk’ can limit their participation in the digital economy.
  • Achieving and scaling a new model for AI trust and safety will require public and private partners to collaborate and adapt to the specific geopolitical, environmental and cultural nuances within local AI ecosystems.

Closing the artificial intelligence (AI) equity gap includes ensuring AI models and solutions are responsive to local contexts in developing countries. There is an urgent need to reimagine the approach to trust and safety, shifting from reactive, crisis-focused responses towards proactive, anticipatory and adaptive measures centred on people’s safety and inclusion.

AI is becoming an increasingly powerful technology that can benefit humanity and sustainable development, but empowering local AI ecosystems across all countries needs to be prioritized to build the tools and coalitions necessary to ensure robust safety measures. Here, we make a case for reimagining trust and safety, a critical building block for closing the AI equity gap.


How is trust and safety described currently?

"Trust and safety" is not a term that developing countries are always familiar with, yet people are often impacted by it in their everyday interactions. Traditionally, trust and safety refers to the rules and policies that private sector companies put in place to manage and mitigate risks that affect the use of their digital products and services, as well as the operational enforcement systems that determine reactive or proactive restrictions. The decisions that inform these trust and safety practices carry implications for what users can access online, as well as what they can say or do.

While some of these decisions are matters of legal compliance or safety, in other instances, they exist as independent policies that companies create to align with user experience and business interests. Companies tend to independently establish their trust and safety rules with limited reporting, transparency and local collaborations in developing countries.

The unintended risk of exclusion

Trust and safety decisions are designed into the fabric of users’ collective digital experiences. Although these filters are often intended as positive and protective measures, without holistic, rights-based considerations, they can become inhibitors that limit people’s ability to fully benefit from digital tools and services.

For instance, trust and safety rules that classify certain regions or user profiles as posing a ‘risk’ can limit their participation in the digital economy, leading to unintended consequences of exclusion. Here are a few scenarios users might encounter:

A ‘hold’ placed on funds

When making or receiving online payments, certain regions or transaction types may be classified as ‘high risk.’ A hold might be placed on the account to safeguard the money, temporarily restricting access to it. Prolonged waiting periods, however, can significantly disrupt daily life, holding up payments to employees or for essentials like rent.

Delays to Google Maps listings

In certain regions, businesses attempting to list their company on Google Maps may experience an extensive approval wait time. This is because Google’s trust and safety team cannot easily authenticate new addresses in some countries or verify business documents written in some local languages. Not having a business show up on Google Search or Maps can make it harder to attract new customers and receive deliveries.

Blocked from online marketplaces

Vendors of products or services on online marketplaces, such as Airbnb or Alibaba, may be flagged by automated keyword filters, resulting in their listings being blocked. These vendors are often only permitted to appeal the decision in English or to use less-than-reliable translation software, limiting their ability to contest or resolve the decision when it is a case of over-enforcement. The inability to do business online can result in a significant loss of income.
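To make the over-enforcement pattern in these scenarios concrete, here is a minimal, hypothetical sketch of how coarse, rule-based checks can behave: a blanket ‘high-risk’ region label places a hold on funds, a crude keyword list blocks a listing, and the appeal path exists only in English. The region codes, keywords, thresholds and function names are illustrative assumptions, not any real platform’s rules.

```python
# Hypothetical sketch of coarse, rule-based trust and safety checks.
# All region codes, keywords and names are invented for illustration.

HIGH_RISK_REGIONS = {"XX", "YY"}                    # blanket region classification
FLAGGED_KEYWORDS = {"wire transfer", "gift card"}   # crude keyword list
APPEAL_LANGUAGES = {"en"}                           # appeals accepted in English only


def review_payment(region: str, amount: float) -> str:
    """Place a hold on any payment from a 'high-risk' region.

    The amount and the sender's actual history are ignored: the rule
    keys only on region, which is what produces blanket exclusion.
    """
    return "HOLD" if region in HIGH_RISK_REGIONS else "APPROVE"


def review_listing(description: str) -> str:
    """Block a marketplace listing if any flagged keyword appears,
    regardless of local language or context."""
    text = description.lower()
    return "BLOCKED" if any(k in text for k in FLAGGED_KEYWORDS) else "PUBLISHED"


def can_appeal(user_language: str) -> bool:
    """The appeal path only exists for supported languages."""
    return user_language in APPEAL_LANGUAGES


if __name__ == "__main__":
    print(review_payment(region="XX", amount=120.0))      # HOLD
    print(review_listing("Handmade gift card holders"))   # BLOCKED
    print(can_appeal(user_language="sw"))                 # False: no local-language appeal
```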

While the rationale behind some of these trust and safety practices may be well-intentioned, there are cases where they have led to negative outcomes. Companies ought to consider contextual, localized redressal mechanisms as an extension of trust and safety implementation. This could mean, for example, using local data to assess and optimize trust and safety rules so that holds on funds are shorter and smaller, and integrating local languages to ensure an effective feedback loop.
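A minimal sketch of what such a localized redressal loop might look like, assuming a platform were willing to tune hold durations with local review data and route appeals to local-language reviewers. The thresholds, figures and function names are illustrative only, not any company’s actual policy.

```python
# Hypothetical sketch of a localized redressal loop: hold durations tuned
# with local data rather than one global default, appeals routed to
# local-language reviewers. All figures and names are illustrative.

DEFAULT_HOLD_DAYS = 21


def tuned_hold_days(local_false_positive_rate: float) -> int:
    """Shorten the hold when local review data shows most holds in a
    market are eventually released without incident (false positives)."""
    if local_false_positive_rate > 0.9:
        return 3
    if local_false_positive_rate > 0.5:
        return 7
    return DEFAULT_HOLD_DAYS


def route_appeal(text: str, language: str, supported: set[str]) -> str:
    """Send appeals to reviewers working in the user's language instead
    of forcing an English-only channel."""
    return "local_review_queue" if language in supported else "translation_queue"


if __name__ == "__main__":
    # In a market where 95% of holds are later released, the hold drops
    # from 21 days to 3 under this illustrative rule.
    print(tuned_hold_days(local_false_positive_rate=0.95))              # 3
    print(route_appeal("Ombi la rufaa...", "sw", {"sw", "am", "en"}))   # local_review_queue
```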

Nevertheless, the protection and ability to challenge these decisions are not distributed equally. Companies tend to give users in their home markets and major economies greater power and agency in appealing decisions and shaping tech platform policies.

Though companies in developing countries are experiencing accelerated tech adoption, they often face significant barriers to receiving locally applicable interventions that prevent digital harm. This disparity is particularly evident when trust and safety taxonomies and risk frameworks are designed by companies in developed countries without fully accounting for local contexts, cultures and challenges.



AI and inequity challenges

AI requires massive amounts of data to train its models. Access to this data depends heavily on a population’s access to technology – 5G, cloud storage and modern devices – and early access to these technologies accelerates content creation. This content then passes through companies’ trust and safety filters, which can remove content according to their rules and interests, before it is scraped or fed into AI training.

The responses from recent large language models (LLMs) and other generative AI have been shaped by early adopters in technologically advanced regions, whose feedback determines what constitutes flawed, unacceptable, or optimal outputs.

Early access to AI systems tends to be afforded to researchers, developers and users in technologically advanced countries. Their feedback – including what they flag as harmful, inaccurate, or inappropriate – shapes how many leading AI companies develop their safety protocols and content filters. This creates a cascading effect: later AI adopters, often in less technically developed countries, interact with versions that have been pre-conditioned by the trust and safety preferences of early adopters in developed nations. As a result, they see content filtered through others' biases and concerns.

Furthermore, unlike the trust and safety protocols for user-generated content, AI offers no appeals process. Many online platforms allow users to contest actions like blocking, suspension or limited access, providing vital feedback when systems are over-enforcing.

With AI, however, this crucial feedback loop is missing. People can report flawed or harmful AI outputs, but these systems are not designed to distinguish between fair and unfair reports – they simply suppress flagged content. Once content is hidden, even AI developers rarely encounter it again, making it nearly impossible to review or correct. This leads to over-enforcement, where content is permanently omitted and later adopters cannot review, contest or influence these decisions.
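The contrast between these two feedback loops can be sketched in a few lines of hypothetical Python: a platform moderation flow that records removals so they can be appealed, and an AI output filter that simply suppresses anything reported, leaving nothing to contest. The class and field names are assumptions for illustration, not a description of any actual system.

```python
# Hypothetical contrast between an appealable moderation flow and an
# AI output filter with no feedback loop. Names are illustrative only.

from dataclasses import dataclass, field


@dataclass
class PlatformModeration:
    """User-generated content: actions are recorded and can be contested."""
    appeal_queue: list = field(default_factory=list)

    def remove(self, post_id: str, reason: str) -> None:
        # The removal is logged, so an appeal can reverse it later.
        self.appeal_queue.append({"post_id": post_id, "reason": reason, "appealed": False})

    def appeal(self, post_id: str) -> bool:
        for record in self.appeal_queue:
            if record["post_id"] == post_id:
                record["appealed"] = True
                return True     # a reviewer will re-examine the decision
        return False


@dataclass
class AIOutputFilter:
    """Generative AI outputs: flagged responses are suppressed outright."""
    blocklist: set = field(default_factory=set)

    def report(self, prompt: str) -> None:
        # Any report, fair or unfair, adds the prompt to the blocklist.
        self.blocklist.add(prompt)

    def generate(self, prompt: str) -> str:
        if prompt in self.blocklist:
            return ""           # silently omitted; no record to contest
        return f"response to: {prompt}"


if __name__ == "__main__":
    platform = PlatformModeration()
    platform.remove("post-1", "keyword match")
    print(platform.appeal("post-1"))                                  # True: decision re-examined

    ai_filter = AIOutputFilter()
    ai_filter.report("local agricultural credit terms")
    print(ai_filter.generate("local agricultural credit terms"))      # "" – permanently omitted
```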

Creating more inclusive AI trust and safety models

The speed and adoption of AI today demand a new approach and expanded scope for trust and safety. Existing AI trust and safety models need to evolve to become more locally and culturally responsive to the different ways that AI harms can manifest and be grounded in the realities of different contexts.

AI is a critical enabler of sustainable development, but at the same time, trust and safety around AI use and adoption should not be overlooked. As the United Nations Development Programme (UNDP) advances its commitment to closing the global AI equity gap, a pivot to a more proactive re-envisioning of AI trust and safety that centres on equitable growth and is sensitive to local contexts is more important than ever.

Achieving and scaling a new model for AI trust and safety will require public and private partners to collaborate and adapt to the specific geopolitical, environmental and cultural nuances within local AI ecosystems.


Investing in re-imagining AI trust and safety models will contribute to an enabling environment where AI innovation can thrive, particularly in developing countries. Just as investment in traditional sectors requires a thorough risk assessment, AI development demands a comprehensive evaluation of safety, security and market risks within the operating ecosystem.

There is a critical need for new AI businesses in developing countries to receive locally relevant support for navigating AI trust and safety challenges, as these directly impact organizational viability and growth. Without this support, trust and safety measures can hinder, rather than help, success.

Trust and safety are foundational to developing healthy digital economies and achieving equitable AI futures, yet current models often exclude the voices of those most impacted by their limitations. Those best positioned to help design the next generation of trust and safety are the communities currently underserved by existing systems, particularly in regions where technical resources are scarce. Their experiences and insights are crucial for building frameworks that truly serve everyone.

The most innovative solutions to future AI trust and safety escalations should not, and likely will not, come from current institutional approaches. The most adept answers will come from diverse disciplines and local stakeholders, led by creative individuals intrinsically motivated by an ethics of care for their communities and informed by their proximity to and direct experiences with digital harms.

UNDP invites stakeholders, especially those from and working in developing countries, to engage in reimagining AI trust and safety. Together, we can create frameworks that protect and empower communities while enabling local innovation to thrive.

Alexander Hradecky, Project Manager, AI Hub for Sustainable Development, UNDP; Dwayne Carruthers, Digital Transformation Communications and Advocacy Manager, UNDP; and Romilly Golding, Communications Specialist, also contributed to this article.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
