
How Singapore is developing trustworthy AI

Creating trustworthy AI is crucial now that AI is being widely used. Image: Unsplash/Possessed Photography

Yeong Zee Kin
Deputy Commissioner, Personal Data Protection Commission, Singapore
Lee Wan Sie
Director, Development of Data-Driven Tech, Infocomm Media Development Authority (IMDA)
Tan Wen Rui
Manager, AI Governance, IMDA

  • Artificial intelligence (AI) now touches nearly every aspect of our lives, at home, at work and at play.
  • People are wary of how AI technologies use and apply the data they collect.
  • Singapore is developing an AI governance testing framework and toolkit that enables industries to demonstrate their deployment of responsible AI through objective technical tests and process checks.

Artificial intelligence (AI) is becoming ubiquitous. It underpins all kinds of functions, including critical ones, and shapes how we work, live and play. It is used, for example, in medical imaging to detect serious illnesses and, through facial recognition, to unlock smartphones.

There are instances, however, where an AI model does not perform as intended. When a model is not trained and tested against representative datasets, for example, its outputs can be biased against certain populations.
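To make this concrete, the short Python sketch below shows one simple way such a disparity can be surfaced: computing a model's accuracy separately for each demographic group in a held-out test set. The dataframe, column names and the 0.10 gap used as a warning sign are illustrative assumptions, not part of A.I. Verify or any specific toolkit.

```python
# A minimal, hypothetical sketch of a subgroup performance check.
# Column names ("label", "prediction", "age_band") are illustrative assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy of model predictions, broken down by a demographic attribute."""
    return df.groupby(group_col).apply(
        lambda g: accuracy_score(g["label"], g["prediction"])
    )

# df holds held-out test examples with columns: label, prediction, age_band.
# scores = accuracy_by_group(df, "age_band")
# A large gap between groups, e.g. scores.max() - scores.min() > 0.10,
# suggests the model was trained or tested on unrepresentative data.
```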

With greater maturity and more pervasive adoption of AI, society needs assurance that AI systems are fair, explainable and safe, and that the companies deploying them are accountable and transparent about how they use AI. So, how can companies demonstrate their implementation of responsible AI to their stakeholders in an objective and verifiable way?

Voluntary AI self-assessment

In Singapore, we believe that voluntary self-assessment is a start. The Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) have taken the first step by developing A.I. Verify, an AI governance testing framework and toolkit that enables industry to be more transparent about its deployment of AI through technical tests and process checks.

A.I. Verify has two components. First, the governance testing framework specifies the testable criteria and the corresponding processes required to carry out the tests. Second, the software toolkit conducts technical tests and records the results of process checks.
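As an illustration only, the sketch below shows how the outputs of these two components, quantitative results from technical tests and records of process checks, might be gathered into a single testing report. The class and field names are hypothetical and do not reflect A.I. Verify's actual schema or API.

```python
# Hypothetical report structure combining technical tests and process checks.
# All names and values below are invented for illustration.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TechnicalTest:
    principle: str      # e.g. "Fairness" or "Robustness"
    metric: str         # quantitative measure computed by the toolkit
    value: float
    threshold: float

@dataclass
class ProcessCheck:
    principle: str      # e.g. "Accountability"
    question: str       # governance practice being checked
    evidence: str       # reference to supporting documentation
    completed: bool

@dataclass
class TestingReport:
    system_name: str
    technical_tests: list = field(default_factory=list)
    process_checks: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Example usage (values are invented):
report = TestingReport("loan-approval-model")
report.technical_tests.append(
    TechnicalTest("Fairness", "demographic_parity_difference", 0.04, 0.10))
report.process_checks.append(
    ProcessCheck("Accountability", "Is there a named owner for the AI system?",
                 "governance-policy-v2.pdf", True))
print(report.to_json())
```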

A.I. Verify is available as a Minimum Viable Product (MVP). Companies can use it to demonstrate what their AI systems can do and what measures they have taken to mitigate the risks of those systems. The testing reports from A.I. Verify keep company stakeholders better informed of the capabilities of the AI systems they are interacting with. This process brings transparency and helps build trust in AI.

A.I. Verify does not set ethical standards. Instead, it validates companies’ claims about their AI systems against a set of internationally accepted AI governance principles around which countries have coalesced and on which Singapore’s AI governance initiatives also stand. These principles, organized according to people's concerns about AI systems, are shown in the diagram below. For the MVP, we have selected an initial set of eight principles (shown in green). Security and Data Governance (including privacy) are not included in the current pilot because their testing regimes are already well developed internationally.

Achieving trustworthy AI is a global imperative, but more work needs to be done in developing best practices, standards and methods of verifying trustworthy AI. Image: IMDA Singapore

Invitation to companies to pilot

With this MVP, Singapore invites companies to participate in the international pilot and hopes to achieve the following objectives:

  • To enhance the MVP to ensure it meets industry needs: We invite industry to provide feedback on the framework and the software toolkit so that A.I. Verify meets companies’ needs to build trust with their internal and external stakeholders.
  • To contribute to the development of international AI standards: Through industry participation in this pilot, Singapore aims to work with AI system owners and developers globally to collate industry best practices and build benchmarks. These, in turn, can be shared with international standards bodies as Singapore's contribution to international AI standards development.
  • To facilitate the interoperability of AI governance frameworks: IMDA is working with regulators and standards organizations to map A.I. Verify to established AI frameworks to build interoperability. This will help businesses that offer AI-enabled products and services in multiple markets to more efficiently meet the regulatory requirements of these markets.
  • To build an AI testing community together: Achieving trustworthy AI is a global imperative, but more work needs to be done in developing best practices, standards and methods of verifying trustworthy AI. Singapore hopes to continue engaging like-minded partners, such as industry, policymakers and civil society, to ensure the relevance of this initiative to individuals and organizations who are beneficiaries of AI.

As more AI system developers and owners use A.I. Verify, we can crowd-source and co-develop benchmarks for acceptable levels of adherence to AI governance principles for different sectors and use cases. We can also work with technology solution providers to identify gaps and development opportunities in current methods and technologies of testing AI models and devise better ways to verify trustworthy AI.

A key element of building trust is communicating effectively to all stakeholders about the behaviour of AI applications. Stakeholders include regulators, board members, senior management, business partners, auditors, customers and consumers. They all need different information, at varying levels of detail, to facilitate their decision-making. Hence, customised report templates addressing specific stakeholders’ information needs must be developed, as sketched below.
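As a rough sketch of that idea, the snippet below filters one set of test results into audience-specific views. The audiences, field names and detail levels are illustrative assumptions only, not a specification of A.I. Verify's report templates.

```python
# Hypothetical stakeholder-specific views over the same test results.
DETAIL_LEVELS = {
    "consumer":  ["principle", "plain_language_summary"],
    "executive": ["principle", "result", "risk_rating"],
    "auditor":   ["principle", "result", "metric", "value", "threshold", "evidence"],
}

def render_view(results: list[dict], audience: str) -> list[dict]:
    """Keep only the fields a given stakeholder group needs."""
    fields = DETAIL_LEVELS[audience]
    return [{k: r.get(k) for k in fields} for r in results]
```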

These goals cannot be achieved without the collective wisdom and efforts of the broader community. As part of community building, IMDA/PDPC will organize roundtables where industry can engage regulators for early policy thinking, as well as industry-specific workshops to develop consensus on industry benchmarks for trustworthy AI. These benchmarks can be shared with international standards bodies as the community’s contribution to the global discourse on standards building.

Conclusion

AI ethics principles may be universal, but often their interpretation and implementation are influenced by cultural and geographic variations. These variations can lead to fragmented AI governance frameworks, which, in turn, can raise barriers to implementing trustworthy AI for companies and hinder their capacity to provide AI-enabled products and services across borders.

A.I. Verify is Singapore’s first step towards identifying and defining an objective and verifiable way to validate the performance of AI systems. The international pilot will facilitate the collation and development of industry benchmarks for AI ethics principles. We look forward to working with like-minded players on this learning journey and to piloting A.I. Verify.
