
Let’s be clear: Why transparency is key to unlocking AI’s full potential

Transparency and responsible frameworks are essential for building trust in AI. Image: REUTERS/Aly Song

Raj Sharma
Global Managing Partner, Growth and Innovation, EY
This article is part of: World Economic Forum Annual Meeting
  • Transparency and responsible frameworks are essential for building trust in artificial intelligence (AI), ensuring fair, safe and inclusive use to maximize its benefits.
  • Engaging diverse stakeholders and skills throughout AI design, development and deployment fosters collaboration, reduces bias and boosts organizational buy-in.
  • Transparent governance, proactive communication and algorithmic guardrails are critical to mitigating risks, promoting trust and unlocking AI’s potential to transform industries and lives.

Artificial intelligence (AI) is poised to transform industries, businesses and individuals’ lives in numerous ways. It will improve health, safety and education. Additionally, AI can empower us to address global challenges such as inequality, poverty and climate change.

Yet, despite AI’s potential as a significant force for good, fear and distrust remain widespread. Nearly half (49%) of US respondents to a YouGov survey said they were concerned about the technology, while 22% said they were scared.

Furthermore, some media commentators have pointed to a growing backlash against AI, which could limit organizations’ ability to deploy time-saving and sometimes life-saving AI systems that transform our world for the better.

So, how can leaders address people’s concerns to lead AI transformation with confidence?

Adopting a transparent approach

I strongly believe that instilling confidence in AI during this era of widespread uncertainty and mistrust requires honesty and transparency from every player involved in implementation – an approach that, sadly, has not always been applied.

It’s also essential to put people – clients, employees and other stakeholders – at the heart of organizational AI strategies.

From my own experiences of working with clients, I’ve seen that a transparent approach to AI implementation requires leaders to:

1. Develop a framework that supports the responsible use of AI

Frameworks help ensure that AI is used responsibly – that is, fairly, safely and inclusively, with appropriate accountability, transparency and consideration of its environmental impact. When stakeholders believe an organization is using AI responsibly, they are more likely to trust its AI models.

Organizations can adopt external AI guidelines, such as the NIST AI Risk Management Framework, or develop their own internal frameworks. EY, for example, has created a set of ethical and responsible AI principles, a framework used both internally and with clients.

2. Proactively engage with stakeholders from the outset

Keeping humans in the loop in AI systems development is essential for two reasons. Firstly, AI adoption should not be a top-down decision: stakeholders at every level must be involved. Secondly, people provide the essential monitoring and oversight that underpin transparency – they deliver the critical assurance that AI models are working as intended.

When planning an AI implementation, it is vital to ask stakeholders about their pain points and how AI tools could address them. Upfront communication about what the system is intended to achieve and how it can aid people in their work will support the development of a robust business case; it will also help allay fears that AI might take their jobs.
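
To make “humans in the loop” concrete, here is a minimal sketch in Python of one common pattern: the system acts on an AI output only when the model reports high confidence, and routes everything else to a human reviewer. The output structure, confidence threshold and review queue are assumptions made for illustration, not a description of any particular product.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop pattern. The ModelOutput shape,
# confidence threshold and review queue are assumptions made for
# this sketch, not features of any particular AI product.

@dataclass
class ModelOutput:
    answer: str
    confidence: float  # 0.0-1.0, as reported by a hypothetical model


def handle(output: ModelOutput, review_queue: list[ModelOutput],
           threshold: float = 0.8) -> str | None:
    """Act on the answer only when the model is confident enough;
    otherwise defer the item to a human reviewer."""
    if output.confidence >= threshold:
        return output.answer
    review_queue.append(output)  # a person reviews this item later
    return None


queue: list[ModelOutput] = []
print(handle(ModelOutput("Approve the claim", 0.95), queue))  # acted on automatically
print(handle(ModelOutput("Reject the claim", 0.55), queue))   # None: sent for review
print(len(queue))  # 1 item awaiting human oversight
```

The design point is that a person, not the algorithm, decides the ambiguous cases – exactly the kind of oversight that underpins transparency.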

3. Involve diverse skills in the design and build

AI tools will only be viewed as positive for society and organizational efficiency if diverse skills are involved in their design and build. A diverse development team is critical to mitigating the risk of bias – one of the greatest risks associated with AI systems.

Also, a diverse team can help the organization’s AI tools meet the needs and expectations of a broad range of stakeholders by bringing different experiences and perspectives to the table.

4. Get the ‘village’ to collaborate on implementation

It takes a village to build a new AI system – a village made up of technological experts and project management talent. The same principle applies to deploying the system once it has been built.

The village spans the whole organization – from data scientists and business process owners to HR, change management and legal – all working together to roll out the system and upskill the workforce in using it responsibly. Collaboration helps increase transparency around the new system and encourages high levels of employee buy-in.

5. Apply the principles of good governance

The board should help set the organization’s AI risk appetite and provide high-level scrutiny of its AI strategy. It should also ensure that AI is used and deployed responsibly and that all users of AI tools receive appropriate training.

Another good practice is to establish an approval chain for each AI use case, allowing for proper evaluation of the risks and opportunities and the appropriate application of guardrails.
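
As a rough illustration of what such an approval chain might look like, the sketch below models each AI use case with a risk tier that determines which sign-offs are required before deployment. The tiers and approver roles are invented for the example; a real organization would define them in its own governance framework.

```python
from dataclasses import dataclass, field

# Minimal sketch of a per-use-case approval chain. The risk tiers
# and required roles below are illustrative assumptions.

APPROVERS_BY_RISK = {
    "low": ["business_owner"],
    "medium": ["business_owner", "legal"],
    "high": ["business_owner", "legal", "risk_committee"],
}


@dataclass
class AIUseCase:
    name: str
    risk_tier: str                        # "low" | "medium" | "high"
    approvals: set[str] = field(default_factory=set)

    def approve(self, role: str) -> None:
        self.approvals.add(role)

    def cleared_for_deployment(self) -> bool:
        """Deployable only once every required role has signed off."""
        return set(APPROVERS_BY_RISK[self.risk_tier]) <= self.approvals


case = AIUseCase("invoice triage assistant", "medium")
case.approve("business_owner")
print(case.cleared_for_deployment())  # False: legal sign-off missing
case.approve("legal")
print(case.cleared_for_deployment())  # True
```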

6. Embed guardrails into AI models

Guardrails are integral to the responsible use of AI models. They can be regarded as algorithmic safeguards – a set of predefined filters, rules and tools designed to ensure that AI systems operate ethically and legally.

Robust testing of AI models before they are used can help highlight where risks and vulnerabilities lie so that guardrails can be implemented. Guardrails can help prevent hallucinations, for example, and minimize the risk of AI models producing toxic outputs.
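
One way to picture these algorithmic safeguards is as predefined filters wrapped around every model call. The sketch below applies illustrative input and output rules and falls back to a refusal when either is tripped; the patterns, fallback message and stand-in model are all assumptions made for the example.

```python
import re

# Illustrative rule-based guardrail: predefined filters applied to
# a model's input and output. The patterns and fallback message are
# assumptions for this sketch; real deployments combine many such
# checks with testing and monitoring.

BLOCKED_INPUT = [re.compile(p, re.I) for p in (r"\bpassword\b", r"\bssn\b")]
PII_OUTPUT = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g. US SSN format

FALLBACK = "Sorry, I can't help with that request."


def guarded_call(prompt: str, model_fn) -> str:
    """Run the model only if the prompt passes the input rules, and
    suppress any response that trips the output filter."""
    if any(rule.search(prompt) for rule in BLOCKED_INPUT):
        return FALLBACK
    response = model_fn(prompt)
    if PII_OUTPUT.search(response):
        return FALLBACK
    return response


# A stand-in model for demonstration purposes.
print(guarded_call("What's the weather?", lambda p: "Sunny today."))  # passes
print(guarded_call("Tell me my password", lambda p: "..."))           # blocked
```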

Why we must be transparent about AI

Being transparent about AI means being honest about what a system is intended to do, where it fits with the organization’s overall strategy, what benefits and pitfalls it brings and how it is likely to impact people. It also means being able to explain why an AI model makes the decisions it does.
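
On that last point, explainability is easiest to see with a deliberately simple model. In the toy example below, a linear scoring rule makes a decision and each input’s contribution can be reported alongside it; the features, weights and threshold are invented purely for illustration.

```python
# Toy illustration of explainability: with a simple linear scoring
# model, each input's contribution to a decision can be reported
# directly. Features, weights and threshold are invented examples.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0


def explain_decision(applicant: dict[str, float]) -> tuple[bool, dict[str, float]]:
    """Return the decision and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions


approved, why = explain_decision({"income": 3.0, "debt": 1.0, "years_employed": 2.0})
print(approved)  # True
print(why)       # {'income': 1.5, 'debt': -0.8, 'years_employed': 0.6}
```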

Unfortunately, many AI implementations today are shrouded in mystery, with powerful solutions developed behind closed doors by a small number of stakeholders. As a result, people don’t trust the tools and resist using them, with detrimental consequences for our society.

Because of this resistance, we risk being held back from using AI technologies to transform businesses, grow economies and improve lives. That’s why we must be transparent about AI.

The views reflected in this article are those of the author and do not necessarily reflect the views of the global EY organization or its member firms.
