This is why we need to talk about responsible AI

This is how to join the conversation on AI, ethics, bias and trust. Image: Photo by Headway on Unsplash

Steven Mills
Partner and Chief Artificial Intelligence Ethics Officer, Boston Consulting Group (BCG)
Daniel Lim
Senior Director, Scenarios, Salesforce
This article is part of: Pioneers of Change Summit
  • Responsible AI is designed to help recognize, prepare for, and mitigate potential harmful effects of AI. So why isn’t anyone talking about it?
  • Nearly 50% of organizations report having a formalized framework to encourage considerations of ethics, bias and trust.
  • Here are 5 tips on how your organization can join the conversation on Responsible AI.

Bias in AI and other negative consequences of the technology have become common media fodder.

Media coverage gives the impression that only a few companies are taking steps to ensure the AI systems they develop aren’t inadvertently harming users or society.

But results from an IDC survey show that many companies are moving towards Responsible AI. Nearly 50% of organizations reported having a formalized framework to encourage considerations of ethics, bias and trust.

But why are so few companies pulling back the curtain to share how they are approaching this emerging discipline? The silence is puzzling, given the commitment to the responsible use of technology that these investments signal.

Work in progress

Responsible AI is still a relatively new field that has developed rapidly over the past two years, with some of the first public guidelines for implementing Responsible AI appearing in 2018.

Yet only a few companies are publicly discussing their ongoing work in this area in a substantive, transparent, and proactive way. Many others seem to fear the negative consequences, such as reputational risk, of sharing their vulnerabilities. Some companies are also waiting for a “finished product,” wanting to be able to point to tangible, positive outcomes before they are ready to reveal their work.

They feel it is important to convey that they have a robust solution with all the answers to all the problems relevant to their business.

We’ve also seen that willingness to be transparent varies by industry. For example, an enterprise software company that speaks regularly about bug fixes and new versioning may find Responsible AI to be a natural next step in its business. However, a company that monetizes data may worry that this kind of transparency will unearth greater stakeholder concern about the business model itself.

Through our conversations with companies, we’ve seen that no one has conquered Responsible AI and that everyone is approaching it from a different angle. By and large, there is more to be gained from sharing and learning than from continuing to work towards perfection in silos.

All risk and no reward?

With so many news stories about AI gone wrong, it’s tempting to keep Responsible AI strategies under wraps. But it’s important to understand the rewards of sharing lessons with the broader community.

First, talking openly about efforts to improve algorithms will build trust with customers—and trust is one of the greatest competitive advantages a company can have. Furthermore, as companies like Apple have proven, embracing a customer-centric approach that incorporates feedback loops helps build better products.

Making Responsible AI part of stakeholder feedback will not only help avoid reputational damage, but will ultimately increase customer engagement. Finally, the data science profession is still in its early stages of maturity. Models and frameworks that incorporate ethics into the problem-solving process, such as the one published by researchers at the University of Virginia, are just beginning to emerge.

As a result, Responsible AI practices such as societal impact assessments and bias detection are just starting to make their way into the methodologies of data scientists. By discussing their challenges with their peers in other companies, data scientists and developers can create community, solve problems and, in the end, improve the entire AI field.

As champions of Responsible AI, we urge companies to lean into this work, engaging with peers and experts to share not only the wins, but also the challenges. Companies must work together to advance the industry and build technology for the good of all.

5 ways to join the Responsible AI discussion

We’ve drawn on our conversations with corporate executives and our participation in the World Economic Forum’s Responsible Use of Technology project community, and distilled our learnings into five areas where companies can build transparency into their Responsible AI initiatives.

Create and engage in safe spaces to learn: Closed forums such as the World Economic Forum’s Responsible Use of Technology project provide a safe, achievable step toward transparency: a place for companies to speak openly in a risk-free, peer-to-peer setting. Interactions with other companies can accelerate knowledge sharing on Responsible AI practices and build confidence in your own efforts.

Engage your customers and community: Customer engagement and feedback build stronger products. Adding Responsible AI to these dialogues is a great way to engage with customers in a low-risk, comfortable environment.

Be deliberate: You don’t need to go from “zero to press release.” Give your programme time to develop: Begin with dialogue in closed forums, speak with your employees, maybe author a blog post, then expand from there. The important thing is to take steps towards transparency. The size of the steps is less important. Taking this progressive approach will also help you find your voice.

Diversity matters: Engaging with stakeholders from diverse backgrounds is an essential step in improving Responsible AI. Actively listening to and addressing the concerns of people with different perspectives throughout the design, deployment, and adoption of AI systems can help identify and mitigate unintended consequences. This approach may also lead to the creation of better products that serve a larger market.

Set the right tone: Cultural change starts at the top. Senior executives need to set a tone of openness and transparency to create comfort in sharing vulnerabilities and learnings. Ultimately, this will ease organizational resistance to engaging in public dialogue about Responsible AI.

We are still in the early stages of Responsible AI, but we can make rapid progress if we work together to share successes, learnings and challenges. Visit the World Economic Forum’s Shaping the Future of Technology Governance page to begin engaging with peers.
