
How global tech companies can champion ethical AI


Responsible AI is a critical challenge for the tech industry. Microsoft has a model for companies to implement it. Image: Ramón Salinero/Unsplash

Tim O'Brien
General Manager of AI Programs, Microsoft
Steve Sweetman
Director, Microsoft
Natasha Crampton
Chief Responsible AI Officer, Microsoft
Venky Veeraraghavan
Group Program Manager, Microsoft
This article is part of: World Economic Forum Annual Meeting
  • The ethics of artificial intelligence, or AI, is a critical challenge for the global tech industry.
  • The implementation of robust, ethical AI practices requires diverse knowledge and perspectives.
  • This article is part of a series on tech ethics including how to build the right culture and getting AI right.

In the past two years, we’ve seen a dramatic increase in tech industry discussion about the ethics of artificial intelligence (AI) and the role of ethical principles in shaping how we use and experience it. The research community has followed suit, with a pronounced increase in published papers on everything from machine learning bias to explainability to security implications.

This is not new. As Data & Society’s Jacob Metcalf notes, both the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) published ethics guidelines for computer scientists in the early 1990s, and more recently, countless social scientists and science and technology studies (STS) researchers have sounded the alarm about technology’s potential to harm people and society.


While some are quick to categorize AI as just the latest disruption in an industry with a history of disruptions, the speed with which AI is changing the world puts it in a different class. This, combined with an escalation of risk, has finally led our industry to heed the advice of seasoned domain experts by incorporating ethical considerations into technology design, development, deployment and use before technologies are brought to market.

But this increase in activity has led to accusations of “ethics washing,” a pejorative term to describe the practice of exaggerating an organization’s interest in ethical principles to bolster its public perception. This leads to understandable questions about “action.” What are companies doing to convert principled claims into real governance? More importantly, how are they implementing these practices into corporate cultures that have, for the most part, never before been asked to consider them a required element of the product lifecycle?

While a number of tech companies have been developing new initiatives to integrate ethics into the design and deployment of products, many are reluctant to speak publicly about these efforts because they are so nascent. But the urgency of the problem demands that companies share what they learn as they learn it, so everyone benefits – we are all grappling with similar challenges.

As part of the World Economic Forum community of “ethics executives” from 40 companies – most of them in newly created roles or offices – leaders across the industry have said that their companies lack even a basic framework for grappling with how their products are designed and to whom they should be sold. In the absence of a systematic approach, many are defaulting to reactive, one-off decisions.


At Microsoft, we’ve spoken publicly about our plans to implement a robust Responsible AI governance process, and it’s currently under way. While we’ll have much more to share as we progress, one lesson we’ve learned early on is about the importance of a pivotal role in making governance processes and practices real: the Responsible AI Champ.

The Responsible AI Champ

At Microsoft, a “Champ” is, quite simply, a domain expert who is available to fellow employees in a given geography and/or work group for awareness, advice, assistance and escalation. For years, we’ve had Champs for security, competitive products, open source and a number of other domains.

The Champ role is especially important for ethics for two primary reasons. First, tech ethics is a new subject to most employees in most roles. Second, the domain requires a distributed and diverse presence of knowledge and facilitation at the organizational level.

The first reason is well documented. The absence of formal university ethics curricula outside of medical schools, law schools and philosophy departments is, thankfully, now being addressed by leading technical institutions including Stanford, MIT, the Markkula Center at Santa Clara University, Harvard, Cornell and others – but people already in the tech workforce are on a steep learning curve. At Microsoft, Champs help flatten that learning curve with an introduction to core concepts, decision-making frameworks and processes for assessing what constitutes a harm, the likelihood and magnitude of exposure to it, and the severity of its impact.
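To make the shape of such an assessment concrete, the sketch below scores a potential harm along those three dimensions – likelihood, magnitude of exposure and severity – and maps the result to an escalation path. The 1-5 scales, the multiplicative score and the thresholds are illustrative assumptions made for this article, not Microsoft’s actual framework.

```python
from dataclasses import dataclass

@dataclass
class HarmAssessment:
    """Illustrative harm assessment. Scales, scoring and thresholds are
    assumptions made for this sketch, not Microsoft's actual framework."""
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    exposure: int    # 1 (few people affected) to 5 (broad population)
    severity: int    # 1 (minor inconvenience) to 5 (serious, lasting harm)

    def risk_score(self) -> int:
        # Simple multiplicative score; a real framework may weight factors differently.
        return self.likelihood * self.exposure * self.severity

    def recommended_action(self) -> str:
        # Hypothetical escalation thresholds a Champ might apply.
        score = self.risk_score()
        if score >= 60:
            return "escalate through sensitive-use channels"
        if score >= 20:
            return "require mitigation before release"
        return "document and monitor"

# Example: a feature that performs noticeably worse for under-represented groups.
assessment = HarmAssessment(
    description="Vision feature misidentifies people from under-represented groups",
    likelihood=4, exposure=3, severity=5,
)
print(assessment.risk_score(), "->", assessment.recommended_action())
# 60 -> escalate through sensitive-use channels
```

The point is not the arithmetic but the discipline it encourages: naming the harm explicitly, estimating who is exposed and how badly, and agreeing in advance on when an issue leaves the team and moves up an escalation path.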

The second reason is equally daunting: change management – helping people in a mature, global business incorporate ethical principles into every facet of an existing product lifecycle, including solution development and sales.

We often rely on historical analogs to frame our approach to such challenges, where there are sufficient parallels – and security and privacy are two analogs that loom large at Microsoft. In the case of security, the catalyst for what became a deeply embedded change in product culture was Bill Gates’ 2002 Trustworthy Computing memo, which set in motion changes to our software development process that eventually became the Microsoft Security Development Lifecycle (SDL). Privacy had a more recent catalyst: the EU’s General Data Protection Regulation (GDPR), which Microsoft committed to adopting for customers worldwide. In both cases, a clear defining moment brought clarity to the path forward, followed by a sustained effort to reach the desired outcomes.

Values AI needs to respect: in The Future Computed, Microsoft says these six principles should guide the development of AI. Image: Microsoft Corporation

Responsible AI is a multi-year journey, beginning with Satya Nadella’s 2016 op-ed on the need for an ethical framework for AI, followed by the formation of the AETHER Committee in 2017, the publication of The Future Computed in 2018 and several other steps to cement Microsoft’s commitment to innovating responsibly. In this context, the Champ’s role is to build on that foundation and serve as an internal advocate and evangelist for Responsible AI, facilitating the sustained effort to strengthen its place in our product and sales culture.

Specifically, a Responsible AI Champ has five key responsibilities:

  • Raising awareness of responsible AI principles and practices within teams and workgroups
  • Helping teams and workgroups implement prescribed practices throughout the AI feature, product or service lifecycle
  • Advising leaders on the benefit of responsible AI development – and the potential impact of unintended harms
  • Identifying and escalating questions and sensitive uses of AI through available channels
  • Fostering a culture of customer-centricity and global perspective, by growing a community of Responsible AI evangelists in their organizations and beyond

The Champ has historically been a functional role at Microsoft. Champs have full-time jobs and serve as Champs because of their interest in and/or leadership of a given domain. Their backgrounds are varied and multi-faceted – especially important in a space spanning technology, business, social science and law. For example, data scientists and engineers who understand the discipline and can speak the language of the workgroup might serve as Champs in a technical team, ideally with enough working knowledge of social science to bring a much-needed perspective on potential societal impact. In a sales team, by contrast, customer-facing roles deal with very different issues, and the Champ serves as an advisor on potentially sensitive uses in the requests for proposal we receive from prospective customers. This diversity provides another important benefit: Champs bring issues and perspectives to, and provide input for, the AI ethics committee and decision-makers.

Seniority matters less than a passion for, and commitment to, making definable, repeatable business practices in support of Responsible AI a permanent part of our culture.

We’ll share more as we learn, but it’s already clear that implementation of a Champs program is pivotal to the success of any governance effort. The continued movement from principles to action requires a change of culture across every phase of the product lifecycle – and a sustained effort at the workgroup level to ensure continued progress and learning.


