Fourth Industrial Revolution

4 steps to developing responsible AI


China is poised to use AI to enhance its competitiveness in tech and business. Image: REUTERS/Jason Lee

Wei Zhu
Chairman, Accenture Greater China
This article is part of: Annual Meeting of the New Champions

Artificial intelligence (AI) is arguably the most disruptive technology of the information age. It stands to fundamentally transform society and the economy, changing the way people work and live. The rise of AI could have an even more profound impact on humanity than electricity did.

But what will the new relationship between humans and intelligent machines look like? And how can we mitigate the potential negative consequences of AI? How should companies forge a new corporate social contract amid their changing relationships with customers, employees, government and the public?

In May, China announced its Beijing AI Principles, outlining considerations for AI research and development, use and governance.

In China, enthusiasm for AI has been more intense than for other emerging technologies, as the country is positioned to harness AI's tremendous potential to enhance its competitiveness in technology and business.

According to Accenture research, AI has the potential to add as much as 1.6 percentage points to China’s economic growth rate by 2035, boosting productivity by as much as 27%.

In 2017, the central government launched a national policy on AI with significant funding. The country already tops the AI patent table and has attracted 60% of the world's AI-related venture capital, according to a Tsinghua University report.

We’re already seeing the impact of AI across many industries. For example, Ping An, a Chinese insurance company, evaluates borrowers’ risk through an AI app. At the same time, AI has generated a plethora of fears about a dystopian future that have captured the popular imagination.

Indeed, the unintended consequences of disruptive technologies – whether from biased or misused data, the manipulation of news feeds and information, job displacement, a lack of transparency and accountability, or other issues – are a very real consideration and have eroded public trust in how these technologies are built and deployed.

However, we believe, and history has repeatedly shown, that new technologies provide incredible opportunities to solve the world’s most pressing challenges. As business leaders, it is our obligation to navigate responsibly and to mitigate risks for customers, employees, partners and society.

Although AI can be deployed to automate certain functions, the technology’s greatest power is in complementing and augmenting human capabilities. This creates a new approach to work and a new partnership between human and machine, as my colleague Paul Daugherty, Accenture’s Chief Technology and Innovation Officer, argues in his book, Human + Machine: Reimagining Work in the Age of AI.

Are business leaders around the world prepared to apply ethical and responsible governance to AI? In a 2018 global executive survey on Responsible AI by Accenture, in association with SAS, Intel and Forbes, 45% of executives agreed that not enough is understood about the unintended consequences of AI.

Of the surveyed organizations, 72% already use AI in one or more business domains. About 70% of these organizations offer ethics training to their technology specialists. However, the remaining 30% either do not offer this kind of training, are unsure if they do, or are only just considering it.

As AI capabilities race ahead, government leaders, business leaders, academics and many others are more interested than ever in the ethics of AI as a practical matter, underlining the importance of having a strong ethical framework surrounding its use. But few really have the answer to developing ethical and responsible AI.

Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.

It is imperative for business leaders to understand AI and make a top-down commitment to the responsible use of AI. Central to this is taking a human-centric approach to AI thinking and development. It is not enough to have the correct data, or an algorithm that performs accurately. It is critical to incorporate systems of governance, design and training that provide a framework for successfully implementing AI in an organization.

A strong Responsible AI framework entails mitigating the risks of AI with imperatives that address four key areas:

1. Governance

Establishing strong governance with clear ethical standards and accountability frameworks will allow your AI to flourish. Good governance on AI is based on fairness, accountability, transparency and explainability.

2. Design

Create and implement solutions that comply with ethical AI design standards and make the process transparent; apply a framework of explainable AI; design a collaborative user interface; and build trust in your AI from the outset by accounting for privacy, transparency and security from the earliest stage.
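As a toy illustration of designing for explainability, the sketch below pairs every automated decision with the per-feature contributions behind it, so a user interface can show the top reasons for an outcome. The feature names and weights are invented for illustration, not drawn from any real system described in the article.

```python
# Hedged sketch of "explainable by design": a linear risk score that
# returns its per-feature contributions alongside the decision, so the
# UI can surface why. Feature names and weights are hypothetical.

WEIGHTS = {"late_payments": 0.5, "debt_ratio": 0.3, "account_age_years": -0.2}

def score_with_explanation(applicant):
    # Each feature's contribution to the final score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return {
        "score": round(total, 3),
        # Sorted by magnitude so the UI can show the strongest reasons first.
        "reasons": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

out = score_with_explanation(
    {"late_payments": 2, "debt_ratio": 0.8, "account_age_years": 5}
)
```

A linear model is chosen here precisely because its explanation is exact; for more complex models, the same interface contract can be filled by post-hoc attribution methods.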

3. Monitoring

Audit the performance of your AI against a set of key metrics. Make sure algorithmic accountability, bias and security metrics are included.
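To make such an audit concrete, the sketch below checks one common bias metric, the demographic parity gap between two groups' approval rates, against a policy threshold. The group data and the 10% threshold are illustrative assumptions, not figures from the article.

```python
# Minimal sketch of an algorithmic-bias audit: flag a model for review
# if the approval-rate gap between two groups exceeds a policy
# threshold. Data and the 0.1 threshold are hypothetical.

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

def audit(decisions_a, decisions_b, max_gap=0.1):
    """Return the gap and whether it passes the policy threshold."""
    gap = demographic_parity_gap(decisions_a, decisions_b)
    return {"gap": round(gap, 3), "pass": gap <= max_gap}

# Group A approved 8 of 10, group B approved 5 of 10.
result = audit([1] * 8 + [0] * 2, [1] * 5 + [0] * 5)
```

In practice this check would run continuously alongside accuracy and security metrics, with failures routed into the accountability framework established under the governance step.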

4. Reskilling

Democratize the understanding of AI across your organization to break down barriers for individuals impacted by the technology; revisit organizational structures with an AI mindset; recruit and retain the talent for long-term AI impact.

The benefits and consequences of AI are still unfolding. China has a great opportunity to capitalize on AI in its development and shares a huge responsibility with other countries to help it deliver positive societal benefits on a global scale. We must work to ensure a sound global public policy environment that works to enable and encourage investment in the development and deployment of responsible AI.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
