Responsible AI: 6 steps businesses should take now

For successful adoption of artificial intelligence (AI), business leaders must seek alignment on responsible AI and trust from their employees. Image: Getty Images

Prasad Sankaran
Executive Vice-President, Software and Platform Engineering, Cognizant
  • Trust is fundamental for successfully adopting artificial intelligence (AI), particularly generative AI (GenAI).
  • Implementing responsible AI requires collaboration between the public and private sectors and the adoption of new practices within enterprises.
  • Here are six actions businesses can take to implement responsible AI.

At the World Economic Forum’s 2024 Annual Meeting in Davos, Switzerland, my fellow Cognizant executives and I engaged in in-depth conversations with hundreds of global leaders about responsible artificial intelligence (AI).

We were happy to contribute to the AI Governance Alliance briefing papers on safe systems, responsible applications and resilient governance. While perspectives varied on needed focus and action, there was unanimous agreement that better management of AI risks was an urgent priority.

It is clear that trust will be at the core of successful AI adoption. Trust will help us to scale and realize the potential of GenAI, the most revolutionary new technology in a generation. When consumers experience these disruptive new solutions that feel like magic, they are instinctively sceptical; trust must be earned from the start.

Creating trusted AI

As trust is a critical factor, let’s unpack what trust is and how it’s obtained. In 1995, professors from Notre Dame and Purdue published a model for trust that has been widely adopted and is highly applicable to AI-powered services. The model proposes that trust is derived from the perception of ability, benevolence and integrity. What we heard at Davos aligns with this model and helps make sense of the challenges in front of us.

First, trust in AI systems rests on their ability to solve real-world problems. The ability to be useful in a real-world scenario isn’t something we can take for granted – I’ve seen amazing demonstrations of GenAI only to be slightly underwhelmed when trying out these tools in the real world.

AI solutions that overpromise and underdeliver will cause major trust issues in the long run. We’ve seen this from chatbots and voice assistants that promise conversational convenience but deliver limited understanding and static decision trees. Users were disenchanted, and the technologies’ promises went unfulfilled.

To make AI systems useful, we must focus them on the right problems, support them with relevant and high-quality data and seamlessly integrate them into user experiences and workflows. Most importantly, sustained monitoring and testing are needed to ensure AI systems continuously deliver relevant, high-quality results.
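
To make that concrete, here is a minimal sketch of a recurring quality check, assuming a hypothetical run_model call to the deployed system and a small set of reviewer-approved reference answers; any output that drifts too far from its approved answer is flagged for human review. The function names, cases and threshold are illustrative rather than a prescribed method.

```python
# Minimal sketch of sustained monitoring for an AI-powered service.
# Assumptions (not from the article): run_model() calls your deployed model,
# and REFERENCE_CASES holds prompts with answers already approved by reviewers.
from difflib import SequenceMatcher

REFERENCE_CASES = [
    {"prompt": "Summarise our refund policy in one sentence.",
     "approved_answer": "Items can be returned within 30 days for a full refund."},
]

QUALITY_THRESHOLD = 0.6  # similarity below this triggers human review


def run_model(prompt: str) -> str:
    """Placeholder for a call to the deployed model (hypothetical)."""
    raise NotImplementedError("Wire this up to your own model endpoint.")


def nightly_quality_check() -> list[dict]:
    """Score fresh outputs against approved answers and flag regressions."""
    flagged = []
    for case in REFERENCE_CASES:
        output = run_model(case["prompt"])
        similarity = SequenceMatcher(None, output, case["approved_answer"]).ratio()
        if similarity < QUALITY_THRESHOLD:
            flagged.append({
                "prompt": case["prompt"],
                "output": output,
                "similarity": round(similarity, 2),
            })
    return flagged  # route flagged cases to human reviewers
```

In practice, teams would swap the crude text-similarity score for task-specific evaluations, but the loop of test, score and escalate is what keeps results relevant and high-quality over time.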

The second area that drives trust is the idea of benevolence. AI models must positively impact society, businesses and individuals or they will be rejected. There are two core challenges:

  • Implementing for positive impact. We must ensure that enterprises implement AI responsibly and wholly address adverse impacts. They must block unacceptable use cases, respect intellectual property rights, ensure equitable treatment, avoid environmental harm and enable displaced workers to access alternative employment.
  • Preventing malicious use. It is not sufficient for responsible enterprises to implement benevolent AI; we must also safeguard against malicious use. Governmental and regulatory entities must take steps to endorse legitimate providers and eliminate bad actors. Core issues include validating providers, validating genuine content, preventing new AI-powered attacks and moderating digital distribution channels.

Finally, integrity creates trust when users perceive the services they rely on as secure, private, resilient and well-governed.

Technologists and enterprises have spent decades building web-scale infrastructures and cloud-native architectures that power mission-critical digital services. The practices that allow the world to rely on these services must be extended and adapted to AI capabilities in a way that is transparent and convincing to user communities.

The only way to achieve this requisite integrity is to adopt platforms incorporating transparency, performance, security, privacy and quality. Building parallel point use cases based on localized objectives and siloed data is a dangerous path that will lead to increased cost and risk, poor outcomes, and ultimately, a collapse of system integrity.

The challenge of implementing responsible AI

While it’s good to have clarity regarding objectives, it’s undeniable that we face a daunting challenge. Addressing responsible AI will require collaboration between the public and private sectors across various issues. It will also require adopting new practices within the enterprise to design, engineer, assure and operate AI-powered systems responsibly.

We don’t have the luxury of waiting for someone else to solve these challenges. Whatever your role and industry, you can be sure competitors are advancing AI implementations, employees are covertly using untrusted solutions and bad actors are devising new ways to attack and exploit weaknesses.

Based on our experience helping to build responsible, enterprise-scale AI in hundreds of organizations, and at the core of our own business, we believe enterprises must act in six areas:

1. Align leadership with consistent vision and accountability

AI is a CEO-level issue that requires collaboration across all functions of the organization. Leadership teams should discuss the issues surrounding responsible AI and then agree on areas of opportunity, approaches to governance, threat responses and accountability for actions.

2. Address human factors from the start

Cognizant’s research indicates that distrust from employees and consumers, among other things, could hinder GenAI’s growth. While people believe GenAI will simplify technology interactions and increase corporate profits, they fear it won’t benefit workers or society and will fuel job insecurity.

Businesses must address these concerns with transparency and direct communication. This approach pays off, as the study shows that enthusiasm for GenAI increases with greater understanding.

3. Manage standards and risks

Establish a governance, risk and compliance framework to standardize good practices and systematically monitor AI-related activity. Within this framework, it is critical to consider the full scope of an AI-powered system, including training data, AI models, application use cases, people impacts and security.
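
One lightweight way to keep that full scope visible is to record every AI-powered system in a structured risk register. The sketch below is an illustrative Python data structure, not a prescribed framework; every field name and example value is an assumption chosen to mirror the dimensions listed above.

```python
# Illustrative risk-register entry spanning the full scope of an AI system:
# training data, model, application use case, people impacts and security.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AISystemRiskRecord:
    system_name: str
    use_case: str                       # what the system is approved to do
    training_data_sources: list[str]    # provenance of the data behind it
    model_provider: str                 # internal model or external vendor
    people_impacts: list[str]           # e.g. roles affected, fairness concerns
    security_controls: list[str]        # e.g. access control, output logging
    residual_risk: str = "unassessed"   # low / medium / high after mitigations
    review_findings: list[str] = field(default_factory=list)


record = AISystemRiskRecord(
    system_name="claims-triage-assistant",
    use_case="Summarise insurance claims for human adjusters",
    training_data_sources=["internal-claims-archive"],
    model_provider="external LLM accessed via API",
    people_impacts=["adjuster workload", "fair treatment of claimants"],
    security_controls=["role-based access", "prompt-injection testing"],
)
```

A register like this gives compliance teams a single place from which to monitor AI-related activity across the enterprise.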

4. Create a focal point of expertise

Responsible AI cannot be managed without centralized transparency and oversight of activity. By creating an AI centre of excellence, businesses can make the best use of scarce expertise and provide a coherent view to leadership, regulators, partners, development teams and employees.

5. Build capacity and awareness

Sustaining responsible AI practices requires everyone in the enterprise to understand the technology’s capabilities, limitations and risks. All employees should be educated on responsible AI, the organization’s vision and governance processes. Select groups will then require further assistance through training and coaching to take a more hands-on role in developing and leveraging AI solutions.

6. Codify good practice into platforms

AI is a pervasive, horizontal technology that will impact almost every job role. For teams to build trustworthy solutions quickly, they will need the data and tools for the job. Platforms for AI can make sharable assets accessible for reuse, ensure effective risk management is in place and provide transparency to all stakeholders.
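
As a sketch of how a platform can codify good practice, the example below assumes a hypothetical internal asset catalogue in which prompts, datasets and models carry a governance approval status; teams can only reuse assets that have passed review, and blocked requests stay visible to the oversight function.

```python
# Minimal sketch of a platform-side guardrail: only governance-approved,
# shareable assets (prompts, datasets, models) are served to delivery teams.
# The catalogue contents and approval workflow here are hypothetical.
ASSET_CATALOGUE = {
    "prompt/claims-summary-v2": {"approved": True, "owner": "underwriting"},
    "dataset/customer-emails": {"approved": False, "owner": "marketing"},
}


def get_asset(asset_id: str) -> dict:
    """Return an approved asset, or raise so the request is visible to governance."""
    asset = ASSET_CATALOGUE.get(asset_id)
    if asset is None:
        raise KeyError(f"Unknown asset: {asset_id}")
    if not asset["approved"]:
        raise PermissionError(f"{asset_id} has not passed governance review")
    return asset


print(get_asset("prompt/claims-summary-v2"))  # reusable, approved asset
```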

By addressing these six action points, organizations will be well placed to operationalize their position on responsible AI and to execute and govern AI activities effectively. We believe every organization adopting AI, or exposed to AI-powered threats, must implement responsible AI, and they must do it now.
