Scaling AI: Here's why you should first invest in responsible AI

Responsible AI is not just the morally right thing to do, it also yields tangible benefits.

Abhishek Gupta
Partner Lead, World Economic Forum
Steven Mills
Partner and Chief Artificial Intelligence Ethics Officer, Boston Consulting Group (BCG)
Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin

  • Artificial intelligence can be transformative for businesses, but increased use of the technology inevitably leads to a higher rate of AI system failures.
  • Companies should therefore invest first in responsible AI, which also yields benefits by accelerating innovation and helping them become more competitive.
  • A prioritization approach that begins with low-effort, high-impact areas of responsible AI can minimize risk while maximizing the return on investment.

Artificial intelligence (AI) systems are creating transformative business value for companies that integrate them into their operations, products and services, and corporate strategy. But, unfortunately, increased use of the technology inevitably leads to a higher rate of AI system failures.

If left unmitigated, these failures could harm individuals and society and will diminish the returns on investment in AI. So, what can organizations do?

Responsible AI – the practice of designing, developing and deploying AI with good intentions and a fair impact on society – is not just the morally right thing to do; it also yields tangible benefits, accelerating innovation and helping organizations use AI to become more competitive.

Yet current approaches predominantly emerge from AI-native firms and may fail to meet the needs of non-AI-native organizations, which have different contexts, constraints, cultures and levels of AI maturity.

Companies need a purpose-fit, tailored approach to achieve sustained success in practice. As more organizations begin their AI journeys, they are on the cusp of choosing whether to invest scarce resources in scaling their AI efforts or in scaling responsible AI first.

We believe they should do the latter to achieve sustained success and better returns on investment.

Scaling AI leads to more failures

Modern AI systems are inherently stochastic – they make use of randomness – and black box in nature, meaning the system's inputs and operations are not visible to the user or other interested parties.
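
For readers less familiar with what "stochastic" means in practice, here is a toy sketch – the tokens and probabilities are invented for illustration, not drawn from any real model – of how generative AI systems sample their outputs rather than compute them deterministically, so the same input can yield different results on different runs.

```python
import random

# Toy next-token distribution; these tokens and probabilities are
# hypothetical examples, not any real model's output.
next_token_probs = {"approve": 0.6, "deny": 0.3, "escalate": 0.1}

def sample_token(probs: dict[str, float]) -> str:
    """Draw one token at random according to the output distribution."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Running the same "query" five times can give five different answers –
# harmless in a demo, but a genuine failure surface in production.
print([sample_token(next_token_probs) for _ in range(5)])
```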

In addition, they are built on top of complex technical pipelines that ingest, transform and feed data into downstream machine learning models to achieve business goals such as automating content moderation or enabling self-driving vehicles.

These modern marvels are a result of collaboration across a diversity of stakeholders, both internal and external, including data scientists, data engineers, UX/UI designers, social scientists, systems engineers, business executives, and more.

This combination of diverse human and technical inputs – indeed, AI systems are sociotechnical – introduces a plethora of new surfaces where failures can occur. In this highly intertwined sociotechnical architecture, there is a strong likelihood that failures go unnoticed. Even when failures are detected, they often require intense sleuthing to arrive at root causes.

Unfortunately, an increase in AI failures is a natural outcome of scaling the technology. Simply put, deploying more AI systems increases the likelihood that a company experiences a lapse.

However, this should not discourage companies from pursuing the development and use of AI systems to realize business goals. Instead, companies need to take proactive steps to appropriately mitigate these risks and protect customers and society from unintended harms.

Responsible AI can help firms minimize risk

Implementing a comprehensive responsible AI programme helps companies minimize these risks. This includes the policies, governance, processes, tools and broader cultural change needed to make sure AI systems are built consistent with organizational values and norms.

When properly implemented, responsible AI programmes reduce the frequency of failures by identifying and mitigating issues before the system is deployed. And while failures may still occur, their severity is lower, creating less harm to individuals and society.

But creating a comprehensive responsible AI programme takes time – three years on average. This means companies cannot wait for a system failure to get started. In fact, they shouldn't even wait until they are ready to scale their AI efforts. Instead, they need to start early and mature responsible AI ahead of AI itself.

This ensures the right controls are in place to minimize the risk of scaling AI. And as an added benefit, it also increases the business value of the AI systems.

This is not surprising, given that many of the approaches central to responsible AI – such as stakeholder engagement and thoughtful UI/UX design – also lead to better products that drive higher use and adoption.

The current state of the art in responsible AI mostly comes from AI-native companies that have invested heavily in approaches requiring dedicated internal staff and mature technical infrastructure. They often operate under product development timelines measured in years, giving them time for robust multi-stakeholder participation.

For smaller companies, those with budget constraints or those operating on tight timelines – for example, teams that have received their first AI mandate and need to demonstrate results and returns quickly – it may not be possible to operationalize responsible AI by following the model of AI-native companies.

They instead require a tailored approach that enables them to right-size their resources (both human and technical) and ensure those resources can be sustained over a long period of time to deliver lasting success. The answer for each company will therefore be unique, based on its technical maturity, cultural nuances and practical resource constraints.

There are a few common approaches that lead to success. First, leverage existing risk, compliance and product development processes to simplify change management and make use of existing resources.

Next, grow expertise primarily through upskilling, supplemented by targeted and thoughtful hires, to minimize resource demands while you make early progress.

Finally, establish a framework to assess the inherent risk of AI products, enabling you to focus resources on the areas of highest risk or highest likelihood of failure, as sketched below.
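
To make that last step concrete, here is a minimal sketch of what an inherent-risk framework could look like. The risk dimensions, weights and tier thresholds are hypothetical assumptions for illustration – each organization would calibrate its own – but the point is that even a lightweight scoring model lets a small team rank its AI portfolio and direct scarce review resources to the riskiest systems first.

```python
from dataclasses import dataclass

# Illustrative risk dimensions, each scored 1 (low) to 5 (high).
# The dimensions and weights are hypothetical, not a standard.
WEIGHTS = {
    "harm_to_individuals": 0.35,  # could the system hurt people directly?
    "decision_autonomy": 0.25,    # does it act without a human in the loop?
    "data_sensitivity": 0.25,     # does it use personal or regulated data?
    "deployment_scale": 0.15,     # how many users/decisions does it touch?
}

@dataclass
class AIProduct:
    name: str
    scores: dict[str, int]  # dimension name -> score from 1 to 5

    def risk_score(self) -> float:
        """Weighted average of dimension scores, normalized to 0-1."""
        return sum(WEIGHTS[d] * s for d, s in self.scores.items()) / 5

    def risk_tier(self) -> str:
        """Map the score to a tier that dictates review depth."""
        score = self.risk_score()
        if score >= 0.7:
            return "high: full responsible AI review before deployment"
        if score >= 0.4:
            return "medium: targeted review of flagged dimensions"
        return "low: standard development checklist"

# Example triage across a portfolio of two hypothetical products.
portfolio = [
    AIProduct("marketing-copy-generator", {
        "harm_to_individuals": 1, "decision_autonomy": 2,
        "data_sensitivity": 1, "deployment_scale": 3}),
    AIProduct("loan-approval-model", {
        "harm_to_individuals": 5, "decision_autonomy": 4,
        "data_sensitivity": 5, "deployment_scale": 4}),
]
for product in sorted(portfolio, key=lambda p: p.risk_score(), reverse=True):
    print(f"{product.name}: {product.risk_score():.2f} -> {product.risk_tier()}")
```

The ordering, not the exact numbers, is what matters here: a simple scoring model like this turns a vague mandate to "be responsible" into a concrete review queue.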

Allocate AI resources effectively

Tradeoffs in resource allocation are always tough, especially when you have a new mandate and have been tasked with generating early wins to affirm the trust placed in you and in that investment.

Responsible AI can appear daunting, with many sub-areas that require attention – such as fairness, transparency, accountability, safety, reliability, privacy, security and governance – to be implemented across all stages of the AI lifecycle, including design, development and deployment.

All of this can cause either misdirected attention and investment or, worse, paralysis. Companies cannot, however, ignore the important moral and ethical duties involved in fielding an AI system.

There is, of course, also a strong business incentive to minimize system failures. Companies can increase their odds of success by keeping a few key considerations in mind.

Moving up the AI maturity curve is a well-worn path that allows organizations to systematically yield benefits from their AI investments, extracting business value while right-sizing resource investments to power that journey.

Responsible AI approaches developed and proselytized by organizations that are further along in that journey come with implicit (and often unstated) assumptions about the maturity of underlying technical infrastructure and the availability of budgets and human resources to operationalize them.

Instead, start by identifying your stage in the AI journey and then implement practices better suited to your position and capabilities; this improves the chances of successful implementation.

The employees of an organization typically sign a code of conduct and affirm their commitment to the organization’s mission, vision, purpose and values. In addition, there are unstated, implicit norms that are codified in everyday culture and practice within an organization.

Align responsible AI approach with your mission

Aligning your responsible AI implementation with your organization’s mission, vision, purpose and values will increase the chances of success – for example, by making an explicit link between fairness and inclusiveness and broader environmental, social and governance commitments.

Sharing the lessons and best practices you develop with others in similar positions will not only help your organization fare better, it will also strengthen the broader ecosystem, making responsible AI capabilities accessible to more players as organizations move rapidly to adopt AI systems.

Remember that no single company has all the answers, even those far along on their responsible AI journey. Collaboration is a way to get even more from your resource investments.

Ultimately, the success of AI adoption, especially in a responsible manner, requires the correct ordering of investments and approaches. A prioritization approach that begins with low-effort, high-impact areas of responsible AI will minimize risk while maximizing the return on investment.

In the end, scaling responsible AI before scaling AI lays out a paved road to success. Keeping in mind your organization’s technical maturity, cultural context and resource constraints will ensure that you achieve sustained success and build competitive market advantage, all while living your organization’s purpose and values.
