Augmented intelligence: How explainable AI is changing manufacturing jobs for the better

Explainable AI, or XAI, can help address 'algorithm aversion' by providing insights into decisions made, thereby building trust.

Julian Senoner
CEO, EthonAI
Stefan Feuerriegel
Professor, Institute of AI in Management, LMU Munich
  • Artificial intelligence is transforming the world of work, but its adoption in settings such as manufacturing has been slow.
  • Explainable AI, or XAI, can help address 'algorithm aversion' by providing insights into decisions made, thereby building trust.
  • Taking an XAI approach enables both humans and machines to perform at their best in sectors such as manufacturing.

Work is a fundamental human activity that is currently undergoing a significant transformation due to breakthroughs in artificial intelligence (AI). However, the adoption of AI in various work settings, such as manufacturing, has been slow.

One key reason is that humans are often averse to trusting opaque algorithms. While the technical advances in AI are well-documented and subject to much hype, behavioural factors effectively hinder AI’s large-scale adoption in manufacturing.

This article argues that a special type of AI is needed to clear these roadblocks: explainable AI (XAI).

The end of human work?

There is a common narrative that suggests AI is poised to replace human work on a large scale. Our research in diverse manufacturing settings and a recent World Economic Forum report on AI in manufacturing reject that vision.

Instead, AI holds the potential to augment human intelligence to solve work tasks more effectively, while also enriching the work experience. The key to unlocking this potential lies in explainable algorithms, which contrast with the often-opaque decision-making processes of conventional AI systems.

Despite its promise, the adoption of AI in the workplace has been slower than one might expect, driven in part by two critical factors. The first is 'algorithm aversion': a reluctance among humans to trust AI systems that operate as 'black boxes', providing decisions without any clear rationale.

The second challenge is that the opaque nature of many state-of-the-art algorithms prevents domain experts from benchmarking AI-generated recommendations against domain knowledge, making it difficult to identify and rectify errors. These issues not only erode trust in AI but also limit the scope for effective human-AI collaboration.

This is where explainable AI (XAI) offers a significant breakthrough. Our multi-year research journey in the manufacturing sector leads us to a unifying conclusion: explainability is the missing ingredient that catalyses AI adoption in manufacturing.

Why? Explainable AI works as an interpreter that bridges the gap between complex algorithmic processes and human understanding. Much like how an interpreter can make complex information accessible to a layperson, XAI demystifies the intricate logic of complex algorithms.

By transforming the AI's 'black box' into recommendations with clear explanations, XAI fosters greater trust and enables more effective human-AI collaboration. Not only is XAI making AI more ethical and accountable, but our research shows that it also improves work experience and job performance.

While AI has the capacity to sift through massive datasets and identify patterns far beyond human capability, it is the symbiosis of human expertise and AI recommendations that truly unlocks productivity gains.

Experts with explainable AI outperform AI alone

A compelling case study that illustrates the importance of XAI comes from our field experiments at Siemens. In the study, we compared the performance of two groups of factory workers on a visual quality-inspection task for electronic products.

The first group was assisted by conventional 'black-box' AI, while the second had the benefit of an AI that provided visual heatmaps to explain its predictions of potential quality issues.

Example of 'black-box' AI versus explainable AI in a visual inspection task
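
To make this concrete, below is a minimal sketch of one common way such heatmaps can be produced: occlusion sensitivity, which masks small regions of the inspected image and measures how much the model's defect score drops. The classifier, the image and the parameter values here are illustrative assumptions, not the system used in the Siemens study.

import torch

def occlusion_heatmap(model, image, target_class, patch=16, stride=8):
    # Slide a neutral patch over the image and record how much the predicted
    # probability of the target (defect) class drops; large drops mark the
    # regions the model relies on for its decision.
    model.eval()
    _, h, w = image.shape  # image: (channels, height, width) tensor
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = image.mean()  # mask one region
            with torch.no_grad():
                p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heat[i, j] = base - p  # importance = drop in model confidence
    return heat

In practice, the resulting grid is upsampled and overlaid on the product image, so the inspector can see at a glance which regions drove the AI's prediction.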

The results were striking: expert workers without explanations were more than three times as likely to erroneously override the AI's accurate recommendations. In contrast, those augmented by XAI were better able to judge when to trust the AI and when to rely on their own expertise, thereby outperforming the AI system alone.

This shows that XAI is not just about smarter machine decisions; it's a transformative approach that enables both humans and machines to perform at their best.

XAI is not just beneficial for providing decision support to operators on the shopfloor; it can also help manufacturing experts understand complex production systems by providing them with critical insights.

In another research project that we conducted in a semiconductor factory, we augmented process experts with XAI tools to explain root causes of quality issues. While the AI explained complex associations between production variables and quality outcomes, it took human expertise to turn these insights into effective improvement actions.
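
To illustrate how such explanations can be generated from tabular production data, the sketch below ranks process variables by permutation importance, one widely used attribution technique. The data file, the quality metric and the model choice are hypothetical assumptions for illustration, not the setup from the semiconductor study.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per production lot, with process variables
# and a quality outcome ('yield') as the target column.
data = pd.read_csv("production_lots.csv")
X = data.drop(columns=["yield"])
y = data["yield"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does predictive accuracy degrade when one
# variable is shuffled? Variables with large drops are candidate drivers of
# quality losses.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranking = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))  # top candidates, to be validated by process experts

Such a ranking points to associations, not causes, which is where the experts' follow-up work comes in.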

By cross-referencing the AI's explanations with their own domain knowledge, experts were able to design targeted experiments to confirm the underlying causes of quality losses. The result? Quality losses plummeted by over 50%, a testament to the efficacy of human-machine collaboration with AI-based explanations.

A compelling case for augmented intelligence

Our research strongly suggests that the future of manufacturing is not a battle of humans versus machines, but rather a collaborative enterprise that leverages the unique strengths of both.

Many work tasks will not and cannot be replaced or delegated to an AI. However, AI can augment human work, making tasks more effective and efficient, especially when the AI's decisions are explained.

By demystifying the "how" and "why" behind each algorithmic recommendation, XAI helps bring out the best of human problem-solving capabilities powered by AI-driven insights.

The popular debate should not be stuck in a ‘humans versus AI’ narrative, but should instead explore the abundant opportunities that this synergy offers. Our research underscores that collaborations between AI and humans can be especially effective when solving problems that neither could tackle alone.

We thus call for a paradigm shift: rather than “replacing” humans with AI, it will be necessary to “augment” humans. This requires a completely different set of tools, and we deem one to be particularly relevant: XAI.

