How to keep the 'human' in human resources with AI-based tools

AI-based tools can add value to human resources with human intervention. Image: Freepik.com

Matissa Hollister
Assistant Professor of Organizational Behaviour, McGill University
  • Organizations are increasingly using AI-based tools to manage human resources tasks.
  • HR professionals are concerned about the ethics of using AI, particularly when some products appear to be shrouded in mystery.
  • To help organizations overcome these challenges, the World Economic Forum has created an HR toolkit for the responsible use of AI.

The past decade has brought a rapid expansion in the availability of, and interest in, artificial intelligence (AI)-based tools for HR tasks. Yet these tools risk being oversold on the one hand and overly feared on the other. These opposing forces, in turn, make it difficult to implement AI-based HR tools in an effective and responsible way.

AI-based HR tools come in a wide range of forms, aiming to take on some aspect of HR work such as hiring, training, benefits, or employee engagement. They can be oversold when their creators rely on the aura and mystery surrounding AI to promote their product, furthering the belief that AI is all-powerful and incomprehensible to the average person.

At the same time, the use of AI in HR raises concerns given AI’s potential for problems in areas such as data privacy and bias. These concerns are amplified in the HR context, where decisions can have significant impacts on people’s lives. While the need for caution when deploying AI-based HR is well-justified, we should also acknowledge problems with the status quo, including well-documented patterns of bias as well as over-reliance on gut decisions. Too much fear of AI in HR will lead us to miss real opportunities to make HR processes fairer and more effective.

How can we promote the responsible use of AI-based tools in human resources?

For the past two years, I have been leading the World Economic Forum’s Human-Centred AI for HR project, where I have had the opportunity to work with over 50 experts from HR, data science, employment law, and ethics. Our project community included members with a wide range of views, but with a common desire to see a more thoughtful and responsible use of AI in this field. The end result was a toolkit for HR professionals, translating the broad principles of AI ethics into practical guidance.

We learned many important lessons along the way, as documented in the toolkit and the accompanying white paper Human-Centred Artificial Intelligence for Human Resources: A Toolkit for Human Resources Professionals. But one area was unexpectedly challenging. An AI principle that is particularly salient in the HR context is that humans should continue to have the final say in high-stakes decisions, in essence “keeping the human in Human Resources”. While there is widespread agreement on this principle, it can be surprisingly difficult to put into practice.

The experiences of our project community members suggested that the users of AI-based HR tools tend to drift toward two extremes. At one extreme are users who accept the recommendations of the tool without question, buying into the powerful AI aura. At the other extreme are users who fear and distrust AI systems, leading them to ignore its recommendations or to actively fight against the use of the tool altogether. Simply telling the user to make the final decision also risks reintroducing the problems of the status quo that the AI-based tools were aiming to address.

Keeping humans truly in charge of AI-based HR tools, therefore, is critically important but requires effort in three areas: bringing together and equipping people with the necessary skills; dispelling the AI aura; and establishing the necessary organizational infrastructure.

1. Involve multiple stakeholders and equip them with the necessary skills

Overcoming the fear of AI in HR requires involving multiple stakeholders in the process of selecting and adopting AI-based HR tools, including HR professionals as well as the workers who will be impacted by such tools. Furthermore, everyone should be encouraged to learn the basics of how AI systems work. Contrary to expectations, these basics are relatively easy to grasp (the toolkit provides more details, but the most important fact to know is that current AI systems are developed by looking for patterns in real-world data).
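
To make that basic fact concrete, here is a minimal, purely hypothetical sketch (in Python, using scikit-learn) of how such a tool “learns”: it fits a model to historical decisions and then reproduces whatever patterns, good or bad, exist in that data. The feature names and numbers are invented for illustration and do not come from the toolkit or any specific product.

```python
# A toy illustration only: the data, feature names and outcomes are invented.
# It shows the core idea that an AI-based HR tool "learns" by fitting a model
# to patterns in historical, human-made decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical candidate records: [years_experience, skills_test_score]
X = np.array([
    [1, 55], [2, 60], [3, 72], [4, 80],
    [5, 65], [6, 90], [7, 85], [8, 95],
])
# Past hiring decisions made by people (1 = hired, 0 = not hired).
# The model can only reproduce whatever patterns, fair or biased, exist here.
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)  # "learning" = finding patterns in the historical data

# The tool then scores a new candidate based on those learned patterns.
new_candidate = np.array([[4, 78]])
print(model.predict_proba(new_candidate)[0, 1])  # estimated probability of "hire"
```

Seeing the process laid out this plainly helps stakeholders ask the right questions: what data was the tool trained on, and what is (and is not) captured in it?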

Armed with this basic understanding, a multi-stakeholder group of individuals can prove to be a more valuable resource for deciding on an AI-based HR tool than a data scientist or IT specialist. Technical experts are of course also valuable, but the multi-stakeholder group brings an understanding of how the organization actually works, what changes are needed to the status quo, and what is (and what is not) captured in the data that the AI systems will be using.

This greater involvement and understanding will lead both to better selection of AI-based HR tools that fit the organization and to more informed, less fearful users.

2. Provide a balanced understanding of AI tools

The creators of AI-based HR tools, meanwhile, should move away from selling AI as a mysterious and powerful tool and toward emphasizing thoughtful, understandable, and trustworthy designs. Not only is this the more responsible path, but with growing concerns around the potential ethical and legal challenges of AI in HR, organizations are increasingly valuing this approach.

A tool that is clear about how it works, provides explanations for its recommendations, and is less grandiose in its claims will provide organizations and users with a better understanding of how the recommendations generated by the AI algorithm can be combined with human input and oversight in an effective way.
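
As a hedged illustration of what such an explanation might look like in practice, the sketch below (continuing the hypothetical example above) reports how much each input contributed to a candidate’s score, so a human reviewer can judge whether the drivers of the recommendation make sense. The feature names, data and model are invented and are not drawn from any particular product.

```python
# Illustrative sketch only: one simple way a tool can explain a recommendation
# is to report how much each input contributed to the score. The features,
# data and model are hypothetical, continuing the earlier toy example.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "skills_test_score"]
X = np.array([[1, 55], [2, 60], [3, 72], [4, 80],
              [5, 65], [6, 90], [7, 85], [8, 95]])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])
model = LogisticRegression().fit(X, y)

candidate = np.array([4, 78])
# Contribution of each feature to the candidate's log-odds of a "hire" score.
contributions = model.coef_[0] * candidate
for name, value, contribution in zip(feature_names, candidate, contributions):
    print(f"{name} = {value}: contribution {contribution:+.2f}")
# A human reviewer can check whether these drivers are reasonable before deciding.
```

Even a simple read-out like this shifts the conversation from “the algorithm said so” to a discussion the user can actually engage with and overrule.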

3. Establish an organizational infrastructure

Finally, organizations should recognize that the effective use of AI-based HR tools requires considerable planning, with a key aspect of that planning centering on the human-AI relationship.

One aspect of this planning includes ensuring that the previous two steps are taken: that users have adequate input into and understanding of the AI system, and that the tool being used is transparent and understandable. In addition, organizations need to give considerable thought to, and provide clear guidance on, how employees should use the tool and combine its recommendations with their own insights and judgement.

Harnessing the power of AI in human resources

Simply stating that the human will have the final say in a high-stakes decision is not sufficient. If users lack confidence or trust, if the AI system is too opaque, or if organizations are vague about how the process should work, the use of AI-based HR tools will slide toward the extremes of either the algorithm or the human having sole control. Both algorithmic and human decisions are imperfect, but with a clear understanding of those imperfections, careful thought and purposeful practices, organizations can work toward systems that build on the strengths of both humans and machines to realize the full potential of AI in HR.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
