
How to use AI hiring tools to reduce bias in recruiting


The recruitment process is full of conscious and unconscious biases. Image: Unsplash/Christina @ wocintechchat.com

Nicolás Rivero
Writer, Quartz
This article is part of: The Jobs Reset Summit
  • AI tools are being used to reduce bias in recruiting.
  • These range from machine learning tools that optimize job descriptions to AI-powered psychological assessments of traits like fairness.
  • But there is inherent bias in AI, depending on the data used to train the algorithms.

Dozens of software firms have sprung up to sell companies AI recruitment tools, which they promise will cut bias out of their clients’ hiring processes. In promotional materials and press releases, they argue that human recruiters are irredeemably biased while machines are objective, so companies and job candidates alike will benefit from AI-driven hiring.


Of course, it’s not so simple. AI algorithms are not inherently objective, and hiring software can introduce new layers of bias and discrimination, excluding qualified job-seekers and leaving companies open to negative headlines.

But if companies apply AI in thoughtful ways, and maintain a healthy dose of skepticism toward AI vendors’ commercial claims, there are ways to use algorithms to cut down on bias in hiring.

Which companies sell AI hiring tools?

There’s an AI for every step in the recruitment process. Textio uses machine learning to help hiring managers optimize job descriptions. Facebook, LinkedIn, and ZipRecruiter target job postings to likely applicants, based on their algorithms’ recommendations.

Once the applications are in, AI vendors like Ideal, CVViZ, Skillate, and Sniper AI sell resume scanners that screen and surface promising candidates. And after you pull candidates in for an interview, HireVue, Modern Hire, and Harqen promise their algorithms can analyze those conversations to predict which applicants will perform best on the job.

There are also vendors that offer AI-powered psychological assessments of traits like steadiness, generosity, and fairness. Humantic analyzes applicants by trawling text from their applications and online profiles. Pymetrics claims it can do the same through a series of simple games.

How do AI hiring tools work?

The big question about all of the AI applications for hiring is what data they’re trained on. A machine learning algorithm is only as good as the quality of the information that goes into it.

Most AI recruiting tools harvest training data from current employees. To train a resume scanner, for example, a company would gather up current workers’ resumes and compare them to performance data like sales numbers or annual reviews. These data sets teach algorithms how to identify resumes that most resemble those of the company’s top performers.
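To make that pipeline concrete, here is a minimal sketch of how such a screener might be trained, assuming a simple logistic regression over a handful of resume features. The file names and feature columns are hypothetical, not any vendor's actual system.

```python
# Minimal sketch of training a resume screener on current employees.
# File names and feature columns are hypothetical, not any vendor's system.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Current workers' resumes joined with performance labels
# (e.g. 1 = top performer per annual review, 0 = otherwise).
employees = pd.read_csv("employee_resumes.csv")  # hypothetical file
features = ["years_experience", "num_certifications", "gpa"]  # illustrative
X_train, X_test, y_train, y_test = train_test_split(
    employees[features], employees["top_performer"], test_size=0.2, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)

# Score incoming applicants: resumes that most resemble the company's
# top performers rank highest.
applicants = pd.read_csv("applicant_resumes.csv")  # hypothetical file
applicants["score"] = model.predict_proba(applicants[features])[:, 1]
print(applicants.sort_values("score", ascending=False).head(10))
```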

Of course, that approach tends to surface candidates who look just like the existing workforce—effectively recreating the biases many companies are trying to overcome.

“AI is inherently biased because there is no such thing as unbiased data,” said Shervin Khodabandeh, who co-leads Boston Consulting Group’s AI business in North America. The challenge is even more pronounced in hiring, he explained, because the available data tends to be very limited: Companies only hire so many candidates, and it takes years to find out if they’ll become top employees.

AI vendors use workarounds to overcome the bias in their training data. According to Manish Raghavan, a Cornell PhD candidate who has studied hiring algorithms, many vendors design their algorithms to comply with the Four-Fifths Rule, a US legal guideline that says the selection rate for any demographic group must be at least 80% of the rate for the group selected most often. So to shield companies from legal liability, a hiring AI may be programmed, for example, to recommend an equal number of male and female candidates.
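The rule itself reduces to simple arithmetic. Here is one way to express the check in code; the group counts are invented for illustration.

```python
# Four-Fifths Rule check: every group's selection rate must be at least
# 80% of the highest group's selection rate.
def four_fifths_check(selected: dict, applicants: dict) -> dict:
    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())
    return {g: rate / highest >= 0.8 for g, rate in rates.items()}

# Invented example: 50 of 200 men selected (25%) vs. 18 of 100 women (18%).
# The ratio 18% / 25% = 0.72 falls short of 0.8, so the rule is violated.
print(four_fifths_check({"men": 50, "women": 18}, {"men": 200, "women": 100}))
# {'men': True, 'women': False}
```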

What an AI-generated candidate profile looks like. Image: Cornell University

But Raghavan said that doesn’t mean the algorithm will perform equally well across demographic lines. If your training data comes mostly from male employees, the AI might learn to associate success with traits like listing high school football on a resume or being called a “rock star” in a reference letter (a term that tends to describe men).

“If you have a bunch of features like that, you might wind up with a tool that’s very good at predicting the top performing men and mediocre at figuring out the top performing women,” Raghavan explained. If a company then interviews 10 of the most qualified men and a random sample of 10 women in the applicant pool, its hiring practices will appear superficially unbiased—but the job is still more likely to go to a man.
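One way to catch this failure mode, given access to the model and a held-out test set, is to score the tool separately for each demographic group rather than in aggregate. A rough sketch, assuming a fitted scikit-learn-style classifier and hypothetical column names:

```python
# Per-group evaluation: a screener can satisfy the Four-Fifths Rule on
# selection rates while still predicting near-randomly for one group.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_group(model, X_test: pd.DataFrame, y_test: pd.Series,
                 groups: pd.Series) -> dict:
    """Return the model's AUC separately for each demographic group."""
    scores = model.predict_proba(X_test)[:, 1]
    return {
        g: roc_auc_score(y_test[groups == g], scores[groups == g])
        for g in groups.unique()
    }

# An AUC near 0.5 for one group means the tool is essentially guessing
# for those applicants, even if overall selection rates look balanced.
```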

How have AI-powered hiring tools backfired?

Amazon infamously developed an AI resume scanner that was revealed in 2018 to be biased against women. Designed to be the “holy grail” of recruiting AI, it penalized resumes that contained the word “women’s” (as in a women’s sport or club) and had a bad habit of recommending candidates with no relevant skills or experience.

HireVue drew intense scrutiny for its interview-analysis AI, which analyzes candidates’ facial movements, word choice, and tone of voice. Ethics watchdogs from NYU’s AI Now Institute called it “pseudoscience” and “a license to discriminate” in a Washington Post story that named companies using the software, including Hilton and Unilever. Amid the backlash, the US Federal Trade Commission received a formal complaint against HireVue, and Illinois passed a law requiring companies to disclose their use of similar AI tools.

While these episodes generated negative press coverage, McKinsey technology consultant Michael Chui said bad headlines shouldn’t be the main concern for a company thinking about injecting AI into its hiring process. “The biggest risk is not reputation risk,” he said. “It’s that you’re not getting the best qualified candidates.”

How can I avoid making the same mistakes?

The most important lesson, according to BCG’s Khodabandeh, is that companies should never blindly follow an AI’s recommendations, and human hiring managers should retain control over decision-making. “Nobody should trust any algorithm that tells you, ‘This is the person you should hire,’” he said.

Instead, Khodabandeh suggests flipping that dynamic on its head. Rather than asking AI to surface the best candidates and having a human hiring manager rubber-stamp its decisions, companies should use AI to audit their own recruiting practices. The ideal recruiting AI would pore over troves of hiring data, not to predict who the next best hire will be, but to point out who has tended to be successful in the past.

“One of the things good AI algorithms will do is tell you where your biases are. You’ll look at it and realize, I’m already favoring certain attributes unjustly and irresponsibly or unethically without even knowing it,” Khodabandeh said. “This is where a human needs to come in and actively remove those biases and make decisions about which attributes to look out for in the next round of recruiting and hiring.”
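In practice, that kind of audit can be as simple as fitting a transparent model to past hiring decisions and reading off which attributes drove them. A minimal sketch, with invented file and column names:

```python
# Audit sketch: fit an interpretable model on historical hiring decisions,
# then inspect which attributes the company has been favoring or penalizing.
# File and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

history = pd.read_csv("past_hiring_decisions.csv")  # hypothetical file
features = ["years_experience", "elite_school", "referral", "employment_gap"]
X = StandardScaler().fit_transform(history[features])  # comparable scales
audit = LogisticRegression().fit(X, history["hired"])

# Large coefficients flag attributes that have weighed heavily in past
# decisions; humans then judge whether each one is actually justified.
for name, coef in sorted(zip(features, audit.coef_[0]),
                         key=lambda item: -abs(item[1])):
    print(f"{name}: {coef:+.2f}")
```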

Companies should also be savvy about auditing any algorithms they use. Raghavan, the Cornell researcher, said most AI vendors will only share the inner workings of their algorithms with their clients. “A lot of clients don’t necessarily have the ability to evaluate the claims they’re making even when provided with the evidence,” he said.

Potential buyers who have in-house data scientists can take a look under the hood and kick the tires a bit. For those that don’t, Raghavan’s research suggests a couple of broad warnings: First, you can’t always trust—or even understand—the correlations that AI systems find between workers’ job performance and their face, voice, or behavior.

Second, the statistical bar is pretty low for algorithms that claim to offer psychological insights about job applicants. The correlations they draw just have to be, on average, a bit better than random. Whether they hold true for any particular individual is basically a crapshoot.
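A quick back-of-the-envelope simulation shows how little that bar buys. Suppose, generously by psychometric standards, that a trait score correlates with job performance at r = 0.3 (an assumed figure, not from any vendor); even then, the score picks the better performer out of two random candidates only about 60% of the time:

```python
# Simulate a trait score correlated with performance at r = 0.3 (an assumed,
# fairly generous figure) and see how often it ranks candidate pairs correctly.
import numpy as np

rng = np.random.default_rng(0)
n, r = 100_000, 0.3
trait = rng.standard_normal(n)
performance = r * trait + np.sqrt(1 - r**2) * rng.standard_normal(n)

# For random pairs of candidates, how often does the higher trait score
# belong to the better performer? 50% would be a pure coin flip.
i = rng.integers(0, n, 50_000)
j = rng.integers(0, n, 50_000)
correct = np.mean(np.sign(trait[i] - trait[j]) ==
                  np.sign(performance[i] - performance[j]))
print(f"Correct ordering of candidate pairs: {correct:.0%}")  # ~60%
```

Sometimes, the most high-tech tool a recruiter needs is their gut.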
