These are the flaws of AI in hiring and how to tackle them
AI systems that measure personality can lack reliability and be integrated into hiring without the knowledge of recruiters or applicants, an expert warns. Image: Pexels/Karolina Grabowska
- Companies are increasingly recruiting staff using AI-based algorithms, but this can introduce bias and perpetuate disparities in hiring, research has shown.
- AI systems that measure personality can lack reliability and be integrated into hiring without the knowledge of recruiters or applicants, an expert warns.
- Until these tools are better regulated, job seekers need strategies to pass through AI-powered processes, such as using the technology to screen their resumes before sending, she says.
So how do candidates dodge problematic AI hiring tools?
For many workers, the holiday season will bring career uncertainty after they or someone in their life has been laid off. Companies including Meta and Amazon have cut thousands of staffers, and the list is growing. A recent Joblist survey found that nearly a quarter of employed American job seekers say they or someone they know has been affected by layoffs this year.
If you’re in search of a new job, you may encounter one emerging way companies are recruiting and hiring: AI-based algorithms. And experts point out that it’s not always clear how these algorithms influence the process.
“Thanks to AI, hiring is undergoing a quiet, yet earth-shaking revolution. Many Fortune 500 companies employ AI-based solutions to weed through the millions of job applications the companies receive every year,” researchers from NYU Tandon wrote in a recent project. “The problem: many companies don’t want to reveal what technology they are using, and vendors don’t want to reveal what’s in the black box, despite evidence that some automated decision making systems make biased or arbitrary decisions.” Their argument: hiring algorithms, and the companies that use them, need to be held accountable.
Beyond being opaque, algorithms have been shown to introduce bias and perpetuate disparities in hiring. But regulation is on the way. In 2023, a New York City law restricting the use of AI tools in the hiring process will go into effect—although it’s still unclear how regulators will be able to enforce it.
Why do companies use hiring algorithms?
Mona Sloane, a senior research scientist at the NYU Center for Responsible AI who worked on the project, says companies rely on automated decision systems for two reasons: to analyze a large volume of applicants, or to find workers with a niche skill set or level of experience. Employers looking to fill internship, sales, or service-industry jobs, for example, turn to algorithms to narrow the pool of candidates. For niche tech roles such as software engineers, machine learning engineers, and data scientists, companies may use automated decision systems to find workers and reach out to them even if they are not actively looking for a job.
Ultimately, employers are notoriously risk-averse, says Denise Rousseau, professor of organizational behavior and public policy at Carnegie Mellon University. Often, they’re looking for any reason to disqualify candidates—including long stretches of unemployment—as they narrow their options.
The flaws of AI hiring
In a recent paper, Sloane found that AI systems built to measure personality were not reliable testing instruments. And while not all companies use personality assessments in hiring, she is particularly concerned about cases where such tools are integrated into the process without the knowledge of recruiters or applicants.
If human resource teams and job seekers don’t know how AI technology is affecting the evaluation process, the algorithm could be amplifying bias and discriminatory practices without anyone noticing at first, Sloane says. And the applicants seeking jobs will ultimately suffer for it.
“Right now, candidates don’t have knowledge and they don’t have recourse. And the thing is that recruiters don’t even know why certain candidates are on page one of the ranking, or why certain people are on page ten of the ranking when they search for candidates,” Sloane says. “So there is just a whole lack of transparency around this that has delayed developing mechanisms for recourse.”
Thankfully, Sloane says she has had conversations with regulators at the local, state, federal, and international levels about how to regulate these algorithms. New laws like New York City’s make her hopeful, and international regulation may also be on the way: she points to a European Union proposal that could likewise limit the use of AI in hiring.
How to tackle hiring algorithms as a job candidate
But until these tools are better regulated, job seekers need strategies to pass through AI-powered processes.
Sloane suggests techniques informed by AI itself to help optimize a résumé for hiring algorithms. For one, submit a simplified version of your résumé, with straightforward design and text, so an AI can parse it accurately. Consider keeping separate versions for AI and human review. And software such as Jobscan and VMock can check and refine your résumé for AI screening before you submit it.
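None of these vendors disclose their matching logic, but a toy sketch illustrates why plain, straightforward text helps: many screening tools reduce a résumé to extracted text and match keywords from the job posting against it, so an ornate layout that extracts badly can cost an otherwise strong candidate. The `keyword_coverage` helper below is hypothetical, not taken from any real screening product.

```python
import re

def keyword_coverage(resume_text: str, job_keywords: list[str]) -> float:
    """Return the fraction of job keywords found in the resume text.

    Matching is case-insensitive and whole-word, which is roughly what
    a simple applicant-tracking-system keyword filter might do.
    """
    found = sum(
        1 for kw in job_keywords
        if re.search(r"\b" + re.escape(kw) + r"\b", resume_text, re.IGNORECASE)
    )
    return found / len(job_keywords)

resume = "Data analyst with five years of SQL, Python, and Tableau experience."
keywords = ["SQL", "Python", "machine learning", "Tableau"]
print(f"{keyword_coverage(resume, keywords):.0%}")  # 3 of 4 keywords match: 75%
```

If the same résumé were saved in a heavily designed template that extracted to garbled text, the score would drop even though the candidate's qualifications had not changed, which is exactly the failure mode the simplified-résumé advice guards against.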
If your résumé risks being screened out, for example because it shows long gaps in employment, another strategy is to make yourself look as productive as possible as a candidate, such as by continuing your education or picking up microcredentials.
And as legislators work out how to regulate emerging hiring tech, both Sloane and Rousseau recommend connecting with employees at the company where you are seeking an interview. For employers reviewing applications, knowing that an applicant has a connection to the company can signal a better understanding of how it works, Rousseau says.
Applying for a job can be overwhelming—even without algorithms in the way. But while lawmakers work out how to regulate this technology, job seekers and recruiters should know the limitations of tech-enabled hiring, and bring people back in to correct its shortcomings.