7 Feb, 2022

AI Recruiting Tools May Be the Future, But Proceed With Caution

The vision is simple: Artificial Intelligence lends lightning-fast computing power and machine learning to make recruiting easier — and, even more importantly, removes the element of human bias.

But we’re not quite there yet. Technology—and AI recruiting technology in particular—is only as free of bias as its code. Or, as Charlotte Burrows, chair of the U.S. Equal Employment Opportunity Commission (EEOC) recently warned, “We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

A recent study found that 42% of employers felt the pandemic boosted their digital transformation efforts, and 79% agreed that their organization’s digital maturity had increased. This surge in technology use has prompted more companies to introduce AI into their day-to-day operations, which raises an increasingly important question: when AI recruiting tools introduce bias, what has gone wrong, and why?

The following case studies, alongside some AI tools that are helping companies like Enspira HR employ diverse hiring practices, paint a fuller picture of the who, what, when, where, and why of AI recruitment tools, and of how companies can ensure AI supports both efficiency and equitable hiring decisions.

TWO CAUTIONARY TALES

Want “responsible AI”? Equitable tenets must be built into autonomous systems from the very beginning. Or, as Julie Sweet, Accenture’s CEO and leader of the Business Roundtable technology committee, recently put it, “You cannot reverse engineer responsible AI.”

Consider these two failed tools, for example:

Amazon’s Secret AI Recruiting Tool Penalized Women. In October 2018, Reuters reported that Amazon had scrapped a secret recruiting tool that used a five-star system to rank potential hires. The problem? The tool taught itself from the previous 10 years of submitted resumes, a period dominated by male hires. As a result, it penalized resumes that included the word “women’s” and downgraded graduates of two all-women’s colleges.

Because Amazon’s efforts perpetuated bias, and because problems with the data underpinning the AI’s decision-making meant candidate rankings came back “almost at random,” according to Reuters’ reporting, the project was eventually abandoned.
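
To make this failure mode concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, using invented resumes and outcomes; this is not Amazon’s system or code) of how a text classifier trained on historically skewed hiring decisions can learn to penalize a token like “womens”:

```python
# Toy illustration: when historical "hired" labels skew heavily toward one group,
# a text model can learn to penalize tokens correlated with the other group.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: in this biased historical record, resumes mentioning
# "womens" were rarely marked as hires.
resumes = [
    "captain mens chess club software engineer python",
    "software engineer python aws leadership",
    "captain womens chess club software engineer python",
    "womens college graduate software engineer aws",
    "software engineer java leadership aws",
    "womens soccer team captain software engineer java",
]
hired = [1, 1, 0, 0, 1, 0]  # skewed historical outcomes, not true ability

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "womens" comes out negative: the model has
# encoded the historical skew as a penalty, exactly the pattern described above.
idx = vectorizer.vocabulary_["womens"]
print("weight for 'womens':", model.coef_[0][idx])
```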

HireVue’s Facial Recognition Software Used “Snake Oil” Metrics. In 2019, AI researchers blew the whistle on facial analysis software produced by the recruiting-tech firm HireVue. They claimed the tool graded applicants on “snake oil” metrics such as facial movements, tone of voice, and mannerisms, factors with no demonstrated bearing on how well someone can do a job, yet ones that fed directly into the “employability score” the system assigned to each applicant.

HireVue used algorithms to assign traits and qualities to the candidate’s facial expressions during video interviews, factoring them into a candidate’s overall score.

According to Merve Hickok, a lecturer on AI ethics and founder of Lighthouse Career Consulting, categorizing expressions is highly problematic and cannot be used to infer traits. “Facial expressions are not universal — they can change due to culture, context, and disability — and they can also be gamed.”

In January 2021, HireVue announced it would no longer offer the facial analysis component within its screening assessment tools. 

CAN AI RECRUITMENT TOOLS HELP MITIGATE BIAS?

Many experts agree that AI bias can be mitigated, so what does responsible AI in recruiting require? A recent paper from Business Roundtable, an association of more than 230 CEOs, lays out 10 Core Principles for Responsible AI (a brief auditing sketch follows the list):

  1. Innovate with and for diversity.
  2. Mitigate the potential for unfair bias.
  3. Design for and implement transparency, explainability, and interpretability.
  4. Invest in a future-ready AI workforce.
  5. Evaluate and monitor model fitness and impact.
  6. Manage data collection and data use responsibly.
  7. Design and deploy secure AI systems.
  8. Encourage a company-wide culture of responsible AI.
  9. Adapt existing governance structures to account for AI.
  10. Operationalize AI governance throughout the whole organization.
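
One concrete way to act on principles 2 and 5 is to audit a screening tool’s selection rates by demographic group on a regular cadence, for example against the EEOC’s long-standing four-fifths rule of thumb. The sketch below uses invented numbers to show what such a check might look like; it is an illustration, not part of the Business Roundtable paper.

```python
# Minimal adverse-impact audit using the EEOC "four-fifths rule": each group's
# selection rate should be at least 80% of the highest group's rate, otherwise
# the tool's impact deserves closer review. Numbers below are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool advanced."""
    return selected / applicants

# Hypothetical screening outcomes from an AI resume-screening tool.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 300, "selected": 60},
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    status = "OK" if impact_ratio >= 0.8 else "REVIEW"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {status}")
```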

There are a number of promising signs for certain segments of AI recruiting tools. First, though no federal regulatory framework currently exists for AI business tools, it’s likely on the horizon: Several states, including Illinois and New York, have introduced legislation to that effect, and in 2021, the European Union proposed strict rules for AI use that could be a benchmark for future regulation.

Without a doubt, AI will play a major role in how companies recruit and hire in the future. While the kinks are still being worked out, here are some existing AI tools that can be used to promote equity:

  • Textio, an AI-assisted augmented writing tool, helps ensure job listings use gender-neutral language (a toy version of this kind of check appears after this list).
  • AI recruitment platform Talenya goes beyond scanning job descriptions, helping employers analyze job requirements within a listing to increase diverse talent participation in the hiring pipeline.
  • Diversio improves DEIB initiatives by integrating with a company’s communication platform and flagging cultural insensitivity and unconscious bias. The tool’s social media barometer also tracks companies’ mentions to measure their public perception, then identifies programs and policies from a catalog of more than a thousand validated solutions to improve diversity and inclusion efforts.
  • Eightfold’s talent acquisition software makes it easy for employers to cast a wide net and increase diversity. Its algorithm uses billions of data points pulled from multiple sources, including career pages, resume databases, job census, and company data, to break down an applicant’s resume into skills and match them with skills needed by employers. Additionally, it identifies applicants’ adjacent skills and fills the talent pool with candidates who may have otherwise been overlooked.
  • Clovers is an intelligent interviewing platform that integrates with video technologies like Zoom and Microsoft Teams to improve the interview scoring process, helping recruiters make informed decisions and keeping employers from relying solely on memory or notes.
  • Because they post 10-17% fewer skills on job sites and write less about themselves, applicants of color rank lower in candidate search results and thus are often overlooked. AI talent sourcing platforms like Plum, Eightfold, and Pymetrics use games and assessments to measure potential.
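
Writing-analysis tools like Textio work, at the simplest conceptual level, by flagging language patterns in job descriptions. The sketch below is a toy version of that idea only: the word lists are a small, hand-picked assumption for demonstration, and commercial products rely on far richer models and data.

```python
# Toy gendered-language flagger for job listings. The word lists are invented
# for illustration; real writing-analysis tools use much larger vocabularies
# and statistical models.
import re

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "aggressive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative"}

def flag_coded_terms(listing: str) -> dict:
    """Return gender-coded words found in a job listing, grouped by category."""
    words = set(re.findall(r"[a-z']+", listing.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

listing = "We need a competitive coding ninja with an aggressive approach to sales targets."
print(flag_coded_terms(listing))
# {'masculine_coded': ['aggressive', 'competitive', 'ninja'], 'feminine_coded': []}
```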