AI hiring: Is it legally safe?

Outsourcing hiring to artificial intelligence (AI) is supposed to remove bias and subjectivity from the hiring process. Numerous federal, state and city laws govern how you recruit and hire workers and punish employers who discriminate. Delegating some of the decision-making to an algorithm, the theory goes, removes human error and prevents failure-to-hire lawsuits.

For example, the Civil Rights Act of 1964 bars discrimination based on race, color, religion, sex and national origin. Similarly, the Age Discrimination in Employment Act (ADEA) protects workers age 40 and older. The Americans with Disabilities Act (ADA) compels employers to provide reasonable accommodations to qualified disabled workers and applicants.

Human beings in the HR office who are responsible for hiring may have conscious or unconscious biases. These can result in a workplace where a particular sex, age group or race is overrepresented. In theory, removing bias from the equation should leave employers with the best candidates based on knowledge, skills and abilities. Several software packages attempt to do just that. If the hype is to be believed, the AI era should be the dawn of a perfect meritocracy. But is that really the case?

Possibly, if employers follow common-sense guidelines in outsourcing hiring to AI. But there are plenty of traps that could land employers in legal trouble.

What is AI?

Artificial intelligence is the application of computer technology to tasks that normally require human intelligence. For example, a computer program that recognizes speech, translates languages and screens resumes for qualifications is using artificial intelligence. AI promises streamlined workplace decision-making. Here’s how.

The software reads resumes, compares them with job descriptions and draws a shortlist of well-qualified applicants from a sea (or cesspool) of applications. Software packages have greater scope than hiring managers, who don’t know what they don’t know. A manager may recognize a Stanford IT degree, but not know which Chinese or Indian schools produce the best programmers. Computer programs running AI are limited only by the information developers use to create their algorithms, and that knowledge base can be vast.
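To make the screening step concrete, here is a minimal sketch of how an automated shortlist might be drawn, assuming a simple keyword-match scoring scheme. Commercial tools use far more sophisticated, proprietary models; the function names and applicant data below are hypothetical.

```python
# Minimal sketch of automated resume shortlisting (illustrative only;
# real vendors use far more sophisticated, proprietary models).

def score_resume(resume_text: str, required_skills: set[str]) -> float:
    """Return the fraction of required skills mentioned in the resume."""
    words = set(resume_text.lower().split())
    return len(required_skills & words) / len(required_skills)

def shortlist(resumes: dict[str, str], required_skills: set[str], top_n: int = 5):
    """Rank applicants by skill match and return the top candidates."""
    scored = {name: score_resume(text, required_skills)
              for name, text in resumes.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Hypothetical applicants
resumes = {
    "applicant_a": "Python SQL machine learning five years experience",
    "applicant_b": "Java project management agile",
}
print(shortlist(resumes, required_skills={"python", "sql", "java"}))
```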

Recent developments using AI in hiring should, however, be approached with caution. Some of these new AI technologies may do more harm than good.

AI and hiring

AI should help eliminate human bias from the selection process. With AI, no one in the HR office sees protected characteristics during screening. That makes it hard for rejected candidates to claim your organization excluded them because of age, gender or another protected characteristic. If you never see a candidate’s sex, name or age before you receive a list of qualified applicants, for example, you eliminate pre-interview selection bias.
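One common way to implement this kind of “blind” pre-interview screening is to strip identifying fields from each application before any screener, human or algorithmic, sees it. A minimal sketch, assuming applications arrive as simple records (the field names are hypothetical):

```python
# Hypothetical sketch: redact protected or identifying fields from an
# application record before it reaches a screener (human or algorithm).

PROTECTED_FIELDS = {"name", "age", "sex", "date_of_birth", "photo_url"}

def redact(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

application = {
    "name": "Jane Doe",
    "age": 52,
    "sex": "F",
    "skills": ["python", "sql"],
    "years_experience": 12,
}
print(redact(application))  # screener sees only skills and experience
```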

A word of caution: the old adage “garbage in, garbage out” applies to AI in hiring. Make sure you provide the AI vendor with updated and accurate job descriptions and minimum-qualification lists. If you use outdated job descriptions, the AI program will search for candidates who don’t meet your current needs. They may be under- or overqualified for your actual openings.

When outsourcing hiring to AI, make sure you know exactly what you are buying. Not all programs are created equal. Some are geared to specific industries, while others are more generalized and appropriate for screening for low-wage, low-skill jobs. Most rely on electronic resumes or applications and screen only information submitted directly to the system. Some go further and search numerous online resume databases for possible candidates. These may identify candidates you never would have interviewed or who didn’t know you had openings.

It’s best to purchase a system from a company with a good track record and reputation. You may want to inquire about any past or pending litigation involving the AI system before signing on. Unfortunately, employers aren’t off the hook for hiring bias just because they outsourced their decision-making to AI.

Consider what happened when Amazon, the online retailer, began using AI to hire. The company had been building an internal AI program since 2014, aiming to automate the search for top talent. The program gave candidates a score of one to five stars. Amazon essentially input 100 resumes, the program identified the top five candidates, and the company hired only from those five. Then came the discovery: the AI was screening out women for tech jobs. The algorithm had been trained on resumes accumulated over a decade, a period when most applicants for tech jobs were male. Essentially, the retailer was automating bias. It scrapped the program.

AI and facial recognition

Over the last few years, artificial intelligence has been used to develop programs that interview candidates. These rely on facial recognition to “read” answers to questions and use the results to rank candidates. For example, HireVue uses candidates’ computer or cellphone cameras to analyze facial movements, word choice and speaking voice. It then generates an “employability” score based on characteristics important to the employer. The developer claims more than a hundred employers use the system, cutting hiring time from six weeks to five days.

HireVue and other hiring programs like it typically analyze current employees before screening new candidates. Then, using the employer’s ranking of current employees, the program looks for candidates who score similarly, assuming they will be equally successful. While this may surface applicants who “fit” current high achievers, it may also screen out others. If those screened-out candidates belong to a different protected class, disparate impact may be baked into the selection process. And that can spark lawsuits and EEOC interest.
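A rough sketch of this “fit the incumbents” pattern shows how the bias gets baked in. The sketch below assumes candidates and employees are reduced to numeric feature vectors and compared by cosine similarity; that is an illustrative stand-in, not HireVue’s actual (proprietary) method, and all data is made up. Whatever traits the incumbent group shares end up driving the ranking.

```python
import math

# Hypothetical sketch: rank candidates by cosine similarity to the average
# feature vector of current high performers. If incumbents skew toward one
# demographic, traits correlated with that group drive the ranking, which
# is how disparate impact gets baked into the selection process.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def profile(vectors):
    """Average the feature vectors of current high performers."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

high_performers = [[0.9, 0.2, 0.8], [0.8, 0.3, 0.7]]  # made-up feature vectors
candidates = {"cand_1": [0.85, 0.25, 0.75], "cand_2": [0.2, 0.9, 0.3]}

target = profile(high_performers)
ranked = sorted(candidates.items(),
                key=lambda kv: cosine(kv[1], target), reverse=True)
print(ranked)  # cand_1 "fits" the incumbents and ranks first
```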

The algorithm behind HireVue’s artificial intelligence is proprietary, as is true of most other AI hiring programs. As a result, candidates rejected based on their employability score cannot meaningfully challenge the results. One rights group has filed a Federal Trade Commission complaint challenging the program as unfair and possibly deceptive. It alleges that the system’s screening is “biased, unprovable and not replicable” and therefore invalid.

Before outsourcing hiring to AI systems, at a minimum ask what recourse your organization has if candidates sue you. Will the company testify on your behalf about its software’s reliability and efficacy? If not, think twice before signing up. Some artificial intelligence designers also adhere to a set of design principles for making AI fair, including auditing the algorithm for bias. Ask whether the program has been audited for bias before signing on.
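One audit employers can run themselves, assuming the vendor or applicant-tracking system can report selection counts by group, is the four-fifths (80%) rule from the EEOC’s Uniform Guidelines: if any group’s selection rate falls below 80% of the highest group’s rate, that is evidence of possible adverse impact. A minimal sketch, with hypothetical numbers:

```python
# Sketch of an adverse-impact check using the EEOC "four-fifths" rule:
# a group's selection rate below 80% of the highest group's rate is
# evidence of possible disparate impact.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applied)."""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def impact_ratios(outcomes: dict) -> dict:
    """Each group's selection rate relative to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening results from an AI shortlisting tool
outcomes = {"men": (40, 100), "women": (24, 100)}
for group, ratio in impact_ratios(outcomes).items():
    flag = "POSSIBLE ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here women are selected at 60% of the men’s rate, below the four-fifths threshold, so the tool’s output would warrant closer scrutiny before relying on it.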

AI and disability discrimination

The Americans with Disabilities Act (ADA) protects applicants from discrimination based on disability. It also requires that employers make reasonable accommodations for disabled applicants during the interview and hiring process. Disabled individuals are twice as likely to be unemployed as people without disabilities, and non-disabled individuals significantly out-earn disabled individuals. Outsourcing hiring to AI systems may exacerbate these trends and violate the ADA.

For example, what if the AI program doesn’t have information on successful disabled employees to compare with candidates? Disabled applicants could be screened out. And what if a disabled applicant requests additional time to complete an AI screening program? Employers should ask AI hiring vendors how they handle reasonable accommodation requests. They should also ask whether various disabilities, including speech abnormalities and hearing and vision impairments, affect the algorithms used.

If your chosen AI system cannot handle reasonable accommodation requests, you should provide an alternative for disabled applicants. For example, you could interview disabled applicants directly, providing assistance for visual and hearing impairments.

Recent AI in hiring laws

Several jurisdictions have introduced or passed laws that regulate outsourcing of hiring to AI, or even encourage it. For example, a group called Fair Hiring California introduced a Fair Hiring Resolution in the California Assembly and Senate in 2020. It would promote the development of AI technologies to reduce hiring bias.

Beginning January 1, 2020, Illinois employers must comply with the Artificial Intelligence Video Interview Act, which regulates employers’ use of artificial intelligence in the interview and hiring process. Employers who use AI to record an interview and analyze applicant responses must:

  • Inform applicants that AI will be used;
  • Provide applicants with a written explanation of the technology’s mechanics, including the traits reviewed and analyzed; and
  • Get prior consent.

Employers hiring with AI should monitor their state and local legislatures and stay alert to any new laws or regulations.