AI and hiring bias: Why you need to teach your robots well

The year is 2021, and you hold a single resume in your hands, the golden resume of the ideal candidate. How do you know it’s the ideal candidate? Easy. Your artificial intelligence hiring tool told you so.

Your AI system knows what great talent looks like because you trained it on the resumes of your existing high performers. This golden resume, your next preordained high performer, looks exactly like all your previous high performers. So, the AI system knows everything is going to be great.

But you know better. Bias is brewing deep in this scenario, and it predates your AI implementation. You are on the fast track to a monoculture, one your AI system treats as gospel it’s sworn to uphold.

AI didn’t invent bias; it just made bias go viral

According to a 2019 Mercer report, 40% of companies are using AI to screen and assess candidates. Some see AI as a tool to remove unconscious bias, but that hinges on whether we can teach it better than we’ve been taught. The reality is that AI, like a human, is what it has been taught to be. But AI moves a lot faster, at a much larger scale. Nobody can proliferate prejudice like AI can.


AI bias starts with sourcing. Who applies for your job is dictated by who sees your post. Social networks provide companies a massive bullhorn for their openings. But how do these platforms select who sees your job ad? LinkedIn gets paid per click. Ergo, its algorithm shows your ad to the people most likely to click on it. Its goal is to spend your daily budget as efficiently as possible.
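For the skeptics, here’s a rough, invented Python sketch of what click-optimized ad delivery can look like. LinkedIn’s actual algorithm is proprietary, so treat the function, field names and numbers as illustration only; the point is that optimizing for predicted clicks shows your post to people who resemble past clickers, not to every qualified candidate.

# Invented sketch of click-optimized ad delivery (not LinkedIn's real system).
def serve_job_ad(members: list[dict], impressions_budget: int) -> list[dict]:
    """Show the ad to the members most likely to click, until the budget runs out."""
    ranked = sorted(members, key=lambda m: m["predicted_click_probability"], reverse=True)
    return ranked[:impressions_budget]

members = [
    {"name": "A", "predicted_click_probability": 0.09},
    {"name": "B", "predicted_click_probability": 0.02},
    {"name": "C", "predicted_click_probability": 0.07},
]
# With a tight budget, only lookalikes of past clickers ever see the post.
print([m["name"] for m in serve_job_ad(members, impressions_budget=2)])  # ['A', 'C']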

Tools like LinkedIn are excellent for reaching a large audience, but supplement them with your own local outreach efforts. Partner with schools, community groups, and meetups to build a talent funnel that isn’t reliant on the algorithms of others. And don’t just stick to your old standbys. Push beyond your comfort zone to forge new relationships with fresh prospects.


Words matter

One of the superpowers of AI is its ability to filter huge amounts of data. You can leverage AI to analyze everyone in your pipeline, not just the few applicants your human recruiters have time to review. Techniques like natural language processing (NLP) flag resumes that speak your company’s language, or at least the language of the job posting. This is great, until you start noticing the unconscious bias in your native tongue.


NLP technology parses phrases and infers their intent. If your NLP model is watching for “masculine”-coded phrases like “ambitious, confident leader,” it may discount “feminine” wording, favoring certain applicants based on their word choice. That kind of learned language bias is what killed Amazon’s foray into AI recruiting technology: trained on a decade of resumes from a largely male applicant pool, its experimental model taught itself to penalize resumes that even mentioned the word “women’s.”
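To see how easily that happens, here’s a deliberately naive Python sketch of keyword-based resume scoring. The phrase list and sample resumes are made up for illustration; real screeners are far more sophisticated, but the failure mode is the same: reward the vocabulary of your past hires rather than the skill.

# Hypothetical sketch: a screener that "learned" to value masculine-coded words.
MASCULINE_CODED = {"ambitious", "confident", "competitive", "driven", "dominant"}

def keyword_score(resume_text: str) -> int:
    """Count how many 'valued' phrases appear in the resume text."""
    words = {w.strip(".,;:").lower() for w in resume_text.split()}
    return len(words & MASCULINE_CODED)

resume_a = "Ambitious, confident leader driven to close competitive deals."
resume_b = "Collaborative, supportive team lead committed to client success."

print(keyword_score(resume_a))  # 4 -- ranked first
print(keyword_score(resume_b))  # 0 -- same underlying skills, different vocabulary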

Audits are essential

If you use AI to narrow your pipeline, periodically review some of the rejects. Examine why they were rejected. Should they have been? AI requires regular care and feeding. If you’re finding a lot of rejects that shouldn’t have been, it might be time for an AI training tune-up.

Random audits also allow you to take advantage of something I call the “Aladdin Principle.” AI can sort out a lot, but it can’t always spot the diamond in the rough. Humans can. While reviewing your AI tool’s rejected resumes, you may find your next great hire, and that person might not match your system settings at all. Go with your gut. Be open to serendipity.
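If you want a starting point for that care and feeding, here’s a small Python sketch of a random rejection audit. The field names, sample size and review prompt are assumptions, not a prescription; the idea is simply to measure how often a human disagrees with the machine.

# Hypothetical rejection audit; field names and sample size are assumptions.
import random

def audit_rejections(rejected: list[dict], sample_size: int = 25) -> float:
    """Have a human re-review a random sample of AI rejections.

    Returns the share of sampled rejections the reviewer would overturn;
    a rising rate is the signal that the model needs a training tune-up.
    """
    sample = random.sample(rejected, min(sample_size, len(rejected)))
    overturned = 0
    for candidate in sample:
        print(f"Rejected for: {candidate['rejection_reason']}")
        if input("Should this candidate have advanced? (y/n) ").strip().lower() == "y":
            overturned += 1
    return overturned / len(sample) if sample else 0.0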


AI requires human oversight

Like a well-intentioned child, AI needs dynamic education and supervision. Humans must be involved in every AI process to review what is happening and validate that it should be happening.

In the grand scheme of human history, technology is the Johnny-come-lately. Yet, somehow, we’ve been socialized to interact with it as if it were infallible. It’s not. You don’t have to accept AI determinations as fact. Injecting healthy, human skepticism is actually a smart design decision. It’s called “human-in-the-loop.” Rather than relegating us to the role of intelligence recipient, it ensures people and AI work together to make the important decisions.
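Here’s one minimal way that loop can look in practice, sketched in Python with made-up score thresholds: the AI score sets the priority, and a person still signs off on every consequential decision.

# Hypothetical human-in-the-loop routing; thresholds are invented.
def route_candidate(ai_score: float) -> str:
    """Use the AI score to prioritize human review, not to replace it."""
    if ai_score >= 0.80:
        return "fast-track to a recruiter for an interview decision"
    if ai_score <= 0.20:
        return "hold for a human spot-check before any rejection goes out"
    return "send to the regular human review queue"

for score in (0.95, 0.55, 0.10):
    print(score, "->", route_candidate(score))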


Tim Kulp is chief innovation officer at Mind Over Machines.
