
The Amazon Example: Can AI Discriminate?

Instinctively, it would seem that using a machine, data, or artificial intelligence (AI) to review job applicants would create a process that is fairer by default. Unlike people, the machine has no subconscious biases. It has no feelings or gut instincts. It should be able to make judgments and comparisons solely on merit and create a shortlist of the best candidates. It would simply look at experience and qualifications as required and deliver a list of those who best meet the job criteria, regardless of other factors.


That’s the theory, anyway.

In reality, machines, algorithms, etc., all have to be taught how to make their decisions. People – complete with our own unintentional but real biases and oversights – tell the machines what to value. All too often, we risk re-creating past discrimination, even when we try not to.

Let’s take a look at an example.

How AI May Discriminate in Recruiting: The Amazon Example

Amazon's experimental AI recruiting tool is probably the best-known example of problematic bias in hiring technology in recent years. The company created a system that used AI to search for ideal job candidates. The problem was that the AI returned significantly biased results: it systematically preferred men over women. It even went so far as to discount degrees from all-women's colleges and to de-prioritize resumes containing items that identified the candidate as female, such as membership in all-women's groups.

How does this even happen? The short answer, in Amazon's case, appears to lie in how the algorithm was taught to identify good prospects. It learned the characteristics of a good prospect by comparing against resumes the company had previously received, which came disproportionately from men. In essence, it learned that men's resumes, including the kinds of words men are more likely to use and the schools they are more likely to attend, were preferable. Amazon, we can safely assume, never intended for the machine to develop such a clear bias, but AI can only work with what it's given.
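To make that mechanism concrete, here is a minimal, hypothetical Python sketch (our own illustration, not Amazon's actual system, with invented resumes and outcomes): a toy model scores resumes purely by how their words correlate with past hiring outcomes, so a word like "women's" gets penalized simply because it appeared more often on rejected resumes.

```python
# A minimal, hypothetical sketch of biased learning -- NOT Amazon's actual
# system. All resumes and outcomes below are invented for illustration.
from collections import Counter

# Toy "historical" training data: (resume text, was the candidate hired?).
# The hired pool skews male, so male-associated words dominate it.
historical = [
    ("captain chess club executed delivered project", True),
    ("executed captured led mens rugby team", True),
    ("software engineer executed project", True),
    ("womens chess club captain delivered project", False),
    ("graduate womens college software engineer", False),
]

hired_words, rejected_words = Counter(), Counter()
for resume, hired in historical:
    (hired_words if hired else rejected_words).update(resume.split())

def score(resume: str) -> int:
    """Score a resume by how often its words appeared on hired resumes
    versus rejected ones -- the model knows outcomes, not merit."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two candidates identical except for one gendered proxy word:
print(score("software engineer chess club captain"))         # prints 0
print(score("software engineer womens chess club captain"))  # prints -2
```

Note that the toy model never sees gender directly; it penalizes the proxy word only because of who happened to be hired in the past, which is exactly the failure mode reported in Amazon's case.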

Amazon, to their credit, did try to fix the algorithm to remove the bias. They couldn't, and they ultimately had to shut down the AI recruiting project as a result.

Human Bias Creates Machine Bias

The root issue here is that people create the AI. People, with all of our flaws and subconscious biases, tell the machines which data to use. People and organizations often start out with populations that do not reflect the desired level of diversity (or even a level of diversity in line with the general population). If that is the baseline the machine learns from, how could we expect it not to keep producing the same results?

In other words, if we base the algorithm's training data on our historical hiring practices, why would we expect results any different from what we have always gotten? We won't get them. And if we also feed the algorithm information on who progresses within the company, even more biases are likely to be baked in.
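One way to surface this risk before the data ever reaches a model is to audit the historical records themselves. Below is a small, hypothetical Python sketch (the records are invented, and this is a starting point rather than a complete fairness analysis) that compares selection rates by group, using the common four-fifths rule of thumb as a red flag:

```python
# A small audit sketch with invented records -- a starting point, not a
# complete fairness analysis. Each record is (group, was_hired).
records = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in the group who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

men_rate = selection_rate("men")      # 0.75
women_rate = selection_rate("women")  # 0.25

# Four-fifths rule of thumb: a ratio below 0.8 is a common red flag for
# adverse impact -- and for what a model trained on this data will learn.
ratio = women_rate / men_rate
print(f"men: {men_rate:.2f}, women: {women_rate:.2f}, ratio: {ratio:.2f}")
# men: 0.75, women: 0.25, ratio: 0.33
```

A model trained on records like these will tend to reproduce that skew and present it as "merit."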

AI is not neutral by default. We create it, so it reflects us and what it's told to value. Whether we feed it historical data that is already biased or inputs that are inadvertently biased in some other way, we end up with the same problem.

The Amazon case is just one example, but it clearly shows how easy it is for the process to be inadvertently discriminatory. Stay tuned for tomorrow’s Advisor, where we’ll take a look at a few of the things organizations using AI in recruiting can do to minimize this risk.
