Is AI Recruiting Discrimination Inevitable?

In yesterday’s Advisor, we discussed the idea that using artificial intelligence (AI) or big data in the recruiting process doesn’t eliminate problems with discrimination and bias. Counterintuitively, these methods can actually amplify bias if we’re not careful, because the machine doesn’t know any better; it can only assess the (often imperfect) traits it is told to look for. That said, all is not lost. We can reduce this risk.

AI Recruiting Discrimination Is Not Inevitable

After reading yesterday’s Advisor and the Amazon example of unintentional recruiting discrimination with AI, it is easy to see how harmful bias can persist even when humans are not making decisions directly. Eliminating discrimination in hiring may seem all but hopeless, but it’s not.

It is, however, still incredibly difficult.

Despite our best intentions, workplaces often end up skewed in ways that disadvantage specific groups. This may mean we cannot use past performance at the organization as an input for the AI algorithm when it assesses the likelihood of future success. That is clearly a hindrance.

The key is finding a way to give the AI inputs that do not already have bias baked in at the outset, and then proactively looking for and eliminating bias when it appears anyway, as it likely will.

We do have options. We can reexamine our basis for making hiring decisions. Experience and education are only part of the picture when determining whether an individual is likely to perform well in the organization. Other factors like soft skills, problem-solving ability, personality, emotional intelligence, and motivation can be weighed as well, and using more comprehensive data can yield better results. In other words, we need to add more data points, ones that do not rely solely on historic hiring patterns or existing personnel as their foundation. By doing this, we can move toward data that is less likely to reinforce our existing unintentional biases. It’s still not foolproof, though; we still have to tell the program what to value, and to do so in a way that is not itself discriminatory.
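As a rough illustration only, here is a minimal sketch of what widening the inputs might look like in code. Every field name is hypothetical, and a real screening pipeline would be far more involved; the point is simply that the feature set is chosen deliberately, with protected attributes and likely proxies for them excluded.

```python
# A minimal sketch (hypothetical field names throughout) of widening a
# screening model's inputs beyond education and experience, while excluding
# protected attributes and likely proxies for them.

# Assumed list of fields the model should never see, directly or by proxy.
PROTECTED_OR_PROXY = {"gender", "age", "zip_code", "college_name"}

# Broader inputs: structured assessments rather than historic hiring patterns.
ALLOWED_FEATURES = {
    "years_experience",
    "skills_assessment_score",     # work-sample test
    "problem_solving_score",       # standardized exercise
    "structured_interview_score",  # scored against a fixed rubric
}

def build_features(candidate: dict) -> dict:
    """Keep only deliberately chosen inputs; drop protected fields and proxies."""
    return {
        field: value
        for field, value in candidate.items()
        if field in ALLOWED_FEATURES and field not in PROTECTED_OR_PROXY
    }

candidate = {
    "years_experience": 4,
    "skills_assessment_score": 82,
    "problem_solving_score": 74,
    "structured_interview_score": 3.8,
    "gender": "F",               # never passed to the model
    "college_name": "State U",   # possible socioeconomic proxy, excluded
}
print(build_features(candidate))
# Only the four allowed assessment features survive.
```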

Additionally, we can combat bias by auditing the system frequently. Organizations need to check and recheck the output, looking for and addressing any biases that are found. Consider auditing both internally and with a neutral outside party. Another option is to make the algorithm itself public; opening it to that level of scrutiny would be uncomfortable, but it could help uncover problems faster.
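One way to make that output auditing concrete is the “four-fifths” rule of thumb used in U.S. adverse-impact analysis: if any group’s selection rate falls below 80% of the highest group’s rate, the system deserves a closer look. The sketch below applies that check to hypothetical screening results; the group labels and counts are invented for illustration.

```python
# A sketch of a basic adverse-impact audit on AI screening output,
# using the EEOC "four-fifths" rule of thumb. Data here is hypothetical.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs. Returns rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag any group whose selection rate is below 80% of the highest
    group's rate, the usual trigger for a closer adverse-impact review."""
    rates = selection_rates(decisions)
    top_rate = max(rates.values())
    return {
        group: {"rate": rate, "passes": rate / top_rate >= threshold}
        for group, rate in rates.items()
    }

# Hypothetical audit data: (group, advanced_past_ai_screen)
sample = (
    [("A", True)] * 40 + [("A", False)] * 60 +  # group A: 40% advance
    [("B", True)] * 25 + [("B", False)] * 75    # group B: 25% advance
)
print(four_fifths_check(sample))
# Group B's 0.25 rate is only 62.5% of group A's 0.40 rate -> flagged for review.
```

Run regularly (and ideally by someone other than the team that built the model), a check like this is the kind of “check and recheck” the paragraph above describes.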

AI could help us address discrimination in the recruiting process, but it won’t do so without a concerted effort to improve the inputs and to systematically check and improve the results.
