A more inclusive future of work: Understanding how unconscious bias can affect your hiring software

Early hiring AI was only as unbiased as its data set. Here’s how unconscious bias can creep in and how to address it.

In its early years as a hiring tool, artificial intelligence was lauded as the solution to the longstanding problem of bias. Without the cultural and social subjectivity of humans, it was believed, AI could recommend candidates based on their objective qualifications, rather than on traits that did not impact the work itself.

As AI was implemented in hiring software, however, it became clear the answer wasn’t that simple. Early uses of AI began to replicate and entrench certain biases. Some hiring teams found they weren’t even aware of the biases at play in their hiring processes until they saw early AI repeat those biases back to them.

While we’re largely aware of issues related to AI and conscious bias, unconscious bias can slip through. Understanding how unconscious bias interacts with hiring software can help hiring teams make better choices. To achieve fair results in AI, the model needs to be trained on accurate and representative data, rather than data that contains historical biases.

Making unconscious bias conscious

Early experiences with hiring AI have raised our awareness of potential issues related to AI and systemic and conscious bias. Yet we’re not always aware of the biases that implicitly drive our behavior.

Unconscious bias is a prime example of a situation in which we don’t know what we don’t know. To identify unconscious bias in ourselves and in the hiring software we use, the first step is to acknowledge that such bias may be present.

“Awareness training is the first step to unraveling unconscious bias, because it allows employees to recognize that everyone possesses them and to identify their own,” says Harvard Business School professor Francesca Gino.

In some cases, organizations may first need to overcome a bias their own teams hold: the belief that they themselves are distinctly objective or unbiased.

A Yale University study of the effect of unconscious gender bias on the hiring of scientists found that not only did bias result in women being chosen for scientific positions less often, but the scientists doing the hiring were also more likely to resist news that they harbored unconscious biases toward female applicants.

“Whenever I give a talk that mentions past findings of implicit gender bias in hiring, inevitably a scientist will say that can’t happen in our labs because we are trained to be objective,” says microbiologist Jo Handelsman, the study’s lead author.

Including HR teams in discussions about how AI-generated hiring information will be used is essential, because this information can impact hiring in a number of crucial ways. “Unless there is an incredibly strong ethical component to that decision-making, the likelihood that organizations will make decisions for financial reasons based on an algorithm’s output but that are ethically suspect is quite high,” says Brian Kropp, group vice president and chief of HR research at Gartner. Bringing unconscious bias into conscious awareness is an essential first step in ethical AI use.

Who trained your data set?

Early hiring AI often replicated certain biases, conscious or otherwise. While the biases replicated might vary according to the tool used, the underlying cause was often the same: the AI produced biased results because the data set on which it was trained harbored those biases.

In 2018, for example, Amazon abandoned its effort to build an AI-enabled recruiting platform after discovering that the tool’s results were biased in favor of male applicants. Digging into the data, the company found that the AI rated female candidates less favorably because the data on which it based its decisions was weighted heavily toward male applicants.

Most of Amazon’s technical positions were filled by men. Spotting this pattern, the AI treated being male as a signal of success in those positions and recommended men accordingly.

“In effect, Amazon’s system taught itself that male candidates were preferable,” writes Jeffrey Dastin, technology correspondent at Reuters. It did so based on the data sets it had, which indicated that male candidates were more common in the technical roles.
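
The mechanism is easy to reproduce. Here is a minimal sketch in Python (a toy model on synthetic data, not Amazon’s actual system; the variable names and numbers are invented for illustration): when the historical outcomes a model learns from favor one group, the model assigns that group’s marker a positive weight.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic historical records: one job-relevant skill score
# and a gender flag (1 = male, 0 = female).
skill = rng.normal(0.0, 1.0, n)
is_male = rng.integers(0, 2, n)

# Biased labels: past hiring favored men independently of skill.
hired = (skill + 1.5 * is_male + rng.normal(0.0, 1.0, n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

print(f"weight on skill:  {model.coef_[0][0]:+.2f}")
print(f"weight on gender: {model.coef_[0][1]:+.2f}")
# The gender weight comes out strongly positive: trained on skewed
# outcomes, the model "teaches itself" that male candidates are
# preferable, mirroring the pattern Dastin describes.
```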

Unconscious bias affects everything we do, including which data we gather and how we interpret the results of computer analysis of that data. A system that is trained on a limited data set can end up showing us our own biases. Such a result is particularly likely when a company limits its AI’s data set to its own employees or applicants: a pool of professionals that has already been shaped according to the conscious or unconscious biases of those responsible for hiring.

Even when the “data set” consists of human judgments rather than records analyzed by AI, structure is beneficial. In a study published in the journal Sex Roles, Jennifer DeNicolis Bragger and fellow researchers found that structured interviews, in which every applicant is asked the same set of questions, helped reduce bias against pregnant applicants for both high school teaching and sales representative roles.
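
As a rough illustration of that structure, here is a hypothetical rubric in Python (the questions and scoring anchors are invented, not drawn from the Bragger study): every candidate faces the same questions and is scored against the same anchors, leaving less room for improvised, bias-prone judgments.

```python
# Hypothetical structured-interview rubric; questions and anchors
# are invented for illustration, not drawn from the Bragger study.
RUBRIC = [
    {"question": "Describe a lesson plan you adapted mid-class, and why.",
     "anchors": {1: "no concrete example",
                 3: "concrete example, unclear reasoning",
                 5: "concrete example with outcome-based reasoning"}},
    {"question": "Walk through how you respond to a missed sales target.",
     "anchors": {1: "blames external factors",
                 3: "partial recovery plan",
                 5: "specific, measurable recovery plan"}},
]

def score_candidate(scores: list[int]) -> float:
    """Average one 1-5 anchor score per rubric question, in order."""
    if len(scores) != len(RUBRIC):
        raise ValueError("every candidate must be scored on every question")
    return sum(scores) / len(RUBRIC)

print(score_candidate([5, 3]))  # 4.0
```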

When teams understand how unconscious bias affects their decisions and the records of those decisions, they can make more informed choices regarding the tools they choose to help them analyze applications and plan career paths for existing team members.

AI and hiring: moving beyond bias

Fighting bias is part of the broader effort to promote diversity and inclusion in hiring and in company culture, writes career and executive coach Kathy Caprino. That fight is essential, particularly for companies that want their teams and performance to benefit from a diverse, inclusive workforce.

Pressure to keep up with an onslaught of new information, plus the desire to end bias in hiring, has prompted many organizations to look toward AI as a way to address both challenges. And for good reason: properly applied, AI can screen applicants more fairly than human recruiters can.

“While human recruiters or interviewers might be impacted by whether they are having a particularly busy day or whether they were sleep-deprived the night before, facial and voice recognition software analyzes every candidate the same way,” write lawyers Gary D. Friedman and Thomas McCarthy in the American Bar Association Journal.

Yet Friedman and McCarthy warn that AI cannot be trusted with this task unless it’s monitored to ensure it doesn’t replicate existing biases in the hiring process. Software that operates on a biased, limited data set or with other flaws simply replicates the problem, only at a larger scale.

Expanding our own perspectives is a powerful way to fight bias in human interactions. “When we take on new perspectives and learn to empathize, we recognize the value in a diversity of representation,” says Michelle Y. Bess, vice president of talent and diversity, equity and inclusion at fintech platform OppLoans.

Similarly, expanding AI’s perspective by enlarging its data sets beyond the boundaries of the organization is a powerful way to counter the software’s tendency to replicate the biases baked into a limited historical data set. The more information both humans and AI have, the less likely either is to fall back on old patterns.

Metrics matter, too. To examine biases in your hiring process and gain another perspective on your software’s performance, look at your organization’s entire set of diversity metrics, not merely those related to hiring. Examining the demographics of job applicants, job offerees, candidates who accept offers, and those who do well in their first six months, for example, can help you understand where biases may negatively affect the hiring process, writes diversity consultant Howard J. Ross.
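
One way to put that advice into practice is to compare pass-through rates between funnel stages by demographic group. The Python sketch below uses invented stage names, group labels, and counts; the 0.8 threshold is the EEOC’s “four-fifths rule,” a common screening benchmark for adverse impact.

```python
# Hypothetical funnel counts by demographic group at each stage.
# Stage names, groups, and numbers are invented for illustration.
funnel = {
    "applied":  {"group_a": 400, "group_b": 300},
    "offered":  {"group_a": 60,  "group_b": 25},
    "accepted": {"group_a": 48,  "group_b": 20},
    "thriving_at_6_months": {"group_a": 40, "group_b": 18},
}

stages = list(funnel)
for prev, curr in zip(stages, stages[1:]):
    # Pass-through rate from one stage to the next, per group.
    rates = {g: funnel[curr][g] / funnel[prev][g] for g in funnel[prev]}
    # Adverse-impact ratio: lowest group rate over highest group rate.
    # Ratios below 0.8 (the "four-fifths rule") warrant a closer look.
    ratio = min(rates.values()) / max(rates.values())
    flag = "  <-- review" if ratio < 0.8 else ""
    pretty = ", ".join(f"{g}: {r:.0%}" for g, r in rates.items())
    print(f"{prev} -> {curr}: {pretty} (impact ratio {ratio:.2f}){flag}")
```

In this made-up example, the applied-to-offered stage is flagged while later stages are not, which is exactly the kind of localization a funnel-wide view provides: it tells you where in the process to look, not just that a disparity exists.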

Early attempts to use AI to improve hiring met with challenges, notably that of confronting the results of our own past biases at work. The more perspective human resources professionals and hiring teams gain on bias, however, the better equipped they are to choose software built to challenge bias, and to ensure that software delivers the results their hiring efforts seek.
