AI Is Not a Quick-Fix Solution for Discrimination

Artificial intelligence is already past the tipping point: it has become a prevalent tool that helps companies find more and better job applicants and decide whom to hire or promote. Its rise has sparked a fascinating and important debate: Do machines making decisions that were once made by humans reduce or even eliminate discrimination–or can they actually increase bias?

For Kate Bischoff and Heather Bussing, attorneys who specialize in job-bias issues, the crux of this complicated issue comes down to an example like this: If an American tech company trains an AI system to identify and collect wedding pictures, it will likely end up with scores of beaming brides in white dresses and sheer veils. But it’s highly unlikely that the same machine would select any pictures of a traditional Indian wedding, where red is the preferred color for Hindu brides, who wear a colorful sari or lehenga and extensive jewelry.
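
The dynamic the two attorneys describe can be sketched in a few lines of code. The toy classifier below is purely illustrative (the color features, numbers and threshold are all invented), but it shows how a model trained only on one culture's wedding photos ends up unable to recognize anything else.

```python
# Toy illustration of how skewed training data narrows what a model can recognize.
# Feature vector: (fraction of white pixels, fraction of red pixels) in a photo.
# All names and numbers are invented for illustration.

def train_centroid(examples):
    """Average the feature vectors of the training photos."""
    n = len(examples)
    return tuple(sum(x[i] for x in examples) / n for i in range(2))

def looks_like_wedding(photo, centroid, threshold=0.3):
    """Flag a photo as a 'wedding' if it sits close to the learned centroid."""
    dist = sum((photo[i] - centroid[i]) ** 2 for i in range(2)) ** 0.5
    return dist < threshold

# Training set: only American-style weddings (lots of white, little red).
american_weddings = [(0.82, 0.05), (0.78, 0.03), (0.85, 0.08)]
centroid = train_centroid(american_weddings)

# A traditional Indian wedding photo: little white, lots of red.
indian_wedding = (0.10, 0.75)

print(looks_like_wedding((0.80, 0.06), centroid))   # True:  resembles the training data
print(looks_like_wedding(indian_wedding, centroid)) # False: never seen, so missed
```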

“We’re adopting AI very fast, for all employers, and we need to be more careful and ask harder questions of our vendors,” says Bischoff, a Minneapolis-based attorney with tHRive Law and Consulting. She says the wedding case is just one example of what machines that recognize patterns can miss. Another would be a personal health monitor like a Fitbit telling a woman she needs to move around more–not realizing she’s pregnant.

For Bussing, these stories illustrate a simple truth: AI “can diminish bias, and it can also amplify bias.”

Bischoff and Bussing, a San Francisco-area lawyer who writes frequently on employment issues, presented a session with a similar title–“How AI Technology Can Eliminate or Amplify Bias”–at the recent HR Technology Conference in Las Vegas. Their message is essentially a warning: HR executives need to frequently test and evaluate their new AI systems for evidence of discrimination, because ultimately it’s humans–not machines–who are responsible for the results.

“HR is becoming the keeper of the people data, and that is a powerful place that allows for all sorts of undue influence over all of the organization,” Bussing says. That makes it critical for HR executives to control any new AI systems rather than allowing the computers to rule them–to understand that computerized algorithms aren’t good at finding and fixing their own mistakes and can’t substitute for what Bussing notes humans bring to the table, such as “caring, empathy and responsible decision-making.”

The “Same-Same” Problem

Rapid advances in AI–the algorithms that now predict, for example, the next word as you type a message, or what song you’ll want to hear next on Spotify–have already led to an exploding roster of start-up companies in the HR space that market machine-learning systems to winnow out job applicants or judge potential hires based on their voice or facial expressions. Today, nearly 40 percent of companies use some type of AI for HR functions, according to a survey by Bersin, Deloitte Consulting LLP–and that number is expected to spike much higher in the next two years.

But while many see artificial intelligence as the wave of the future for HR executives seeking to recruit and retain talented, diverse applicants and make better talent-management decisions, skepticism remains. A survey conducted earlier this year by the firm Montage found that 57 percent of recruiters and talent managers remain on the fence about AI. One reason could be the mounting concern over whether this technology can fulfill one of its core promises: curbing subjective human bias.

The possibilities for misusing AI and amplifying discrimination became front-page news this fall with the disclosure that Amazon was forced to abandon a top-secret AI-hiring program it had been working on since 2015 because it discriminated against women.

Amazon’s aborted project aimed to help recruiters by giving applicants star ratings on a scale of one to five–much like the ratings for products the online retailer sells–but those ratings were based largely on data about its past applicants, who had been predominantly male. Embarrassed Amazon insiders told Reuters the program even discounted resumes that included the word “women’s” in descriptions of activities, as well as resumes listing degrees from two women’s colleges.
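
The mechanism is easy to reproduce in miniature. The sketch below is not Amazon's model; it is a deliberately crude word-weighting scorer trained on a handful of invented hiring outcomes in which resumes mentioning "women's" were seldom hired, and it duly learns to mark that word down.

```python
# Minimal sketch of the "same-same" problem: a scorer trained on historical hiring
# outcomes learns to penalize words associated with under-hired groups.
# Resumes and outcomes are invented for illustration.
from collections import defaultdict

history = [
    ({"captain", "chess", "club"}, 1),             # hired
    ({"software", "engineer", "rowing"}, 1),       # hired
    ({"captain", "women's", "chess", "club"}, 0),  # not hired
    ({"women's", "soccer", "software"}, 0),        # not hired
]

# Learn a crude weight per word: its hire rate in past resumes, centered at zero.
counts, hires = defaultdict(int), defaultdict(int)
for words, hired in history:
    for w in words:
        counts[w] += 1
        hires[w] += hired
weights = {w: hires[w] / counts[w] - 0.5 for w in counts}

def score(resume_words):
    """Sum the learned word weights; higher means 'more like past hires'."""
    return sum(weights.get(w, 0.0) for w in resume_words)

print(score({"captain", "chess", "club"}))             # 0.0: words split evenly in the history
print(score({"captain", "women's", "chess", "club"}))  # -0.5: dragged down solely by "women's"
```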

“I call it ‘the same-same problem’,” Bischoff says. “If you’re saying who is a good manager and then you try to reach out and get the same thing you already have, [this] discounts having a diverse workforce [and] discounts having different perspectives in your workplace–not just diversity of gender and race … but people coming from different life experiences.”

That’s not just a legal problem. A recent McKinsey study suggests that firms with high diversity among top executives are as much as 21 percent more likely to outperform their peers on profitability, similar to a 2016 finding by the Peterson Institute for International Economics.

In their presentation about AI and bias, Bussing and Bischoff noted a number of points in the process where humans can make mistakes that send AI algorithms careening down the wrong track–either in what a machine-learning device is taught to do or, more frequently, in the decisions about which data are used to predict future results. Despite its advantages of speed and ability to make rational decisions uncolored by human emotions and subjectivity, AI, the two experts say, can essentially function like a child, with limited experience and reasoning based on a skewed perspective of what it has been taught.

Potential bias was among the topics addressed during a House Subcommittee on Information Technology hearing on AI in February. There, Charles Isbell–who studied AI at MIT in the 1990s and now is an associate dean at the Georgia Institute of Technology–told lawmakers that decades of bad assumptions about race, for example, can corrupt systems.

“It does not take much imagination to see how being from a heavily policed area raises the chances of being arrested again, being convicted again and, in aggregate, leads to even more policing of the same areas, creating a feedback loop,” Isbell said, referring to how AI could affect policing. “One can imagine similar issues with [relying on AI] for a job or credit-worthiness, or even face recognition and automated driving.”
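
A back-of-the-envelope simulation makes the feedback loop concrete. In the hypothetical sketch below, two areas have identical underlying offense rates, but the one that starts out more heavily policed records more arrests, so an arrest-driven allocation rule keeps shifting patrols toward it.

```python
# Rough simulation of the feedback loop Isbell describes. All figures are invented.

patrols = [60, 40]        # area 0 starts out slightly more heavily policed
true_rate = [0.5, 0.5]    # identical underlying offense rates in both areas
recorded = [0.0, 0.0]     # cumulative recorded arrests

for year in range(5):
    # Arrests are recorded only where officers are present to make them.
    recorded = [recorded[i] + patrols[i] * true_rate[i] for i in range(2)]
    # "Data-driven" reallocation: shift patrols toward the area with more recorded arrests.
    hot = 0 if recorded[0] > recorded[1] else 1
    shift = min(10, patrols[1 - hot])
    patrols[hot] += shift
    patrols[1 - hot] -= shift
    print(f"year {year}: patrols={patrols}, recorded arrests={[round(r) for r in recorded]}")
# Despite equal true offense rates, patrols and recorded arrests pile up in area 0.
```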

In September, seven members of Congress sent a letter to the Federal Trade Commission, the FBI and the Equal Employment Opportunity Commission asking federal officials to investigate whether the growing use of AI in hiring–as well as in other areas, such as surveillance and commerce–violates the 1964 Civil Rights Act or other statutes by perpetuating bias. The lawmakers–all Democrats, including possible 2020 presidential candidates Sens. Elizabeth Warren and Kamala Harris–wrote that AI “may be unfair and deceptive.”

Expanding the Recruiter Toolkit

Congressional scrutiny hasn’t stopped the HR departments of some of America’s top companies from plunging headfirst into the brave new algorithm-driven world. The hospitality industry is considered a leader in adapting AI for people-management tasks–a natural evolution, since large hotel chains were also pioneers in using machine-learning tools to enhance guest experiences.

At Hilton Worldwide Holdings, the hospitality giant with properties in 106 countries and nearly 400,000 employees, a commitment to deploying advanced AI tools recently led to the creation of Ally, a chatbot aimed at matching candidates with available jobs and accelerating the hiring process.

“It allows us to quickly say whether this person has the talent or the ability for one of our positions,” says Sarah Smart, Hilton’s vice president for global recruitment. The goal is to save Hilton’s recruiters time while drawing a richer pool of applicants. Hilton is pleased with the results, but Smart stresses that the company doesn’t want machines replacing human judgment.

“We thought about AI as one of the tools in the recruiting toolkit,” she says, noting it’s not a replacement for recruiters but rather a way to make their jobs easier.

The success of Ally has led Hilton to become one of a handful of companies to contract with HireVue, which provides pre-hiring assessments that use facial- and voice-recognition tools to gauge a candidate’s personal qualities, such as empathy, motivation and ability to engage. HireVue advertises itself as a vendor that specializes in removing bias and tells clients it goes the extra mile to neutralize any pre-existing bias about race, gender or age that might be built into the original data.

Candidates have reported good experiences with Hilton’s AI-powered processes. HireVue’s system evaluates short videos that applicants submit, and Smart says candidates enjoy being able to record the videos on their own time. Hilton measures the Net Promoter Score–a gauge of brand satisfaction–among its job applicants and saw that number skyrocket after it introduced machine-based interviewing, which Smart says is also producing higher-quality hires.
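
For readers unfamiliar with the metric, the standard Net Promoter Score calculation is straightforward; the sketch below uses invented survey scores, not Hilton's data.

```python
# Net Promoter Score: respondents rate 0-10; promoters score 9-10, detractors 0-6.
# NPS = percentage of promoters minus percentage of detractors.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

before = [6, 7, 8, 5, 9, 7, 6, 8, 4, 9]    # hypothetical pre-rollout applicant survey
after = [9, 10, 8, 9, 7, 10, 9, 8, 10, 9]  # hypothetical post-rollout applicant survey
print(nps(before), nps(after))  # -20.0 before, 70.0 after
```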

In light of Amazon’s AI misfire, Smart stresses that Hilton’s AI projects benefit from the firm’s global reach and its prior commitment to diversity. “We come to the plate with more gender balance and diversity,” she says, “while [Amazon’s] tool only enhanced a gender imbalance.”

Buyer Beware

The moves into AI by companies such as Hilton, rival Marriott International and high-tech firms like IBM have led to a flurry of consultants and law firms offering advice about how such organizations can avoid discrimination.

Zev Eigen–an MIT-trained attorney who specializes in the intersection of data science and employment law–founded and is the chief science officer for Syndio, an AI-based start-up that aims to identify and solve problems around pay equity.

Eigen explains the potential problems with bias in AI by invoking the now-familiar process that credit-card companies use to detect fraud–by flagging, for example, a charge for gasoline in Texas when the cardholder lives and makes most purchases in California. The AI systems for Visa or Mastercard, he notes, can only make those judgments based on the user’s past behaviors. When firms design AI programs around job seekers, their available data tend to cover only the applicants who’ve already been hired–an incomplete picture.

“These employee-learning systems never see which applicants are being rejected,” Eigen says, which creates the risk of locking in racial or gender bias.
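
The incomplete picture Eigen describes is easy to see in a toy dataset. In the invented records below, a learning system built from employee files only ever sees the people who were hired, so it has no evidence that qualified candidates from other groups were ever turned away.

```python
# Sketch of the incomplete-picture problem: training data drawn only from past hires
# silently inherits whatever filter the old process applied. Records are invented.

applicant_pool = [
    {"gender": "F", "skill": 9, "hired": False},  # strong candidate, screened out
    {"gender": "F", "skill": 8, "hired": False},
    {"gender": "M", "skill": 7, "hired": True},
    {"gender": "M", "skill": 9, "hired": True},
    {"gender": "M", "skill": 6, "hired": True},
]

# A system trained on employee records only ever sees the hired rows.
training_rows = [a for a in applicant_pool if a["hired"]]

genders_seen = {a["gender"] for a in training_rows}
print(genders_seen)  # {'M'}: the model never observes a single rejected applicant,
                     # so it cannot tell that skilled women were turned away upstream.
```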

The attorney is also working with a project called Data Tree Data Science, which helps firms identify qualified job applicants with past criminal records. This is a population that could be harmed by AI-rooted hiring, since many firms have routinely, for decades, rejected any applicant who checked the box for a past conviction.

Eigen says he advises companies that are plunging into machine-learning systems to take steps to understand how AI improves–or doesn’t improve–on humans making similar choices. One client told him its new AI-based hiring system had saved the firm $8 million, but his response was: “Compared to what? Did you test any alternatives?” He routinely advises companies to evaluate AI systems by comparing their results against a control group of human decision-making, to establish a baseline.
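
One simple way to act on that advice, sketched below with invented outcomes, is to hold out a control group screened the old way and compare a concrete quality measure, such as one-year retention, across the two groups.

```python
# Comparing an AI-screened cohort against a human-screened control group.
# All outcome data below are invented for illustration.

def retention_rate(hires):
    """Share of hires still employed after a year: one possible quality baseline."""
    return sum(hires) / len(hires)

ai_screened = [1, 1, 0, 1, 1, 1, 0, 1]     # 1 = still employed at 12 months
human_screened = [1, 0, 1, 1, 0, 1, 1, 0]  # control group, human decisions only

print(f"AI-screened retention:    {retention_rate(ai_screened):.0%}")
print(f"Human-screened retention: {retention_rate(human_screened):.0%}")
# A real evaluation would also test whether the gap is statistically meaningful
# and check adverse-impact ratios for both groups, not just headline savings.
```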

Bussing and Bischoff agree that responsibility falls on the HR executives who purchase and implement the new AI systems to question the vendors about what their algorithms actually do and then constantly monitor and evaluate the results–working with the company’s lawyers or consultants when necessary. The ultimate liability for any discrimination, Bussing notes, is going to rest with the company, not its contractor.

“We have to watch the data because we can’t prevent problems until after they manifest,” she says. “What we can do is keep the right perspective.”

The key, she and Bischoff have argued, is understanding that AI algorithms are not producing facts but informed opinions–and sometimes these are based on incomplete information.

At IBM, HR executives say they’re well aware of how a male-designed personality test crushed an incipient revolution in women becoming computer programmers during the 1960s and ’70s. This historical knowledge has informed current work with its popular and widely promoted Watson AI systems.

This fall, Big Blue rolled out what it calls the Adverse Impact Analysis feature for its new AI toolset, IBM Watson Recruitment. The system, which IBM officials describe as a “bias radar,” goes through an organization’s historical hiring data to identify potential unconscious biases in areas such as gender, race, age or education.
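
IBM has not published the feature's internals, but a common way to run this kind of check is the EEOC's four-fifths rule: flag any group whose selection rate falls below 80 percent of the highest group's rate. The sketch below applies that rule to invented hiring counts.

```python
# Adverse-impact check over historical hiring data using the four-fifths rule of thumb.
# Applicant counts are invented; IBM's exact method is not public.

hiring_history = {
    # group: (applicants, hired)
    "men":   (200, 40),
    "women": (180, 18),
}

rates = {g: hired / applicants for g, (applicants, hired) in hiring_history.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```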

When Harvard- and MIT-trained neuroscientist Frida Polli left academia to get her MBA and–besieged by job recruiters–decided that AI could improve the hiring process, she founded the start-up pymetrics in 2013. She also made testing for and removing potential bias a key part of her pitch to a client roster that now includes firms like Accenture and Tesla. One way the firm does that, Polli explains, is through a separate database of 50,000 job seekers whose race and gender are known, used specifically to test its algorithms for bias.
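
An audit of that kind can be approximated as follows. The scoring model, panel and threshold below are stand-ins invented for illustration; the point is simply to compare pass rates across groups on a holdout panel whose demographics are known.

```python
# Auditing a candidate-scoring model on a holdout panel with known demographics.
# The model, panel and cutoff are hypothetical stand-ins.

def candidate_model(features):
    """Hypothetical stand-in for a vendor's scoring model."""
    return 0.6 * features["numeric_test"] + 0.4 * features["game_score"]

holdout_panel = [
    {"gender": "F", "numeric_test": 0.9, "game_score": 0.7},
    {"gender": "F", "numeric_test": 0.6, "game_score": 0.8},
    {"gender": "M", "numeric_test": 0.8, "game_score": 0.6},
    {"gender": "M", "numeric_test": 0.5, "game_score": 0.9},
]

THRESHOLD = 0.7  # hypothetical cutoff for advancing a candidate

pass_rates = {}
for group in ("F", "M"):
    members = [p for p in holdout_panel if p["gender"] == group]
    passed = sum(candidate_model(p) >= THRESHOLD for p in members)
    pass_rates[group] = passed / len(members)

print(pass_rates)  # the audit asks: are pass rates roughly comparable across groups?
```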

Nevertheless, Polli has a message for HR executives: Buyer beware when it comes to the growing thicket of start-ups.

“I would urge the consumer to approach AI with a healthy skepticism,” she says. “If a vendor says they’re bias-free, ask for documentation.”

Will Bunch
Will Bunch is a freelance writer based in the Philadelphia region who writes on human resources and other business topics. He can be reached at [email protected].