EEOC Title VII guidance released last month highlighted the need for employers to partner with a trusted pay equity software provider. 

The technical assistance document outlines how Title VII of the Civil Rights Act of 1964 applies to automated systems. It aims to prevent AI-driven discrimination against job applicants and employees, including in areas affecting pay equity. 

Ensuring responsible AI

The rapid rise of AI has created significant opportunities, but it has also raised ethical concerns because of its ability to embed bias and lead to workplace discrimination. Examples include:

  • In 2018, Amazon ended its use of an AI recruiting tool that had developed bias against women. A year later, it was revealed that Amazon was using AI to make decisions about terminating employment contracts. 
  • In May 2022, Harvard Business School professor Tsedal Neeley published a case study urging tech companies to consider the potential risks of bias in AI-based systems in the absence of effective regulation. 
  • A 2022 IBM study found that nearly three-quarters of organizations were failing to reduce unintentional bias in their AI solutions, and 60% had not developed ethical AI policies. 
  • Last month, President Biden announced a series of measures to promote responsible AI.

Accenture defines responsible AI as:

“…the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society—allowing companies to engender trust and scale AI with confidence.”

Issues arise because AI and automated systems are only as good as the data fed into their algorithms, and they can embed the unconscious human biases present in that data as if those biases were the norm. 

Eliminating bias in AI

The EEOC’s guidance is not the first of its kind relating to AI in HR.

  • Illinois’ Artificial Intelligence Video Interview Act came into force on January 1st, 2020. It requires employers to disclose the use of AI in video interviews, as well as the criteria the tool evaluates and how it works. Job applicants must also consent to the process before it can be used to evaluate their suitability. A similar bill was passed in Maryland, requiring candidates to sign a waiver consenting to the use of AI-based video interviews. 
  • In May 2022, the EEOC issued guidance on using AI in evaluating candidates without violating the Americans with Disabilities Act. Again, it highlighted concerns over the use of AI in HR and its potential adverse effects on protected classes. 

Further, on July 5th, 2023, New York City’s “NYC Bias Audit Law,” also known as Local Law 144, comes into force. 

This law requires employers to conduct bias audits of automated employment decision tools (AEDTs) before using them to evaluate either job applicants for employment or internal candidates for promotion. A summary of the results must be published, and both employers and recruitment agencies must give job applicants at least 10 business days’ notice before using AEDTs, explaining both their use and function. Those affected may request an accommodation or an alternative selection process. 

New York City is also a leader in the field of pay equity legislation; three pay equity related bills, targeting municipal pay disparities, were signed in March alone. 

Without effective safeguards, employers risk violating Title VII and NYC’s Local Law 144 during hiring, performance management, and pay determination. 

Can you trust your pay equity software provider?

EEOC Title VII guidance applies to all employment decisions, including decisions on compensation, and affects employers using AI tools, such as pay equity software. 

The responsibility for the application of ethical automation, however, lies solely with the employer. Title VII guidance makes it clear that employers cannot delegate responsibility for AI bias to their software vendor, nor rely on their vendor’s assurance that its software is Title VII compliant. 

If your pay equity software violates workplace laws, you, as an employer, may be held liable. 

EEOC guidance states that “in many cases”, an employer is responsible under Title VII for its use of algorithmic decision-making tools even if the tools are designed or administered by a software vendor, “if the employer has given them authority to act on the employer’s behalf.” 

Questions for your pay equity software provider

To ensure Title VII compliance, your pay equity software vendor should be able to answer the following questions: 

  • What steps has the vendor taken to ensure its software does not cause an “adverse” or “disparate impact”? 
  • Has the vendor relied on the four-fifths rule to assess adverse impact? (See the worked sketch following this list.) 
  • Has the vendor carried out an evaluation of employment-related AI tools to ensure compliance with workplace laws, for example, an audit of all AI functions to identify and remove any potential bias? Can the vendor demonstrate this to the employer? 
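The four-fifths rule referenced above is a general rule of thumb: if one group's selection rate is less than 80% of the rate for the most favorably treated group, the tool may be causing adverse impact. Below is a minimal sketch of that calculation. The group labels and applicant counts are hypothetical, and a real bias audit would use actual selection data and may need to supplement the rule of thumb with other statistical tests.

```python
# Minimal sketch of the four-fifths (80%) rule of thumb for adverse impact.
# The group labels and applicant/selection counts below are hypothetical.

FOUR_FIFTHS_THRESHOLD = 0.80

# group -> (number of applicants, number selected by the AEDT)
results = {
    "Group A": (200, 80),   # selection rate 0.40
    "Group B": (150, 45),   # selection rate 0.30
}

# Selection rate = selected / applicants, per group.
selection_rates = {
    group: selected / applicants
    for group, (applicants, selected) in results.items()
}

# Impact ratio = each group's rate divided by the highest group's rate.
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    flag = ("possible adverse impact" if impact_ratio < FOUR_FIFTHS_THRESHOLD
            else "within threshold")
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this hypothetical example, Group B's impact ratio is 0.75, below the 0.80 threshold, so the result would be flagged for further review.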

Title VII compliance also requires employers to carry out ongoing reviews to determine whether AEDTs may result in workplace discrimination. The onus is on all employers to act responsibly and take steps to prevent workplace discrimination resulting from their use of AI in HR and AEDTs.

Ensuring ethical automation

Collaborating with a trusted pay equity software provider enables employers to:

Identify and remedy potential bias: A 2022 Cambridge University study into AI concluded that the “attempted outsourcing of ‘diversity work’ to AI-powered hiring tools may unintentionally entrench cultures of inequality and discrimination by failing to address systemic problems within organizations.” Trusaic Payparity finds and remedies the root causes of pay disparities using advanced analytics and algorithms that pinpoint problematic factors, including biases and faulty systemic processes.

Ensure compliance with changing pay equity legislation: Working with a trusted pay equity software provider ensures compliance with EEOC Title VII guidance, the EU’s Pay Transparency Directive, and evolving pay equity laws in the US, including potential national pay transparency legislation.  

Comply with NYC Local Law 144: The following steps can be applied by NYC employers:

  • Review the use of AI in HR in your company to determine whether your organization uses AEDTs in its employment decisions. 
  • Carry out an in-depth evaluation of those software tools, including how and for what purposes they are used. This includes using pay equity software to identify bias within your compensation practices.

Prepare for the EU’s AI Act: Concerns over AI are not limited to the US. The EU recently proposed the AI Act, which would classify AEDTs as “high-risk” applications subject to specific legal requirements. On June 14th, the European Parliament approved the text of the draft legislation. Given the Act’s far-reaching changes, organizations are encouraged to act now to implement responsible AI policies, especially in all areas relating to employment and pay equity.  

Start with a pay equity audit: The first step to ensuring compliance is to analyze and understand the causes of pay disparities within your compensation structure. 
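As a rough illustration of what such an analysis can involve (a generic sketch, not Trusaic's actual methodology), the example below fits a simple wage regression: log pay is modeled against legitimate pay factors such as job level and tenure plus a gender indicator, so any remaining gender coefficient points to an unexplained disparity worth investigating. The data, column names, and the statsmodels-based approach are illustrative assumptions.

```python
# Rough sketch of one common pay equity audit technique: an OLS regression of
# log pay on legitimate pay factors plus a protected-class indicator.
# The data and column names are hypothetical; real audits use actual HRIS data
# and typically far richer controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=n),
    "job_level": rng.choice(["L1", "L2", "L3"], size=n),
    "tenure_years": rng.uniform(0, 15, size=n),
})

# Simulated pay with a small built-in gender gap, for illustration only.
base = {"L1": 60_000, "L2": 85_000, "L3": 120_000}
df["pay"] = (
    df["job_level"].map(base)
    * (1 + 0.01 * df["tenure_years"])
    * np.where(df["gender"] == "F", 0.97, 1.00)   # hypothetical 3% gap
    * rng.normal(1.0, 0.05, size=n)
)

# Regress log pay on controls plus gender; the gender coefficient estimates the
# adjusted (unexplained) pay gap after controlling for level and tenure.
model = smf.ols("np.log(pay) ~ C(gender) + C(job_level) + tenure_years", data=df).fit()
print(model.summary().tables[1])
```

A statistically significant coefficient on the protected-class indicator would flag an unexplained disparity to investigate and remediate; production pay equity tools typically apply far richer controls, cohort groupings, and formal significance testing.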

Ensure the use of responsible AI in your organization. Partner with Trusaic. 

Get started with your pay equity audit today.

Download: Pay Equity Definitive Guide