EEOC Hearing on Employment Discrimination in AI

By: Bre Timko and Dave Schmidt 

On January 31, 2023, the Equal Employment Opportunity Commission (EEOC) held a hearing entitled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” The hearing, which included over four hours of panelist presentations and responses to EEOC Commissioner questions, drew nearly 3,000 attendees. Panelists included lawyers, advocacy representatives, industrial and organizational (I/O) psychologists, and computer scientists, and spanned both academic and applied roles in the public and private sectors. A recording of the full hearing can be accessed here, and written testimony from each panelist can be found here. Key takeaways from the hearing are outlined and discussed below. 

1. More guidance is needed from EEOC on the use of AI in selection.  

Panelists agreed that more guidance from EEOC is needed in this space. Particular areas of recommended focus include 1) an employer’s duty to explore less discriminatory alternative selection procedures, 2) the legal responsibilities of labor market intermediaries that procure workers for employers and employment agencies, and 3) the application of Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA) to technology-driven AI tools. AI developers and vendors look to EEOC guidance to determine what actions to take, and they may not apply a broader lens unless one is specifically recommended. Panelists also suggested a cross-agency coalition (e.g., with the Department of Labor and Department of Justice) that would assemble a team of experts to develop such guidance. 

2. Statistical relationship evidence is not enough; job relatedness and explainability must be present. 

Another common theme throughout the hearing was that a statistical correlation between assessment scores and outcome data (e.g., job performance) is not necessarily enough to justify the use of these tools in employee selection. This issue is particularly relevant when tools are difficult to explain. This is not a new concept; it is something I/O psychologists have contemplated broadly across selection procedures. While criterion-related evidence is important and frequently used to support AI tools, it is also critical to show that these tools measure job-related skills and attributes. A thorough job analysis can be used to provide this evidence, as well as to identify important work outcomes that can serve as criterion measures. Panelists suggested that the features of the model should be explainable: if a user cannot explain why a feature is important to the target role, it should not be included in the algorithm. 
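To make the distinction concrete, the sketch below shows what criterion-related evidence looks like in its simplest form: a correlation between assessment scores and a performance criterion. All data, variable names, and thresholds here are hypothetical illustrations, not from any real validation study; the point of the panelists' comments is that even a strong coefficient like this is not sufficient without job-relatedness and explainability evidence.

```python
# A minimal sketch of a criterion-related validity check: correlating
# assessment scores with a job-performance criterion. Data are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical assessment scores and supervisor performance ratings
# for the same incumbents (criterion identified via a job analysis).
assessment_scores = np.array([62, 71, 55, 80, 68, 74, 59, 85, 66, 77])
performance_ratings = np.array([3.1, 3.8, 2.9, 4.2, 3.4, 3.9, 3.0, 4.5, 3.3, 4.0])

# Pearson correlation (the criterion-related validity coefficient) and its
# p-value. Per the hearing's theme, a significant correlation supports --
# but does not by itself justify -- use of the tool in selection.
r, p_value = stats.pearsonr(assessment_scores, performance_ratings)
print(f"Validity coefficient r = {r:.2f} (p = {p_value:.3f})")
```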

3. Audits and transparency should be the norm. 

Transparency has not always been a standard practice in the AI space, as vendors may be apprehensive about giving away too much of their company’s protected intellectual property. However, various local and state laws now require some transparency in the use of AI tools, and calls for greater transparency were a recurring theme of the hearing. Employers and applicants are calling for increased transparency surrounding the use of AI tools in employee selection, and AI vendors are seeking to build trust in their tools by releasing more information about how their algorithms function and the inputs (features) their algorithms use to make decisions. In other words, organizations should have enough information to reasonably evaluate compliance, and applicants exposed to AI hiring systems should be provided with information sufficient to understand whether and how they are being assessed by an AI tool or to determine if they require an accommodation. To encourage this, various panelists urged EEOC to mandate employer audits of any automated hiring systems in use, to develop its own automated governance tools that could be used to audit automated hiring systems, and to certify third-party providers offering audits. Panelists recommended looking to other fields, such as finance, that involve regular audits to learn from their guidelines and requirements. While this may require substantial effort by the agency, it has the potential to increase the consistency and effectiveness of audits over the longer term. 
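For readers unfamiliar with what such an audit computes, the sketch below illustrates one common metric: the adverse impact (selection rate) ratio evaluated against the familiar four-fifths rule of thumb. The group labels and counts are hypothetical, and real audits examine far more than this single statistic.

```python
# A minimal sketch of one common audit metric: the adverse impact ratio,
# i.e., each group's selection rate divided by the highest group's rate,
# flagged against the four-fifths (0.80) rule of thumb. Data are hypothetical.

hypothetical_outcomes = {
    # group: (number selected, number of applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

selection_rates = {g: sel / n for g, (sel, n) in hypothetical_outcomes.items()}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    flag = "review" if impact_ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```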

4. The effects of protected class characteristics and associated proxies cannot be ignored.  

Well-designed AI tools have the potential to reduce bias, but they are not inherently neutral. In fact, panelists argued that the most effective strategies for de-biasing algorithms require paying attention to protected class characteristics when building the model. AI vendors must use information about such characteristics to identify features that may need to be down-weighted or removed from the model due to large group differences. While directly incorporating protected class characteristics into the deployed model itself is problematic and should not be done, it is important to recognize that there may be proxies for discrimination that are more covert. For example, inputs such as zip code may serve as a proxy for race discrimination. One panelist provided an example of this when describing an instance where a system was trained using data from top performers. It found some of the most effective features differentiating top performers to be the name “Jared” and whether the person played high school lacrosse, indicating that these variables may be proxies for race, gender, and socioeconomic status. Panelists also noted that using profiles of current high-performing employees may end up replicating the current workforce, including any existing protected class characteristic disparities, and may identify traits that are not job relevant. These methods should be used with care and oversight; a foundation of job analysis and a critical review of features for job relevance play an important role.  
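The feature-screening step panelists described might look something like the sketch below, which flags model features showing large standardized group differences (Cohen's d) for job-relevance review. The feature names, data, and flagging threshold are hypothetical assumptions for illustration; this is one possible screen among many, not a complete de-biasing procedure.

```python
# A minimal sketch of screening candidate model features for large group
# differences -- one way a vendor might flag potential proxies for protected
# characteristics. Feature names, data, and the threshold are hypothetical.
import numpy as np

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized mean difference between two groups (pooled SD)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
# Hypothetical feature values for applicants in two demographic groups.
features = {
    "typing_speed": (rng.normal(60, 8, 200), rng.normal(59, 8, 200)),
    "zip_code_income_index": (rng.normal(0.7, 0.1, 200), rng.normal(0.5, 0.1, 200)),
}

for name, (group_a, group_b) in features.items():
    d = cohens_d(group_a, group_b)
    if abs(d) > 0.5:  # illustrative threshold; flagged features need human review
        print(f"{name}: d = {d:.2f} -> review for job relevance or remove")
    else:
        print(f"{name}: d = {d:.2f}")
```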

5. Cross-disciplinary teams should be involved in developing, validating, and monitoring AI tools. 

Panelists noted the importance of involving a team with diverse professional backgrounds in the development, validation, and monitoring of AI assessments. Multiple panelists and EEOC Commissioners stressed the importance of involving I/O psychologists in all phases of this process. Panelists also recommended that EEOC hire individuals with specific expertise in data science/machine learning and in I/O psychology to collaborate with current EEOC staff and attorneys to develop feasible parameters surrounding the use of AI-based algorithms. This underscores the necessity of cross-functional teams that can collaborate to ensure that AI tools not only meet legal requirements, but do so fairly and in accordance with professional standards across relevant disciplines. This is especially important in the AI space, where the general rule is that more data leads to more accuracy. As such, audits of algorithmic performance should not happen only at the outset but should be conducted regularly and independently as algorithms acquire more data and are potentially refined to produce more accurate predictions.  

The EEOC hearing was an engaging and insightful conversation that incorporated diverse, multidisciplinary perspectives. EEOC’s hosting of this hearing (which was part of EEOC Chair Burrows’s Artificial Intelligence and Algorithmic Fairness Initiative) and its receptivity to the topics discussed demonstrate the agency’s commitment to advancing policy and guidance on the use of AI tools in the employee selection space. We will likely see more to come from EEOC and other federal agencies in the near future. Stay tuned as DCI continues to provide updates on the ever-evolving intersection of AI and hiring practices. In the meantime, additional information about other recent developments in the AI space can be found here. 

If your organization is interested in learning more about bias audits of automated employment decision tools, please contact aholmes@dciconsult.com.
