Workforce Data Strategists, Heads-Up: AI Regulation is Here

Should workforce data strategists track AI regulation? Is tracking AI regulation enough for workforce data strategists? How are AI regulations for workforce decisions unique? Are there consequences of neglecting to track AI regulations? The article addresses these questions.
Photography by Littal Shemer Haim ©
(Reading Time: 4 minutes)

AI applications are growing rapidly, and people respond in various ways at every level, from individuals to organizations to states. In this article, I address an issue common to all these levels: the law. I recently stumbled upon a list of legislation trackers that follow countries’ efforts worldwide to respond to AI’s growth with laws, regulations, standards, and guidelines. Awareness of legislative policies offers insight into how the AI landscape is developing across regions. However, my focus here is AI in the domain of workforce data and the aspects that workforce data strategists need to concentrate on. Therefore, in this article, I answer questions about workforce AI regulation.

Should workforce data strategists track AI regulation?

Experts advise tracking AI regulations to ensure compliance and avoid costly fines. Furthermore, whether the organization uses AI tools intentionally or inadvertently, leaders should reassess practices, policies, and training to keep pace with new regulations. This advice is crucial because while most executives are considering increased investments in AI, fewer than half have established policies on responsible use.

Rules for AI in the workplace are already here. For example, New York City has passed a bias audit law for automated employment decision tools. It makes it illegal for employers to use AI for hiring and promotion decisions without meeting specific requirements, including publishing a bias audit and notifying candidates and employees about the use of AI. As other states advance their own AI regulations, you can’t afford to ignore these ongoing legal developments.
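To make the idea of a bias audit more concrete, here is a minimal sketch of the kind of calculation such audits typically revolve around: selection rates per demographic group and impact ratios relative to the most-selected group. The data and column names are hypothetical, and a real audit under the NYC law has specific requirements that this sketch does not cover.

```python
# Minimal sketch of an impact-ratio calculation, a core metric in many bias audits.
# Hypothetical data and column names; a real audit must follow the specific
# requirements of the applicable law and guidance.
import pandas as pd

candidates = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "selected": [1,   0,   1,   1,   0,   0,   1,   0,   0,   1],
})

# Selection rate per group: the share of candidates the tool selected.
selection_rates = candidates.groupby("group")["selected"].mean()

# Impact ratio: each group's selection rate divided by the highest selection rate.
impact_ratios = selection_rates / selection_rates.max()

report = pd.DataFrame({"selection_rate": selection_rates, "impact_ratio": impact_ratios})
print(report.round(2))
```

An impact ratio well below 1.0 (a common rule of thumb flags values under 0.8) suggests the tool may disadvantage that group and warrants a closer look.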

Is tracking AI regulation enough for workforce data strategists?

Understanding new and forthcoming laws is insufficient to ensure legal and responsible use of AI. It is essential to explore how the technologies you use are subject to the law in your region and to work with your vendors accordingly. Furthermore, in some organizations the legal department may lack the capacity to handle these compliance challenges, which is why bias-audit firms are already popping up like mushrooms after the rain. Unsurprisingly, traffic in the HR compliance software category is also increasing as AI regulation expands.

Tracking AI regulation in the workforce domain should therefore go beyond awareness of the legislation in relevant regions. It should also include developing the protocols and practices likely to satisfy that legislation as it goes into effect, by building new collaborations with tech vendors and consultants. However, such protocols are no substitute for carefully deciding which talent decisions should be informed by AI in the first place, and for determining how the organization mitigates the ethical risks of AI beyond current legislation. Until regulations fully govern the use of AI in the workplace, it is essential to follow public guides, and even vendor perspectives, to understand what responsible use of AI in workforce decisions looks like.

How are AI regulations for workforce decisions unique?

AI regulations for workforce decisions address bias, discrimination, and employer malpractice. Although the war for talent sometimes implies that people have the upper hand, organizations hold significantly more power at critical moments along the employee lifecycle, and AI widens these gaps in power relations. Therefore, AI regulation focuses explicitly on preventing potential harm in workforce decisions, particularly when those decisions conflict with shared values and the benefit of all parties.

A few examples of AI regulation, even if they do not apply in your region, illustrate this focus on bias and discrimination. As mentioned, under the law NYC passed, employers are responsible for any discriminatory impact of AI when using it in hiring, promotion, or firing decisions. In Maryland, the law requires employers who intend to use facial recognition in job interviews to provide adequate notice and obtain written consent from applicants. In Illinois, the law imposes transparency, consent, and data-destruction duties on organizations using AI to evaluate job applicants.

Laws about AI in HR-Tech that have already passed shed light on what might come next in the effort to protect employees and candidates from bias and discrimination in AI tools. These laws may not affect your activity yet. Still, dozens of proposed state laws on AI regulation this year have focused on AI involvement in HR business functions. Tracking legislation in progress helps you anticipate how activities in your region may soon be affected.

Are there consequences of neglecting to track AI regulations?

Unawareness of AI regulation for workforce decisions can lead to non-compliance, which can have severe consequences: legal and financial penalties, including lawsuits from employees who feel that AI tools have violated their rights. Non-compliance with AI regulations can also damage an organization’s reputation and erode the trust of employees and customers.

But trust issues may arise even if your region’s AI regulation has not yet matured. The news regularly brings stories about the negative consequences of apparent malpractice in workforce decisions that rely on AI. For example, a recent lawsuit alleges that Workday, a vendor of applicant-screening software, engaged in illegal age, disability, and race discrimination by selling its customers screening tools that rely on biased AI algorithms.

Bias in such tools appeared long before, even a decade ago. When Amazon developed its recruiting platform, the AI algorithms were trained to evaluate candidates on resume data from people hired in previous years. But most of those resumes had been submitted by men, so the tool was effectively biased against women. Amazon eventually scrapped the biased AI; however, we gained an insightful story that is now taught in every introductory course on people analytics.

The lesson from these stories is the importance of transparency. AI tools can be difficult to understand or interpret, which makes identifying and correcting discriminatory practices challenging. Organizations must ensure their AI tools are explainable, so that employees and candidates can understand how decisions are made.
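As a rough illustration of what “explainable” can mean in practice, the sketch below scores one candidate with a simple, hypothetical linear model and breaks the score into per-feature contributions. The feature names, weights, and values are assumptions for illustration only; real screening tools, and what regulators expect of their explanations, vary widely.

```python
# Rough sketch: explaining one candidate's score from a simple linear scoring model.
# Feature names, weights, and values are hypothetical.
import numpy as np

features = ["years_experience", "skills_match", "assessment_score"]
weights = np.array([0.4, 1.2, 0.8])    # assumed model weights
bias = -2.0                            # assumed model intercept

candidate = np.array([5.0, 0.7, 0.6])  # one candidate's (assumed) feature values

# Per-feature contribution to the raw score: weight * feature value.
contributions = weights * candidate
raw_score = bias + contributions.sum()
probability = 1.0 / (1.0 + np.exp(-raw_score))  # logistic link to a 0-1 score

for name, value, contribution in zip(features, candidate, contributions):
    print(f"{name:>18}: value={value:.2f}, contribution={contribution:+.2f}")
print(f"raw score = {raw_score:.2f}, predicted score = {probability:.2f}")
```

For more complex models, dedicated explanation techniques (for example, SHAP-style attributions) play a similar role, but the goal is the same: a candidate or employee should be able to see which factors drove a decision about them.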

In conclusion, ethical AI in workforce decisions is not a destination but a long journey. Challenges and complexities mark the path to regulating AI tools in workforce decisions. But every step taken, every ethical framework built, and every thoughtful decision made by workforce data strategists will contribute to shaping a more equitable and responsible future. We should be on that path; tracking regulation is only one perspective.

Littal Shemer Haim

Littal Shemer Haim brings data science into HR activities, guiding organizations to base decisions about people on data. Her vast experience in applied research, keen use of statistical modeling, constant exposure to new technologies, and genuine interest in people’s lives have led her to focus nowadays on HR Data Strategy, People Analytics, and Organizational Research.
