Legal Risks When Utilizing Artificial Intelligence in Employment Decision-Making
Lately, it feels like artificial intelligence is taking over every aspect of our lives. One area where this is increasingly true is Human Resources. Machine-driven Human Resources sounds contradictory, but it is happening. There are now machine learning algorithms that can perform recruiting, hiring, performance monitoring, and even terminations without human input. Because this technology is still in its infancy, there has not yet been significant government involvement. However, the few regulations and court cases we have seen so far suggest that an important issue moving forward will be the extent to which humans must be involved in making decisions about the employment of other humans.
An early case arose in 2018, when an employee of Amazon, Inc. filed a charge with the National Labor Relations Board alleging that Amazon violated section 8(a)(1) of the National Labor Relations Act. The employee had allegedly complained about Amazon's productivity rates and later faced termination. Although the employee withdrew the charge, Amazon's written response to the agency was noteworthy. According to Amazon, its algorithm monitored employee productivity and automatically issued notices when an employee failed to meet the standards; the system could even terminate employees in this manner. Employees could, however, appeal such decisions to a person in Human Resources.
In January 2024, two Amazon employees filed a class action lawsuit alleging that Amazon's system for monitoring leaves of absence had automatically terminated their employment despite both being on approved intermittent leave under the Family and Medical Leave Act. According to the plaintiffs, they appealed to the humans in charge, who declined to change the AI's decision.
In response to issues like these involving AI use in employment matters, various government agencies have proposed regulations to protect employees. For example, the Equal Employment Opportunity Commission issued nonbinding guidance in 2023 recommending that employers using AI tools for hiring take steps to ensure their vendors are checking the algorithms for potential discrimination against applicants in protected classes. In October 2024, the Department of Labor issued its own guidance on AI use. According to the DOL, best practices include "ensuring meaningful human oversight for significant employment decisions." In California, the Civil Rights Department issued proposed regulations in 2024. Among the proposed requirements are obligations to store all data used by an AI in making decisions and to ensure that humans are involved in making individualized assessments of applicants' criminal histories.
State and federal agencies are not alone in their concern about this issue. In 2021, New York City enacted Local Law 144, which requires employers using AI to conduct audits of their algorithms and publish the results. The Colorado AI Act requires developers of algorithms to use "reasonable care" to ensure there is no algorithmic discrimination. In California, the Legislature has considered AB 331 and AB 2930, both of which would require regular "impact assessments" of AI tools used in the employment context. Although California has, to date, enacted neither bill, some regulation in this area seems inevitable.
Common to all of these statutes, proposed regulations, and nonbinding guidance documents is the requirement that humans have some involvement in monitoring the AI's performance to ensure compliance with other employment laws. Employers interested in using AI software for employment decisions should be mindful of this and ensure that humans are either auditing the systems or participating in any decisions before they become final.
This communication may be considered advertising in some jurisdictions. It is intended to provide general information about legal developments and is not legal advice. If you have questions about the contents of this alert, please contact Kellen Crowe.