Diana Schroeher, Esq.
The EEOC and the DOJ Issue Warnings to Employers on AI That May Violate the ADA
The federal Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) have each issued guidance to employers on the use of Artificial Intelligence (AI), software, and algorithms in hiring and employee performance decisions. The agencies want to ensure that employers’ use of these decision-making technologies does not run afoul of the Americans with Disabilities Act (ADA) and other civil rights laws enforced by the EEOC. Employers may inadvertently discriminate against applicants and employees when using AI technologies by unfairly screening out qualified individuals with disabilities and/or failing to offer reasonable accommodations. The ADA prohibits discrimination against qualified individuals with disabilities and requires employers to offer reasonable accommodations to applicants or employees with disabilities, unless doing so would be an undue hardship for the employer.
As part of the EEOC’s ongoing Artificial Intelligence and Algorithmic Fairness Initiative, on May 12, 2022, both the EEOC and the DOJ issued guidance documents expressing their concern that employers’ use of AI technology may violate the ADA. The DOJ issued “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring.” The EEOC issued “The ADA and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” which includes an extensive Q&A explaining how employers’ use of software that relies on algorithmic decision-making may violate the ADA. The EEOC defines “software” to include automatic resume-screening software, hiring software, chatbot software for hiring and workflow analysis, video interviewing software, analytics, and employee monitoring and worker management software. “Algorithms” are defined as a “set of instructions that can be followed by a computer to accomplish some end,” including to evaluate and rate individuals at various stages of employment, such as hiring, performance evaluation, promotion, and termination. The EEOC adopted Congress’s definition of Artificial Intelligence: “a machine-based system that can, for a set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” AI can include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.
One example of how an employer’s AI decision-making tools may violate the ADA’s restrictions on disability-related inquiries and medical examinations is when the information requested reveals a disability or medical condition, or the inquiry qualifies as a “medical examination”; both are prohibited before a conditional offer of employment is made to the applicant.
The EEOC discourages the indiscriminate use of Artificial Intelligence (AI), software, and algorithms for hiring and employee performance-related decisions. The EEOC’s guidance document suggests that employers minimize the chances that these tools will violate the ADA and disadvantage individuals with disabilities by conducting management training; ensuring that the tools have been designed to be accessible to individuals with as many different disabilities as possible; making sure the tools measure only abilities or qualifications that are truly necessary for the job; ensuring that the necessary abilities or qualifications are measured directly (rather than indirectly); and consulting with software vendors to ensure that their products were designed to comply with the ADA to the extent possible.