Regulators Consider Employer Use of AI in Employment Decisions

Employment Law


The legality of using artificial intelligence (AI) in employment decision-making should be on employers' radar, as multiple regulators are addressing the issue.

Last fall, the Equal Employment Opportunity Commission (EEOC) launched an initiative to ensure that AI “and other emerging tools” used in hiring and other employment decisions comply with federal civil rights laws.

“Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment,” EEOC Chair Charlotte Burrows said in a statement. “At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

Taking a multipronged approach, the agency announced plans to establish an internal working group to coordinate its efforts; begin a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications; gather information about the adoption, design and impact of hiring and other employment-related technologies; and identify promising practices.

Perhaps most important for employers, the EEOC said it will issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.

States and cities are similarly considering the issue.

In California, draft regulations on AI and employment decisions were released in late March by the Fair Employment and Housing Council.

The proposal would update existing state regulations to include a new technology, dubbed an “automated decision system” (ADS), which is defined as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”

Examples of an ADS include an algorithm that screens resumes for particular terms or patterns; an algorithm that employs face and/or voice recognition to analyze facial expressions, word choices and voices; an algorithm that employs gamified testing that includes questions, puzzles or other challenges used to make predictive assessments about an employee or applicant to measure characteristics including but not limited to dexterity, reaction time, or other physical or mental abilities or characteristics; and an algorithm that employs online tests meant to measure personality traits, aptitudes, cognitive abilities and/or cultural fit.

Pursuant to the proposal, it would be illegal for an employer to use qualification standards, employment tests, ADSs or other selection criteria that screen out or tend to screen out applicants or employees based on characteristics protected by the Fair Employment and Housing Act.

The draft does include a carveout for standards, tests and other selection criteria that are “shown to be job-related for the position in question and are consistent with business necessity.”

Comments are currently being accepted on the proposal.

On the other side of the country, New York City has already enacted a measure that restricts the use of AI in employment-related decisions. Passed by the New York City Council in November 2021, the bill became law on December 11, 2021, without Mayor Bill de Blasio’s signature.

The law regulates the use of “automated employment decision tools” on candidates and employees residing in New York City. Such tools are defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

An employer is prohibited from using AI tools in making employment decisions unless the tool has been subject to a “bias audit” by an “independent auditor” within the prior year and a summary of the audit results and distribution data for the tool have been made publicly available on the employer’s or employment agency’s website.

Individuals also have the ability to request an accommodation from being subject to an AI tool and can request information regarding the data that was collected about them.

Violations of the law can result in a civil penalty of up to $500 for the first violation and between $500 and $1,500 for each subsequent violation.

The law takes effect January 1, 2023.



Why it matters: Employers should be aware of regulators’ interest in the use of AI in employment decisions, with technical assistance forthcoming from the EEOC. Employers in New York City should begin preparing to comply with the new law, while those in California should keep an eye on the Fair Employment and Housing Council’s draft regulations. Additional jurisdictions are likely to weigh in as well.


ATTORNEY ADVERTISING

pursuant to New York DR 2-101(f)

© 2024 Manatt, Phelps & Phillips, LLP.

All rights reserved