AI-Assisted Hiring Faces a New Compliance Landscape in 2026: California and Illinois Put Discriminatory Impact and Transparency Front and Center
Executive Summary
State regulation of artificial intelligence (AI) has often been fragmented and aspirational, though this has been changing quickly in the employment context. California and Illinois have now adopted enforceable regimes that directly regulate employers’ use of AI-assisted hiring and employment tools, with a shared focus on discriminatory outcomes, documentation, and transparency.
California’s amended Fair Employment and Housing Act (FEHA) regulations elevate anti-bias testing and related proactive efforts as central evidence in discrimination investigations and litigation. Illinois’s House Bill 3773 (HB-3773) goes further by expressly treating the use of AI that results in discriminatory effects as a civil rights violation and by imposing affirmative notice obligations when AI is used in employment-related decisions. Together, these laws signal a shift from theoretical “responsible AI” principles to concrete compliance expectations for employers.
For organizations using automated decision systems in recruiting, hiring, promotion or other employment decisions, these developments require immediate action: understanding where AI is used, evaluating how those systems perform in practice, strengthening governance and vendor oversight, and preparing for heightened regulatory and private scrutiny.
A Shift from Policy Debate to Employment Enforcement
While many state and federal AI proposals have stalled or narrowed, employment has emerged as one of the first regulated use cases to see sustained enforcement attention. California and Illinois now stand at the forefront, using existing civil rights frameworks to address AI-assisted decision making in the workplace.
Both regimes reflect a common premise: if AI tools influence employment outcomes, employers are responsible for ensuring those tools do not unlawfully discriminate and for being able to show their work.
California FEHA: Anti-Bias Testing as Evidence
Effective October 1, 2025, California amended FEHA regulations to explicitly address the use of automated decision systems (ADS) in employment. The rules adopt a broad definition of covered systems, encompassing tools that screen, score, rank or recommend candidates, even where humans retain final decision-making authority.
Critically, the regulations make clear that an employer’s anti-bias testing and similar proactive efforts may be considered evidence when evaluating discrimination claims. Regulators and courts are directed to assess whether testing occurred, along with its quality, scope, recency and results, and how the employer responded to identified risks.
While FEHA does not impose a formal testing mandate, the practical takeaway for employers is that unsupported assurances, vendor representations or one-time reviews will carry little weight when outcomes are challenged.
Illinois HB-3773: Discriminatory Impact and Notice Obligations
Illinois’s HB-3773, which takes effect January 1, 2026, reinforces and expands this trajectory. The law makes it a civil rights violation for an employer to use AI in a manner that results in discrimination under the Illinois Human Rights Act. Unlike many AI statutes that focus on intent or process, HB-3773 squarely centers on effect.
The law also imposes affirmative notice requirements when AI is used for specified employment-related purposes, including recruiting, hiring, promotion and other employment decisions. Recently released draft regulations from the Illinois Department of Human Rights further expand these notice obligations, signaling active enforcement intent.
Together, these provisions mean that Illinois employers must manage discriminatory risk and ensure transparency around when and how AI tools are used.
Converging Themes: Outcomes, Documentation, and Oversight
Although California and Illinois take different regulatory paths, they converge on several core expectations, including:
- Discriminatory impact matters. Both regimes focus on outcomes, not just design intent or internal policies.
- Testing and evaluation are central. Employers are expected to understand how AI systems perform in practice and to reassess as systems evolve.
- Documentation and transparency are critical. Disclosure, recordkeeping, retention of data from AI human resources (HR) systems and evidence of responsive action are essential to a defense.
- Responsibility cannot be outsourced. Employers remain accountable for AI tools provided or operated by vendors, staffing agencies or third parties.
In investigations and litigation, regulators are likely to look beyond system outputs to whether employers understood and appropriately managed the systems they chose to deploy.
What Effective Employer Readiness Looks Like
In this environment, effective compliance requires more than high-level policies. Employers should be prepared to demonstrate that they:
- Assess outcomes across protected groups. Employers should understand whether, and to what degree, an AI HR tool produces different outcomes for different groups (one common outcome check is sketched after this list).
- Enable meaningful comparisons. Evaluations should compare similarly situated candidates, accounting for job-related qualifications.
- Evaluate performance consistency. Testing should examine whether a system operates reliably and consistently across groups, not just in aggregate.
- Span the full lifecycle of the system. This includes pre-deployment evaluation using historical data, monitoring during pilot or initial rollout and ongoing post-deployment review as systems, data and job requirements evolve.
- Integrate with AI governance. Testing should sit within a broader governance framework that addresses permissible use, impact assessments, documentation, transparency and third-party risk management.
- Align with existing anti-discrimination programs. AI-related evaluation should complement, not replace, traditional equal employment opportunity and compliance efforts.
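As a concrete illustration of the first item above, the following is a minimal sketch of one common outcome check: computing per-group selection rates and flagging any group whose rate falls below four-fifths of the highest group’s rate, the screening heuristic drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures. Neither FEHA nor HB-3773 mandates this particular test, and the data and function name here are hypothetical.

```python
from collections import defaultdict

def adverse_impact_ratios(records, threshold=0.8):
    """Per-group selection rates and adverse impact ratios.

    `records` is an iterable of (group, selected) pairs, where
    `selected` is True if the AI tool advanced the candidate.
    Each group's ratio is its selection rate divided by the
    highest group's rate; ratios below `threshold` (the EEOC
    four-fifths heuristic) are flagged for closer review.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(bool(selected))
        counts[group][1] += 1

    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top = max(rates.values())
    return {
        g: {"rate": rate, "ratio": rate / top, "flagged": rate / top < threshold}
        for g, rate in rates.items()
    }

# Hypothetical screening outcomes from an AI resume-ranking tool:
# group A advanced at 60%, group B at 35% (ratio ~0.58, flagged).
sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 35 + [("B", False)] * 65)
for group, stats in adverse_impact_ratios(sample).items():
    print(group, stats)
```

A flagged ratio is a prompt for deeper, job-related statistical analysis, not a finding of unlawful discrimination; in practice, a check like this is most meaningful when run on similarly situated candidates and repeated across the system’s lifecycle, consistent with the comparison and lifecycle items above.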
Isolated or one-time testing, especially testing conducted solely by vendors without employer oversight or sufficient transparency, will be increasingly difficult to defend.
What Employers Should Do Now
Employers operating in California, Illinois, or both should act now to prepare for this new enforcement landscape:
- Inventory automated decision systems. Identify where ADS tools are used across all areas of employment, including recruiting, screening, assessment and promotion.
- Build or adapt governance processes. Ensure governance frameworks explicitly address evaluation and monitoring of employment-related AI.
- Establish legal oversight. Testing and evaluation should be conducted or supervised in a way that supports defensibility and privilege where appropriate.
- Prepare for notice obligations. Illinois employers should assess where disclosures will be required and how they will be operationalized.
- Strengthen vendor oversight. Contractual terms should support transparency, access to information and the ability to evaluate system outcomes.
- Begin structured testing. Employers should not wait for enforcement activity to assess how their systems perform in practice.
How Manatt Can Help
Manatt’s teams help employers operationalize emerging state requirements in a way that is practical, defensible and aligned with employment and civil rights law. We work with organizations to:
- Design evaluation and monitoring programs aligned with FEHA and Illinois Human Rights Act expectations;
- Conduct AI assessments under attorney-client privilege, where appropriate;
- Integrate AI oversight into employment compliance, investigations and response strategies;
- Support vendor diligence, negotiation and contracting for AI-assisted hiring tools; and
- Prepare for transparency and notice obligations tied to AI use in employment.
As regulators move from AI policy debates to concrete enforcement, employers that invest now in disciplined disclosure, evaluation, governance and documentation will be best positioned to manage risk.