AI Has Captured the Attention of the U.S. Public—and Federal Enforcers

Client Alert

Four federal agencies issued a unique Joint Statement this week on enforcement priorities related to “automated systems,” including artificial intelligence (AI). The short, three-page statement reiterated prior actions and statements by the Consumer Financial Protection Bureau (CFPB), the U.S. Department of Justice (DOJ) Civil Rights Division, the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC). While the statement’s contents may not be novel, it was unusual to see the leaders of these disparate federal agencies present such a unified front. Companies should therefore take note and act on the embedded compliance hints and recommendations.

Key Takeaways:

1) Eliminate Racial and Ethnic Bias, Especially Related to Basic Life Needs

In their Joint Statement, the federal agencies acknowledged that AI and other technology bring innovation and efficiency to the market, but the enforcers emphasized that technology could also introduce bias and discrimination. The links within the Joint Statement, together with a closer review of the referenced materials, provide insight into some hot-button issues for the enforcers. First, the Joint Statement demonstrates that federal concerns sweep more broadly than AI, covering any technology that automates a decision without human intervention. This could include software programs that simply generate reports based on preset key words or criteria, without sophisticated AI. Second, the Joint Statement emphasized enforcement action against companies that use technology to discriminate based on race or ethnicity, as shown by the mission statements of all four agencies and specific examples provided by the DOJ (“disparate impact against Black and Hispanic rental applicants”) and the FTC (imposition of “higher [automobile] borrowing costs on Black and Latino buyers”).

The Joint Statement also highlighted federal efforts to protect basic life needs from discrimination by automated systems. In particular, the agencies noted that their enforcement priorities covered:

  • Housing – See the DOJ’s “statement of interest,” which it filed in a suit against companies using AI to unlawfully screen building tenants.
  • Consumer Credit – See the CFPB circular on AI algorithms in credit transactions, and the FTC’s unfairness case against companies that impose higher borrowing costs on car buyers.
  • Employment – See the EEOC’s technical assistance document, explaining that the Americans with Disabilities Act applies to AI used to assess job applicants and employees.
  • Health Care – See the FTC’s significant concerns, stating, “AI that was meant to benefit all patients may worsen healthcare disparities for people of color.”

2) Focus on Tested, Specific-Purpose Automated Systems

The Joint Statement both touched on federal enforcement priorities and provided advice on the structure of proper automated systems. In this respect, the Joint Statement mimicked guidance already appearing in other policy statements or frameworks, like the White House Blueprint for an AI Bill of Rights, the European Union’s proposed AI Act, and the National Institute of Standards and Technology’s AI Risk Management Framework. The Joint Statement, however, boiled federal structural concerns down to three areas: Data and Datasets, Model Opacity and Access, and Design and Use. We think these concerns translate into several basic practice pointers for companies interested in AI. Specifically, companies can lower their legal risks and reduce federal scrutiny if they build or deploy AI and automated systems that include the following:

  • Clean Learning Data – AI systems need to learn from existing data sets, and federal enforcers highlighted the need to scrutinize learning sets (especially historic data) for elements that could introduce racial bias or other errors.
  • Transparent Models With Predictable Outcomes – Federal enforcers called into question “‘black boxes’ whose internal workings are not clear to most people … even the developer of the tool.” Enforcers clearly want to understand how a system works and whether its results are predictable and fair.
  • Specific-Use Designs – The enforcers worried that developers often have “flawed assumptions” about potential users and suggested that specific-purpose systems are safer than general AI tools made available to the public.

Of course, these principles do not cover the entire field of AI-related concerns, and companies should also consider the extent to which their use of generative AI raises legal risks, including as it relates to recent guidance from the U.S. Copyright Office.

Finally, the Joint Statement reminded readers of the extreme remedies the agencies could seek. The FTC in particular referenced a recent settlement requiring a company to destroy both the results of its algorithmic product and the algorithm itself, forcing it to start from scratch to build a compliant AI tool. The other three agencies referenced documents containing similarly onerous potential remedies.

The short Joint Statement issued this week by the CFPB, DOJ, EEOC and FTC may have struck some as a mere public relations exercise. But it was unique, voicing common concerns across federal agencies and providing hints and recommendations about how companies can more safely develop and deploy AI and other automated systems.




© 2024 Manatt, Phelps & Phillips, LLP. All rights reserved.