EU Reaches Consensus on AI Act: What It Means for U.S. Companies

Client Alert

On Friday, December 8, 2023, the European Parliament, the Council of the European Union (Council) and the European Commission (Commission) reached a provisional agreement on the Artificial Intelligence Act (AIA). The agreed text has not yet been published, but press releases and statements from policymakers indicate that the European Union (EU) AIA will continue to follow the risk-based approach of the earlier June version of the AIA. The new draft also includes provisions to create a new European AI Office to enforce the AIA in collaboration with national authorities, along with a scientific panel of independent experts that will issue alerts on systemic risks and help classify and test advanced models.

As currently drafted, the AIA will require organizations to meet different obligations depending on which of the following types of risk their AI activities pose:

  • Minimal risk—activities that will require little oversight (presumed by EU officials to be the largest category), including low-level enabling tools such as AI-enabled video games, spam filters and recommendation systems.
  • Limited risk—activities that will require transparency notices, such as chatbots, so that users know they are interacting with a machine and can ask to speak to a human.
  • High risk—activities that could potentially impact critical infrastructure (water, gas, electricity, transportation); access to education, employment and welfare benefits; medical systems and safety; certain systems in law enforcement, border control, administration of justice and democratic processes; and biometric identification systems. High-risk activities would be subject to a number of compliance measures, including risk assessments, detailed logging and documentation, security and accuracy testing, and human oversight. To promote innovation, the draft AIA provides for regulatory sandboxes and real-world testing of “high-risk” AI systems.
  • Unacceptable risk—banned activities that pose a clear threat to the safety, livelihoods and rights of people, including toys that encourage dangerous behavior, “social scoring,” predictive policing, and many uses of biometrics in the workplace and in law enforcement. Final negotiations carved out narrow allowable uses of facial recognition or “remote biometric identification” (RBI) systems for national security matters and serious criminal cases.

The agreed-upon draft imposes specific obligations on general-purpose AI (GPAI) systems. Taking a two-tiered approach, all organizations providing GPAIs will have to prepare technical documentation, comply with EU copyright law and submit detailed summaries about the content used to train their GPAIs. Larger GPAIs that pose “systemic risk” will have to undergo additional testing, including model evaluations and adversarial testing, and must report on their energy efficiency and any serious incidents. Early drafts indicate that “systemic risk” will be presumed based partly on the amount of computing used to train a model, measured in floating-point operations (FLOPs). Commentators expect the cutoff to be 10^25 FLOPs, meaning that models trained with more compute than GPT-3.5 would presumptively pose “systemic risk.” The Commission will work with industry, the scientific community, civil society and other stakeholders to develop codes of practice to operationalize GPAI obligations.
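
For a rough sense of what that threshold implies, a common rule of thumb estimates a model’s total training compute as about 6 × parameters × training tokens. The Python sketch below is purely illustrative, not a legal test: the 10^25 figure reflects commentary rather than final legal text, and the model profiles are hypothetical.

    # Illustrative only: the "6ND" rule of thumb approximates training
    # compute as ~6 * N parameters * D training tokens.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cutoff reported by commentators

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Rough total training compute via the 6ND heuristic."""
        return 6 * parameters * training_tokens

    # Hypothetical model profiles (assumed figures, not AIA classifications).
    profiles = {
        "70B-parameter model, 2T tokens": (70e9, 2e12),
        "1T-parameter model, 10T tokens": (1e12, 10e12),
    }

    for name, (params, tokens) in profiles.items():
        flops = estimated_training_flops(params, tokens)
        status = "above" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
        print(f"{name}: ~{flops:.1e} FLOPs ({status} the reported 1e25 cutoff)")

Under these assumed figures, the smaller profile lands around 8.4 × 10^23 FLOPs, well below the reported cutoff, while the larger one exceeds it.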

While the draft AIA undergoes formal approval through the European Parliament and the Council, the European Commission will launch an “AI Pact” to gain voluntary commitments to implement key obligations of the AIA ahead of legal deadlines. The Commission also will continue to work with the international community to promote guiding principles and codes of conduct, like those approved by the G7 at Hiroshima in October 2023.

Most AIA provisions will take effect two years after the law enters into force, and penalties could be substantial: up to 35 million euros or 7% of global annual turnover for violations involving banned AI applications, up to 15 million euros or 3% for violations of other obligations, and up to 7.5 million euros or 1.5% for supplying incorrect information. Certain provisions will apply sooner: the rules on GPAIs will apply after 12 months, and the bans on certain uses of AI will apply within six months of the AIA’s entry into force.
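
To see how these tiers scale with company size, consider the minimal Python sketch below. It assumes, as widely reported, that the applicable cap is the higher of the fixed euro amount and the turnover percentage; the tier figures mirror those above, and the example turnover is hypothetical.

    # Minimal sketch of the reported penalty tiers. Assumption (widely
    # reported, not confirmed final text): the cap is the higher of the
    # fixed euro amount and the share of global annual turnover.
    PENALTY_TIERS = {
        "banned_ai_application": (35_000_000, 0.07),   # EUR cap, turnover share
        "other_violation":       (15_000_000, 0.03),
        "incorrect_reporting":   (7_500_000,  0.015),
    }

    def max_fine(category: str, global_annual_turnover_eur: float) -> float:
        """Return the reported maximum fine for a violation category."""
        fixed_cap, turnover_share = PENALTY_TIERS[category]
        return max(fixed_cap, turnover_share * global_annual_turnover_eur)

    # Example: a company with EUR 2 billion in global annual turnover
    # would face a cap of EUR 140 million for a banned-application violation.
    for category in PENALTY_TIERS:
        print(f"{category}: up to EUR {max_fine(category, 2e9):,.0f}")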

Note that obligations under the AIA are intended to interact with the EU Digital Services Act (DSA), which requires online intermediaries and platforms to explain how their algorithms work. The DSA passed in November 2022; platforms and search engines that reach at least 45 million monthly active users in the EU have been subject to the DSA since August 25, 2023, while all other covered online providers must comply beginning February 17, 2024.

While awaiting final approval of the draft AIA, we recommend that organizations develop an internal data governance structure and operationalize new AI guidelines. In the U.S., organizations can follow the practical steps in the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST). To help organizations comply with their legal obligations globally, Manatt will continue to monitor the rapidly changing AI legal landscape, including the draft regulations on automated decision-making technology that the California Privacy Protection Agency released just over a week before the EU AIA provisional agreement.


ATTORNEY ADVERTISING pursuant to New York DR 2-101(f)

© 2024 Manatt, Phelps & Phillips, LLP. All rights reserved.