AI Wrapped 2025: The Year Hypothetical AI Risks Became Operational Reality

Executive Summary

2025 marked a material shift in the artificial intelligence (AI) governance landscape. While the United States federal government did not enact a comprehensive AI statute, regulatory momentum accelerated at the U.S. state level and in the courts. Collectively, these developments transformed AI regulation from a long-term policy discussion into a near-term compliance challenge for organizations developing and deploying AI systems.

Below, we summarize the key regulatory actions from 2025 and outline what organizations should expect and prepare for in 2026 and beyond.

Recurring Regulatory Themes in 2025

Despite jurisdictional differences, 2025 legislation and enforcement activity repeatedly converged around a common set of risk areas.

  • Healthcare: AI as a Patient-Safety Issue. U.S. state policymakers increasingly approached AI in healthcare through the lens of accountability and consumer protection rather than accepting the argument that regulating in this space would stymie innovation. Enacted state legislation focused on three primary areas: transparency when AI systems interact with patients, guardrails around AI-enabled mental health tools, and scrutiny of automated decisions affecting access to care or coverage. For more information on AI regulation in healthcare, please read our overview of 2025 AI policy and healthcare priorities.
  • Child Safety: A Primary Legislative Driver. Concerns about AI’s interaction with minors, including through chatbots, companion tools, and content-generation systems, became a politically durable driver of AI legislation. When it came to child safety, legislation often favored clear, prescriptive requirements over more flexible, principles-based standards. Please see our coverage of one such law in New York and the related Manatt Digital Law and Technology Policy Forum discussion for more analysis.
  • Deepfakes and Synthetic Media. Deepfakes were also a legislative and regulatory priority in 2025. States moved to prohibit deceptive or nonconsensual AI-generated content while adding basic disclosure and labeling requirements. In parallel, private industry adopted provenance and content authentication standards. Please see our related webinars for more.
  • Continuing Debate Over Developer Responsibility. While many state AI laws can be read to apply equally to AI developers and deployers, attempts to pass comprehensive regulatory frameworks applicable to major AI developers, such as frontier model developers, continued to see mixed results. Most notably, California passed the Transparency in Frontier Artificial Intelligence Act (TFAIA) and New York passed the Responsible AI Safety and Education (RAISE) Act; however, both laws are stripped-down versions of more ambitious bills. Colorado lawmakers also showed some interest in slowing implementation of the Colorado AI Act, the state’s landmark risk-based AI legislation. See our analysis for more.

Common Compliance Mechanisms

Across industry sectors and multiple U.S. jurisdictions, the same regulatory tools appeared repeatedly:

  • disclosure obligations when individuals interact with AI or AI-generated content;
  • documentation requirements addressing the AI system’s purpose, its limitations, and risk mitigation obligations;
  • accountability frameworks clarifying who bears responsibility for AI-driven outcomes;
  • safeguards designed to detect and prevent potentially harmful uses of AI; and
  • prohibitions of certain deployments of AI that do not involve sufficient human monitoring and intervention.

These regulatory provisions and industry standards are increasingly forming the foundation of AI governance.

Federal Legislation Mostly “Dead on Arrival”

With the exception of the TAKE IT DOWN Act, no federal legislation targeting AI was signed into law in 2025. The TAKE IT DOWN Act did not exclusively focus on AI, but it broadly banned the online sharing of individuals’ intimate images without their consent, whether those images are real or digitally created.

Other notable proposals did not make it to the President’s desk, nor even to a vote in either the Senate or House of Representatives, though such bills remain priorities for their sponsors, and we expect legislative efforts around them to continue in 2026. These bills included:

  • The NO FAKES Act, S. 1367, “a bill to protect intellectual property rights in the voice and visual likeness of individuals, and for other purposes,” was reintroduced in the Senate in April. The bipartisan bill has support from major industry players as well as content creators but has drawn criticism from civil libertarians. No action has been taken in Congress since the reintroduction.
  • The TRAIN Act, S. 2455, “a bill to create an administrative subpoena process to assist copyright owners in determining which of their copyrighted works have been used in the training of artificial intelligence models,” is another bipartisan bill with no action since its introduction in July.
  • The Kids Online Safety Act (KOSA) has been introduced several times over the past four years, most recently on December 5, 2025, as H.R. 6484, “to protect the safety of minors on the internet.” However, it appears to be losing momentum.
  • H.R. 1694, first introduced in May 2023 and reintroduced in February of this year, directs the National Telecommunications and Information Administration to study, gather feedback on, and report to Congress about accountability measures for AI systems used in communications technologies. No action has been taken since February.

Courts Added Complexity, Not Closure

U.S. judicial decisions in 2025 underscored that AI-related legal risk, particularly around copyright and training data, remains highly fact-specific and is still a long way from being resolved. Federal district courts reached different conclusions on copyright fair use defenses depending on the source of training materials (pirated vs. lawfully acquired), the market impact of the AI tools, evidence of substitution in AI outputs, and whether the AI training is “transformative.”

Key developments include:

  • Thomson Reuters v. ROSS (Feb. 2025, D. Del.): Circuit Judge Bibas (sitting by designation) ruled on summary judgment against ROSS, finding that the AI company’s copying and use of Thomson Reuters’ Westlaw headnotes to build its legal research tool infringed Thomson Reuters’ copyrights. The court rejected ROSS’s fair use affirmative defense, ruling that ROSS’s use was commercial and a market substitute. While this is not a generative AI case, it is the first U.S. court decision finding that AI training on copyrighted material is not fair use. The case is currently on appeal before the Third Circuit Court of Appeals.
  • Kadrey v. Meta (June 2025, N.D. Cal.): District Judge Chhabria ruled on summary judgment in favor of defendant Meta, finding that Meta’s use of the plaintiffs’ copyrighted books to train its Llama foundation model was “highly transformative.” The court noted, however, that transformativeness alone is not enough to render a use fair and non-infringing, and Judge Chhabria indicated that, had the plaintiffs presented better evidence of market harm, he might not have found fair use under a market dilution theory. He stated that copyright holders can challenge fair use defenses if they can provide sufficient evidence of market harm that outweighs the transformative aspects of generative AI training. Read our additional analysis of this case for more information.
  • Bartz v. Anthropic (June 2025, N.D. Cal.): Also in June, District Judge Alsup granted partial summary judgment to Anthropic, finding that its use of the plaintiffs’ books for AI training was fair use in part. The dividing line depended on where Anthropic sourced the books it used to train its models: where they were lawfully acquired (hard copies that were then scanned), the use was fair, but where they were obtained from pirated shadow libraries, it was not. Judge Alsup then certified the case as a class action, after which the parties announced a $1.5 billion settlement, approximately $3,750 per work, the largest copyright settlement in U.S. history. Please read our additional analysis of this case for more information.
  • Garcia v. Character AI (May 2025, M.D. Fla.): Though the case is still in the discovery and pre-trial motion phase, a key outcome of the May 2025 ruling was the judge’s decision that the Character.AI platform qualifies as a “product” for purposes of product liability claims. The court found that the platform is not “pure speech” protected by the First Amendment (or Section 230 of the Communications Decency Act), as the defendants argued, marking a landmark moment in AI litigation.
  • Major Labels vs. Suno and Udio: In October, the major record labels announced the first of what became several settlements in Universal Music Group (UMG), Warner Music, and Sony Music’s copyright litigation against AI music platforms Uncharted Labs (Udio) and Suno. Some claims remain pending. Read our analysis for more detail on what these settlements and partnerships mean for artists and rightsholders.

For organizations, these developments mean that copyright and product liability risks related to AI systems can no longer be treated as hypothetical.

What This Means for 2026 and Beyond

Three implications stand out:

  1. Multi-jurisdictional AI compliance is now a baseline expectation. Even absent a U.S. federal AI law, U.S. state legislation and global frameworks (in particular, the EU AI Act) are creating de facto standards, particularly for healthcare, consumer-facing AI, and generative AI systems. Though the Trump administration is seeking to preempt state AI laws and is pushing back against the EU AI Act, the impact of those efforts is uncertain, and most organizations still need to comply with existing laws.
  2. Scalable and flexible governance will outperform reactive compliance. Organizations will increasingly need to invest in reusable compliance measures, including standardized disclosures, system documentation, incident response protocols, and AI-specific vendor oversight. A dynamic regulatory environment and rapidly changing technological capabilities further underline the need for flexible governance frameworks.
  3. AI governance is becoming an enterprise-wide function. Regulatory expectations now span legal, technical, procurement and operational teams. Regulators and courts are converging on a core set of questions: who owns the system, how is risk managed, and what happens when it fails?

2025 was not the year AI regulation fully arrived, but it was the year that preparation became unavoidable. Organizations that treat AI governance as a core compliance function, rather than a future policy issue, will be better positioned as regulatory enforcement and cross-border requirements continue to mature.

How Manatt Can Help

Manatt advises organizations in a wide variety of industries on AI governance, model evaluation, procurement oversight and ongoing compliance across U.S. and global regulatory regimes. In addition, Manatt helps clients license content to AI companies for AI training as well as bring claims for misuse of clients’ content, including copyright, business data, and personal data. We help clients translate emerging AI rules and laws into practical, scalable compliance programs aligned with their operational realities.

For questions about AI regulation, litigation, compliance readiness, AI governance and risk management, or AI training data licensing and protecting an organization’s content, please contact the authors or your Manatt relationship partner.