Major Federal AI Developments Signal a Rapidly Shifting Regulatory Landscape

Recent weeks have brought a series of significant developments in artificial intelligence (AI) policy at the federal level. Two federal proposals could materially reshape the legal and operational environment for AI developers, deployers and platforms—highlighting the speed and breadth of AI regulation activity in 2026, even as states continue to pass legislation regulating all manner of AI technologies.

These major developments are: (1) the introduction of a discussion draft of the Trump America AI Act by Sen. Marsha Blackburn (R-Tenn.), and (2) the administration’s National AI Legislative Framework.

I. The Trump America AI Act – Sweeping Federal AI and Platform Reform Proposal

Blackburn has released a discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act (the “Trump America AI Act”). The bill is expansive in scope, combining AI-specific requirements with broader reforms to Internet liability, platform governance and digital infrastructure.

Key Elements of the Bill

Repeal of Section 230

  • The bill would repeal Section 230 of the Communications Act of 1934 (47 U.S.C. § 230) after a two-year transition period, reversing the broad immunity from liability for third-party content that social media platforms and other communications entities have historically enjoyed.
  • This change would significantly expand potential liability for online platforms, with downstream effects on moderation practices and platform design. In particular, companies may need to adjust moderation policies and methods or reduce reliance on engagement-driven ranking features.

Expanded Liability Framework

  • The bill introduces a negligence-based duty of care that requires AI developers to mitigate reasonably foreseeable harms arising from their systems.
  • The bill also establishes shared liability for both developers and deployers of AI systems and creates a private right of action, limiting the effectiveness of AI developers’ contractual liability waivers and portending significant litigation opportunities given the Section 230 repeal.

Incorporation of Related AI Legislation

The Act consolidates or incorporates elements of several other notable federal legislative proposals:

  • Key elements of the bipartisan Kids Online Safety Act (KOSA) (introduced May 14, 2025), which would impose heightened duties of care, safeguards, disclosures and transparency requirements for services likely to be accessed by those under 13, are incorporated in Title IV.
  • The Transparency and Responsibility for Artificial Intelligence Networks Act (TRAIN) Act (introduced July 24, 2025), which proposes to authorize administrative subpoenas to obtain information regarding AI training data, increasing transparency obligations and potential litigation exposure, is incorporated nearly verbatim in Title XIII.
  • The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act (introduced April 9, 2025), which proposes to create federal voice and visual likeness rights to address unauthorized AI‑generated replicas, deepfakes and synthetic media, is also a near-verbatim incorporation in Title XII.
  • The Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act (introduced Oct. 28, 2025), which proposes to establish restrictions related to harmful uses of AI companion systems, including age verification and limitations on use by minors (defined as under the age of 18), as well as broader duty-of-care and safety obligations, is also incorporated nearly verbatim in Title V.

Ideological Bias and Neutrality Provisions

  • Includes provisions aimed at mitigating ideological or viewpoint bias in certain AI systems, particularly those procured or used by the federal government.
  • Requires audits, documentation and training related to mitigation of ideological biases and ethics for high‑risk AI systems.

Copyright and Training Data Accountability

  • In addition to the TRAIN Act, the bill focuses on the provenance of training data content and the use of copyrighted materials in AI training and restricts certain unauthorized AI-generated derivative works.
  • If enacted, these provisions would influence ongoing litigation by making explicit in federal law that “the unauthorized ... copying ... of copyrighted material for ... training ... artificial intelligence shall not constitute fair use.” By contrast, the National AI Legislative Framework (discussed below) states that “the Administration believes that training of AI models on copyrighted material does not violate copyright laws” and leaves the issue to the courts.

Data Center and Infrastructure Cost Allocation

  • Includes provisions aimed at preventing AI-related infrastructure and energy costs from being shifted to consumers and ratepayers.

Federal Risk‑Based AI Management Framework

  • Establishes a federal framework for evaluating advanced AI systems that regulators deem to exceed capability, scale or risk thresholds.
  • Builds a flexible oversight mechanism centered on capability-triggered evaluation as the primary tool for monitoring frontier and high-impact systems.

Although the Trump America AI Act has not been formally introduced and is unlikely to advance in its current form, it provides a useful roadmap of policy directions that may shape future federal proposals and state-level legislation. Many of its provisions, particularly around liability, evaluation frameworks, youth safety and minimal training-data transparency, are consistent with themes that enjoy bipartisan support at the federal and state levels and thus are likely to reappear in narrower legislative proposals, even as the Administration pursues a broader strategy aimed at reducing fragmentation through federal preemption of state AI regulation.

II. Trump Administration’s National AI Legislative Framework

On March 20, 2026, the White House announced a new framework directing Congress to act (and not act) across seven administration priorities:

  • Protecting Children and Empowering Parents;
  • Safeguarding and Strengthening American Communities;
  • Respecting Intellectual Property Rights and Supporting Creators;
  • Preventing Censorship and Protecting Free Speech;
  • Enabling Innovation and Ensuring American AI Dominance;
  • Educating Americans and Developing an AI-ready Workforce; and
  • Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws.

As expected, the Administration’s National AI Legislative Framework signals a preference for a federally unified, light-touch regulatory approach to AI development and deployment, with an emphasis on reducing fragmentation across state laws. It also contemplates a continued role for existing federal agencies, including the Department of Commerce (DOC) and the Federal Trade Commission (FTC), in shaping implementation through guidance, coordination and enforcement priorities. At the same time, the framework does not contemplate blanket displacement of state authority. It expressly preserves room for states to enforce generally applicable laws in traditional state police-power areas such as child protection, fraud and consumer protection; to govern their own uses of AI in public services; and to retain authority over zoning and siting decisions affecting AI infrastructure. In addition, the framework pushes to accelerate federal permitting for AI infrastructure while aiming to avoid increases in electricity costs for ratepayers.

The framework also favors relying on existing sector-specific regulators and regulatory sandboxes rather than creating a new federal AI regulator, suggesting continued implementation through agencies with subject-matter expertise even as the Administration pushes for broader federal uniformity. Although the Administration’s framework does not itself create enforceable obligations, it provides an important signal about the likely direction of federal AI policy, particularly the possibility of expanded federal preemption, agency-driven oversight mechanisms and continued focus on AI effects on minors.

What This Means for AI Deployers

Together, these developments reflect that the federal government may take a more active role in controlling AI use in the U.S., including by preempting certain state laws that it views as onerous. The federal government appears focused on addressing liability protections while, to an extent, leaving states room to legislate specific AI use cases, likely perpetuating the current patchwork of state AI laws.

We are also awaiting two federal actions mandated by President Trump’s Executive Order (EO) on “Ensuring a National Policy Framework for Artificial Intelligence.” The EO was published in December 2025 and required two agencies to act within 90 days. That March 11 deadline has now passed, and we expect announcements from the DOC and the FTC.

We are closely monitoring these changes and will follow up with more detailed, industry‑specific analysis addressing how these proposals may affect sectors such as technology, health care, financial services, media and entertainment, and retail.