The Coming AI Preemption War: Why “Federal AI Clarity” Is Mostly a Mirage

We are entering a period of messy regulatory conflict over artificial intelligence (AI), perhaps the most consequential any emerging technology has faced. With the December 11 Executive Order, the federal government has made its position clear: it intends to push back hard against state-level AI regulation. See Manatt's summary of the Executive Order.

For some, this may sound appealing. Washington is stepping in to clean up a chaotic state-by-state patchwork. The new AI Litigation Task Force is framed as a strike force that will neutralize the need to comply with aggressive regimes in places like California and Colorado and restore uniformity.

If you are a general counsel, founder, or venture capitalist in the AI space, be skeptical and take a longer-term view. This is not the beginning of clarity, but rather the start of years of legal wrangling and volatility.

Executive Ambition Meets Constitutional Limits

The Executive Order is a statement of intent, not a statute. Under the Supremacy Clause, federal law can preempt state law. Executive Orders do not do that on their own. Congress, once again, failed to deliver a comprehensive federal AI framework in 2025, which means the administration is left to litigate around the edges.

That litigation strategy relies almost entirely on implied preemption and constitutional theories that are powerful in the abstract but fragile or untested in application.

There are three main pillars.

  • The First Amendment. The Task Force is eyeing laws like the Colorado AI Act, effective June 2026, and California’s Transparency in Frontier AI Act. The theory is that requirements aimed at preventing algorithmic discrimination or mandating “truthful” outputs amount to compelled speech. The Department of Justice (DOJ) will argue that model outputs are expressive speech and that state-mandated guardrails are content-based speech restrictions that cannot survive strict scrutiny under the First Amendment.
    • Counterpoint: While some courts have begun to recognize that algorithmic outputs may have expressive qualities, there is no clear line extending full First Amendment protection to regulatory obligations aimed at safety, transparency or civil rights compliance. States will argue, with some force, that these laws regulate conduct with incidental effects on speech, a framing that has historically survived constitutional challenge. That debate alone is not close to resolution.
  • The Dormant Commerce Clause. This is the blunt instrument. The argument is that AI development is inherently interstate and global. You cannot meaningfully train or deploy a model in a state-contained way. Under Pike v. Bruce Church, any state law with the practical effect of regulating conduct beyond its borders can be struck down if it unduly burdens interstate commerce. Expect DOJ to argue that most state AI regimes fail that test.
    • Counterpoint: In theory, Pike favors challengers. In practice, it is a fact-intensive balancing test that often favors states. Courts routinely defer when states plausibly characterize laws as consumer protection, civil rights or public safety measures. States will argue that any extraterritorial effects are incidental and not the law’s practical purpose. That might be enough to defeat early challenges and push cases deep into discovery and appellate review.
  • Compute Thresholds. California’s Senate Bill (SB) 53 hinges on training-scale triggers measured in total floating-point operations used in training. The federal response will be that these thresholds are arbitrary and strategically harmful. The argument will be that they single out large U.S. developers while ignoring smaller actors who may pose comparable risks, and that state-level compute triggers intrude on federal national security and industrial policy prerogatives.

None of these arguments is frivolous, and none is close to settled law.

The 2026 Compliance Trap

While DOJ staffs up and sharpens its briefs, state regulation is not pausing. As of early 2026, the map is already more fragmented—not less.

California’s SB 53 is live. Large frontier developers with significant revenue must publish safety frameworks and report certain critical incidents on timelines that can be as short as 24 hours.

At the same time, the administration is floating the use of federal funding leverage, including grants for broadband internet access, to pressure states to soften AI laws. That move is almost guaranteed to trigger immediate litigation from state attorneys general. NFIB v. Sebelius looms large here, and states will argue coercion without hesitation.

In Sebelius, the Supreme Court held that Congress crossed a constitutional line by threatening to withhold existing Medicaid funding from states that refused to expand coverage under the Affordable Care Act. The Court characterized that threat as coercive, rather than voluntary, describing it as a “gun to the head.” The key takeaway was that financial inducement becomes unconstitutional when states are left with no real choice.

That precedent favors states in the AI context. If the federal government conditions essential, previously allocated infrastructure funding on dismantling state AI regimes, state attorneys general will sue immediately and argue that broadband funding has, at best, a tenuous connection to AI consumer protection or civil rights laws, and that the threat amounts to unconstitutional coercion rather than permissible encouragement. Courts are likely to take those arguments seriously, particularly in the absence of clear congressional authorization.

None of this will resolve quickly absent congressional action.

Compliance Purgatory Is the Default State

For companies building and deploying AI systems and investors putting capital into the space, this dynamic creates compliance purgatory.

If you pause California safety workstreams or Colorado impact assessments on the assumption that federal preemption is imminent, you are wagering your company on litigation timelines that historically stretch for years. State attorneys general are not parties to the federal task force, and their enforcement authority remains intact. Penalties are real and, in some cases, severe.

Supreme Court resolution, if it comes at all, is a multi-year proposition. Until then, enforcement risk lives at the state level.

The Bottom Line

Federal preemption today is a legal argument, not an operating environment.

For at least the next 18 to 24 months, companies are not choosing between federal regulation and state regulation. They are operating in a contested zone between the two. The companies that do best will not be the ones betting on Washington or Sacramento. They will be the ones building modular, defensible governance structures that can adapt as the litigation plays out.

Mistaking a federal press release for settled law is the fastest way to get blindsided.

If you have any questions regarding the EO or broader matters concerning AI strategy, please contact your Manatt relationship partner or any member of our .