Federal Judge Rita Lin ruled in favor of Anthropic on Thursday, granting a preliminary injunction that blocks the US Department of Defense from treating the AI company as a national security supply-chain risk. In her ruling, Judge Lin wrote that "punishing Anthropic … is classic illegal First Amendment retaliation."

Background

The dispute began after Anthropic refused to allow the Pentagon to use its Claude model for autonomous weapons systems and mass surveillance applications. The DoD responded by formally designating Anthropic as a supply-chain risk, a designation Anthropic argued was government retaliation for exercising its free speech rights to decline certain use cases.

Anthropic filed an emergency lawsuit and sought an injunction to block the designation while the case proceeded. A hearing before Judge Lin in the Northern District of California took place on March 24, with both parties presenting arguments.

The Ruling

Judge Lin sided with Anthropic, finding the company had demonstrated a likelihood of success on its First Amendment retaliation claim. The court found the Pentagon's designation was tied directly to Anthropic's refusal to comply with government use-case demands, not to any genuine security concern.

The ruling is a significant early win for Anthropic, though the underlying lawsuit continues. The preliminary injunction means the supply-chain risk designation cannot take effect while litigation proceeds.

Broader Stakes

The case has drawn wide attention as a test of whether AI companies can be compelled, or coerced, by government agencies into enabling applications that conflict with their stated safety policies. A final ruling could set a precedent for how the government interacts with AI developers over use-case compliance.