Anthropic and Pentagon Face Off in Court Over AI Safety Red Lines
A federal judge heard arguments on Monday in Anthropic's emergency lawsuit against the US Department of Defense, setting up what could become a landmark ruling on how far the government may go in punishing AI companies that refuse its use-case demands.
The backstory
The dispute began when Anthropic refused to allow the Pentagon to deploy Claude for two purposes: fully autonomous lethal weapons without human oversight and mass domestic surveillance. The DoD, arguing that Anthropic's conditions placed too much power in private hands, responded by formally designating the company a "supply-chain risk," a label typically reserved for foreign firms with ties to adversarial governments. It was the first time an American company had received the designation.
The designation bars defense contractors from working with the Pentagon if they use Claude. President Trump also ordered all federal agencies to drop Anthropic's tech within six months.
In court
Anthropic filed suit in federal district court in California, arguing that the designation violates both its First Amendment rights (by punishing the company for its AI safety stance, a viewpoint on a matter of public significance) and its Fifth Amendment protections. Multiple agencies have since cut ties: the General Services Administration terminated its OneGov contract, effectively removing Anthropic from all three branches of government.
Judge Rita Lin heard arguments on March 24. A ruling is expected within days.
Why it matters
This case could define whether a US administration can economically coerce AI developers into dropping safety guardrails by threatening to revoke government contracts. A ruling for Anthropic would set a floor of protection for AI safety policies; a ruling against it could reshape how frontier labs negotiate with the government on every future deployment.