Anthropic Sues Pentagon Over Unprecedented 'Supply Chain Risk' Blacklist
Anthropic has filed two federal lawsuits against the Department of Defense, escalating a clash over AI safety red lines into one of the most significant legal battles between a tech company and the U.S. government.
The conflict began when Anthropic CEO Dario Amodei refused to allow the company's Claude models to be used for mass surveillance of Americans or to power fully autonomous weapons without human oversight. Defense Secretary Pete Hegseth argued the Pentagon should have AI access for "any lawful purpose," with no limits imposed by private contractors.
After Anthropic held its ground, the Pentagon issued a "supply chain risk" designation on March 5, a label typically reserved for foreign adversaries. The designation forces any company or agency working with the Pentagon to certify that it doesn't use Anthropic's models. The General Services Administration then terminated Anthropic's "OneGov" contract, cutting off Claude access across all three branches of the federal government.
President Trump called Anthropic a "radical left, woke company" and directed all federal agencies to phase out its tools. Meanwhile, OpenAI moved quickly to fill the gap, announcing it had secured a Pentagon deal, a move that drew backlash from the developer community and pushed Claude to the top of the App Store.
Anthropic filed complaints in the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit on March 9, calling the Pentagon's actions "unprecedented and unlawful." The lawsuits argue the government violated the First Amendment and broke federal contracting law by skipping required notification and review procedures.
As of March 12, Anthropic has separately asked the appeals court for an emergency stay to block enforcement while the case proceeds. The company says the designation could jeopardize hundreds of millions of dollars in revenue.