On February 24, 2026, Defense Secretary Pete Hegseth gave Anthropic a choice: remove its restrictions on lethal autonomous weapons and domestic mass surveillance, or lose everything. Anthropic said no. Nine days later, the Pentagon designated it a "supply chain risk" and ordered 18 federal agencies to immediately stop using Claude. The government's stated reason was fear of future sabotage. The timeline tells a different story.

U.S. District Judge Rita Lin read that timeline carefully. Her 43-page preliminary injunction, issued March 26, called the government's actions "Orwellian" and "arbitrary and capricious," found likely First Amendment retaliation, and blocked enforcement. The Justice Department's own lawyer had argued in court that the designation was based on fears Anthropic might "manipulate" its software or install a "kill switch." The company had maintained the same safety restrictions since March 2025, during which time the Pentagon awarded it a $200 million contract, granted Top Secret facility clearance, and deployed Claude on classified systems supporting the ongoing war with Iran. The sabotage theory arrived precisely when Anthropic refused to expand military use. Coincidences that convenient deserve scrutiny.

The Ultimatum Was the Policy

Hegseth's January 2026 memo mandated "all lawful use" of AI across the Defense Department, explicitly voiding prior restrictions on lethal autonomous warfare and mass domestic surveillance. That memo was not a procurement guideline. It was a loyalty test distributed to every AI vendor with a government contract. OpenAI, which took the Pentagon deal Anthropic declined, now shows what compliance looks like.

Dario Amodei told CBS in late February that Anthropic is "a good judge of what our models can do reliably and what they cannot do reliably." That is a reasonable position for a company that built the system. The Pentagon's counter-position, implicit in the Hegseth ultimatum, is that the vendor's judgment about its own product's failure modes is irrelevant to procurement decisions. The government wants the capability and considers the safety assessment an obstacle, not information.

I'll grant the military's point that federal law already prohibits certain uses of AI in warfare. But a statutory prohibition and a vendor-level technical restriction are not the same thing. One requires a human to decide not to cross a line; the other makes crossing it harder by design. The Pentagon's insistence on removing the second layer suggests it wants the option, not just the capability.

Who Pays When the Compliant Vendor Wins

The practical outcome of the Hegseth ultimatum, if it had survived judicial review, would have been a defense AI market sorted by willingness to remove safety restrictions. Anthropic, valued at $380 billion and the only AI firm with classified deployment before this dispute, would have been effectively expelled from public sector work. The vendors who remained would be the ones who said yes. That is not a procurement outcome. That is a selection mechanism for a particular kind of AI development culture, one where the customer's demand for unrestricted capability overrides the builder's assessment of risk.

Judge Lin's injunction holds for now. But the administration's legal theory, that a company's public safety stance constitutes a supply chain threat, is still being argued in a separate D.C. appeals court case. Congress has not moved to prohibit viewpoint-based procurement blacklisting. The executive branch still controls the contract.

Anthropic didn't lose the Pentagon contract. The Pentagon tried to make an example of a company that said its AI shouldn't decide who dies. The judge stopped it. The question is whether anyone in Congress noticed what was being attempted.