On January 3, 2026, Anthropic's Claude was used to help plan a raid into Venezuela. Nobody at Anthropic had validated the model for that purpose. The Pentagon later cited this incident as part of its justification for designating Anthropic a supply-chain risk. Think about the sequence: the military used a tool for a purpose its maker never validated, then punished the maker for the consequences of that use. OpenAI looked at this chain of events and signed up anyway. That tells you more about the deal than any safety stack ever could.

OpenAI's agreement includes stated prohibitions on autonomous weapons and mass surveillance of Americans. It specifies cloud-only deployment so model weights stay under OpenAI's control, a proprietary safety stack the Pentagon contractually cannot override, and cleared engineers embedded on-site. These are real engineering choices, and I grant they represent a more serious approach to military AI constraints than anything the defense sector has seen before. But the question is not whether the controls are well-designed. The question is what happens the first time they block something a four-star general needs done by morning.
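For readers who want a concrete picture of what "a safety stack the client cannot override" means architecturally, here is a minimal, purely illustrative sketch in Python. Every name in it (PolicyGate, BLOCKED_CATEGORIES, run_model) is hypothetical and assumes nothing about OpenAI's actual implementation; the point is only that a policy check running on the provider's own servers, ahead of the model, is not something a cloud-only client can switch off.

```python
from dataclasses import dataclass

# Hypothetical categories the provider refuses to serve, per contract.
BLOCKED_CATEGORIES = {"autonomous_weapons", "mass_surveillance_us_persons"}


@dataclass
class Request:
    prompt: str
    declared_use: str  # use category attached to the request


def run_model(prompt: str) -> str:
    # Stand-in for the hosted model call; hypothetical.
    return f"model output for: {prompt!r}"


class PolicyGate:
    """Runs on the provider's infrastructure, in front of the model.

    Because the check executes server-side and the weights never leave the
    provider's cloud, the client cannot bypass the gate or rehost the model
    without it.
    """

    def allowed(self, request: Request) -> bool:
        return request.declared_use not in BLOCKED_CATEGORIES

    def handle(self, request: Request) -> str:
        if not self.allowed(request):
            return "REFUSED: contractually prohibited use category"
        return run_model(request.prompt)


if __name__ == "__main__":
    gate = PolicyGate()
    print(gate.handle(Request("summarize the logistics report", "analysis")))
    print(gate.handle(Request("select strike targets autonomously", "autonomous_weapons")))
```

The fragility the rest of this piece is about lives outside that code: the gate can be renegotiated, and BLOCKED_CATEGORIES is only as durable as the contract that defines it.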

The Contract That Ate the Last Contract

Defense Secretary Pete Hegseth issued Anthropic an ultimatum on February 24: grant unrestricted access to your models by the end of the week, or we sever ties. Anthropic refused. Within days, the Pentagon canceled Anthropic's $200 million contract, applied a supply-chain risk designation typically reserved for adversarial foreign suppliers, and pivoted to OpenAI.

Hegseth's stated rationale was that Anthropic attempted to "seize veto power over military operations." Reread that. A company negotiating contractual limits on how its product could be used, a normal feature of enterprise software licensing, was reframed as an act of insubordination against the military itself. The message to every other AI vendor is unambiguous: your red lines are tolerated until they become inconvenient.

OpenAI says its safety stack cannot be overridden. But override is not the only failure mode. Renegotiation is. Reinterpretation is. Hegseth's January memo mandating "all lawful use" of AI already voided Anthropic's negotiated restrictions on lethal autonomous weapons. OpenAI's current prohibitions exist inside the same legal and political environment that dissolved Anthropic's. The architecture is different. The incentive structure pressing against it is identical.

Who Audits the Auditors

The embedded engineers are the most interesting feature of the deal, and also the most fragile. OpenAI employees with security clearances, sitting inside classified facilities, are supposed to serve as a live enforcement mechanism. But those engineers work for a company that just guaranteed private equity partners a 17.5% minimum return on a joint venture tied to this contract. Their employer's revenue depends on the Pentagon staying happy. What does "enforcement" look like when the client you're supposed to constrain is also your employer's biggest customer?

Pentagon research chief Emil Michael raised his own concerns on March 6 about dependency on any single AI tool, warning of model poisoning, hallucinations, and rogue developers. An Air Force Research Laboratory paper published this month in Cell found that large language models homogenize military thinking and erode critical analysis. The Pentagon's own researchers are flagging cognitive risks from exactly the kind of rapid, under-validated AI deployment that this deal accelerates. Nobody appears to be listening.

Brookings scholar Stephanie Pell called the whole episode "a terrible way to make public policy," decided through a personal fight between Dario Amodei and the defense establishment. She is right, but the problem runs deeper than process. The structural dynamic now in place is this: the Pentagon has demonstrated, publicly, that it will destroy a vendor's federal business for maintaining safety constraints. Every future negotiation between a frontier AI company and the Department of Defense will happen in the shadow of that demonstration.

OpenAI's controls may hold for a year. Maybe two. But the company now operates inside a relationship where the client has already shown, with Anthropic, exactly what happens to partners who refuse to bend. Cloud-only deployment protects model weights from tampering. It does not protect a company's judgment from the slow, compounding pressure of financial dependence on the one client that never loses an argument.