Cloud-only deployment. A proprietary safety stack the Pentagon contractually cannot override. Cleared OpenAI engineers embedded on-site as a live enforcement mechanism. Those are the terms Sam Altman secured in the March 20 agreement to put OpenAI models on classified networks. I've read enough vendor security whitepapers to know when someone is dressing up a checkbox exercise. This isn't that. These are architectural choices, and they're the right ones.

The obvious counterargument is that no contract survives a determined state actor. Audrey Liang will make that case, and it's a fair point. Governments have a long history of pressuring vendors into quiet compliance. But the OpenAI deal doesn't rely on contract language alone. It relies on architecture. Cloud-only means no local copies of model weights sitting on Pentagon hardware where they can be fine-tuned or interrogated without OpenAI's knowledge. The safety stack being unoverridable means it's not a policy toggle some GS-15 can flip. And embedded engineers aren't auditors who show up quarterly. They're in the room.
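To make that "policy toggle" point concrete, here's a minimal sketch in Python. None of it is OpenAI's actual code, and every name in it (`screen`, `generate`, `SAFETY_ENABLED`) is mine; it just shows the difference between a safety check gated on a config flag and one wired into the serving path with no flag to find.

```python
# Hypothetical sketch, not OpenAI's code. screen() and generate() are
# stand-ins for a real safety classifier and a real model.

def screen(prompt: str) -> bool:
    """Toy safety classifier: block a couple of example categories."""
    blocked = ("target selection", "domestic surveillance")
    return not any(term in prompt.lower() for term in blocked)

def generate(prompt: str) -> str:
    """Toy model call."""
    return f"[model output for: {prompt}]"

# The policy toggle: a config value someone with admin access can flip,
# after which the model serves unscreened output.
SAFETY_ENABLED = True  # imagine this living in a settings table

def serve_with_toggle(prompt: str) -> str:
    if SAFETY_ENABLED and not screen(prompt):
        raise PermissionError("blocked by safety policy")
    return generate(prompt)

# The architectural version: the check is unconditional. No flag, no
# environment variable, no admin endpoint. Turning it off means shipping
# different code.
def serve(prompt: str) -> str:
    if not screen(prompt):
        raise PermissionError("blocked")
    return generate(prompt)
```

The second version isn't more virtuous, just harder to subvert: disabling it means changing the code itself, and on a cloud-only deployment the customer never holds the code or the weights.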

Architecture Over Abstinence

Anthropic chose a different path: refuse the Pentagon's terms, hold firm on red lines around surveillance and autonomous weapons, and get blacklisted for it. I respect the principle. Dario Amodei's public commitment to those red lines is genuine, and the employee open letter from Google DeepMind and OpenAI staff calling for industry solidarity wasn't performative. Those people meant it.

But principle without presence is just a press release.

The Pentagon canceled Anthropic's $200M contract and designated it a supply-chain risk. That designation, typically reserved for adversarial foreign suppliers, is now the subject of a federal lawsuit. Whether Anthropic wins in court or not, the practical result is the same: Anthropic's models are being phased out of the classified systems used in the Iran theater over the next 6 months. The work doesn't stop. Someone else picks it up. The question was always going to be who, and under what constraints.

OpenAI's answer: us, with more guardrails than any previous classified AI deployment. Stated red lines against autonomous weapons. Stated red lines against mass domestic surveillance. Cloud-only so model weights stay under OpenAI's control. A safety stack that can't be patched out by the customer. If you've ever built a SaaS product for a large enterprise client, you know the difference between "we have a policy" and "the API literally won't let you do that." OpenAI chose the latter.
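In miniature, the latter looks something like this. Again a hypothetical sketch with invented names (`CompletionRequest`, `handle`), nobody's real schema: the request type simply has no override field, so "turn off the safety stack" isn't a denied request, it's an unrepresentable one.

```python
# Hypothetical API surface, invented for illustration. The interesting
# part is what the request schema does NOT contain.
from dataclasses import dataclass

@dataclass(frozen=True)
class CompletionRequest:
    prompt: str
    max_tokens: int = 256
    # No disable_safety, no policy_override, no trust_level. There is
    # no switch for a customer to flip, because the schema never
    # exposes one.

def handle(raw: dict) -> CompletionRequest:
    allowed = {"prompt", "max_tokens"}
    unknown = set(raw) - allowed
    if unknown:
        # Reject rather than silently ignore: an attempted override
        # fails loudly and leaves a trail.
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return CompletionRequest(**raw)

# handle({"prompt": "summarize the logistics report"})      -> ok
# handle({"prompt": "...", "disable_safety": True})         -> ValueError
```

A policy lives in a memo. This lives in the type system.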

The Vacancy Problem

The CDAO originally awarded $200M ceiling contracts to OpenAI, Anthropic, Google, and xAI for agentic AI workflows in national security. With Anthropic out, that's 3 vendors. One of them is xAI, whose safety record I wrote about 2 weeks ago. It's not great. If you care about responsible AI in military contexts, the worst outcome isn't OpenAI taking the contract with architectural controls. The worst outcome is xAI taking it with none.

Builders know this dynamic. You can refuse the sketchy client and feel righteous about it. But the sketchy client still needs the work done, and the next contractor in line might not even ask about error handling. Sometimes the most responsible thing is to take the job and enforce your own standards in the implementation.

I'm not naive about what the Pentagon wants long-term. Pete Hegseth's language about Anthropic "seizing veto power over military operations" tells you exactly how the defense establishment views AI safety constraints: as obstacles. The pressure on OpenAI to relax those architectural controls will be constant and intense. The March 17 AWS GovCloud partnership for distributing OpenAI models to federal agencies means the footprint is growing fast.

But the alternative to engagement with controls is absence without controls. OpenAI shipped a deal where the enforcement mechanism is baked into the infrastructure, not stapled to a memo. That's how you build safety that survives contact with a powerful customer. Not by walking away from the table, but by making the table itself load-bearing.