Sam Altman announced a classified military AI deal on February 28, 2026, and then clarified its terms two days later via a post on X. Not a congressional hearing. Not a Federal Register notice. A tweet. That is the current state of public oversight on a contract worth up to $200 million that puts frontier AI on Pentagon networks.
I cover consumer tech, so I know what it looks like when a company ships a product with a half-finished safety story. You get a chatbot that confidently invents prescription drug interactions, or a smart lock that a kid with a paperclip can open. The fix is always the same: someone outside the company has to actually check.
"We Promise" Is Not a Guardrail
OpenAI's safety stack for this deal has three stated limits: no mass domestic surveillance, no directing of autonomous lethal weapons, and no high-stakes social-credit decisions, all running in a cloud-only deployment with cleared engineers in the loop. Those sound reasonable. The problem is that nobody outside OpenAI and the Pentagon can verify any of them. The contract text is undisclosed. The enforcement mechanism is Pentagon-interpreted law. When the law changes, the guardrail moves with it, and neither OpenAI nor the public gets a veto.
Anthropic, for all its holier-than-thou energy in recent weeks, had the right instinct: vendor-enforced redlines baked into the contract itself. The Pentagon called that a supply-chain risk and told federal agencies to stop using Anthropic's products. Dario Amodei called OpenAI's approach "safety theater." That's a CEO beef, sure, but the underlying point stands up.
To be fair to OpenAI: classified environments have legitimate reasons to limit what gets disclosed publicly, and some form of internal oversight is better than none. I get that. But "better than nothing" is a terrible bar for a military AI deployment in 2026.
When the Process Is a Post on X
Here's what bothered me most. The deal was negotiated in "just a few days," per Altman himself. Then 98 of his own employees signed a solidarity letter backing Anthropic. Caitlin Kalinowski, who ran OpenAI's hardware division, resigned over domestic surveillance concerns on March 7. The company's response to all of this was a memo posted publicly on social media.
That is not a governance process. That is crisis comms.
The pressure that actually forced amendments to this deal was public backlash, not any formal review mechanism. Claude hit No. 1 in the App Store because consumers were spooked enough by OpenAI's announcement to switch apps. App Store rankings should not be how we audit military AI contracts.
Senators Markey and Van Hollen sent a warning letter on February 27 about coercive Pentagon procurement tactics. That was a start. Congress should go further: require an independent technical auditor with security clearance to review any AI deployment of this scale, publish redacted contract terms with enforcement triggers, and hold hearings before the next deal gets inked, not after.
The precedent being set right now is that self-certification is enough. Every AI company watching this will take note. If OpenAI can close a classified Pentagon contract in a few days with guardrails nobody can verify, why would any competitor hold out for stricter terms? You'd just get labeled a supply-chain risk and lose federal revenue.
Congress has maybe one cycle to draw a line here before "trust us" becomes permanent policy. I've spent enough money on gadgets with great spec sheets and broken follow-through to know: if nobody's checking, nobody's checking.