The morning of the Pentagon deal, Sam Altman posted that he shared Anthropic's red lines on autonomous weapons and mass surveillance. Hours later, OpenAI signed a classified military contract with no standalone prohibitions on either. That is not a nuanced position. That is a press release and a contract that contradict each other before lunch.
Here is what actually happened. Anthropic had a $200 million Pentagon contract with explicit bans baked into the terms: no mass domestic surveillance, no fully autonomous weapons. Defense Secretary Pete Hegseth issued a January 2026 memo demanding an "any lawful use" clause. Anthropic said no. The Pentagon canceled the contract and banned Anthropic government-wide. OpenAI stepped in within hours. xAI got approved the same week with what has been described as "blank check" access for Grok in classified settings.
So the question is not whether OpenAI crossed some abstract ethical line. The question is whether "any lawful use" actually protects anyone, and the answer from Jessica Tillipman at George Washington University is pretty clear: it does not. OpenAI's terms give the company no free-standing right to refuse an otherwise-lawful government use of its models. Laws change. Administrations change. What is lawful in 2026 under Pete Hegseth is not the same universe as what was lawful in 2022.
The Part Where I Give the Other Side a Fair Shot
The pragmatist argument is not stupid. If OpenAI walks away, xAI and its blank-check Grok fill the gap immediately, with zero guardrails and zero public accountability. At least OpenAI has a public-facing safety team and some reputational skin in the game. I get it. Except that logic only works if OpenAI's "layered protections" are actually layers and not just vibes. Once a model is running on a classified network, its maker goes blind: Anthropic's own April 22 court filing admits it has no ability to monitor or control what happens to Claude there, and OpenAI's models are in the same position. The protections disappear exactly where the risk is highest.
The thing that bugs me most, as someone who writes about tech for people who actually buy and use it: this is the AI equivalent of a company advertising "no hidden fees" and then burying the fees in the terms of service. Altman's public statement about red lines was marketing. The contract was the product. They were not the same product.
What Should Actually Change
OpenAI should publish its classified deployment terms, or at minimum a public summary of what the Pentagon can and cannot do with its models. Full stop. If those terms include real prohibitions on autonomous targeting and domestic surveillance, say so explicitly. If they do not, stop telling journalists you share Anthropic's values.
The "QuitGPT" backlash and internal employee dissent are real, but consumer pressure alone will not fix this. Congress needs to require that any AI company with classified military contracts disclose the ethical guardrails in those contracts, even in redacted form. The public paid for the Pentagon's budget. They deserve to know whether the AI running on it can be pointed at them.
Anthropic's legal challenge, filed April 22 in the U.S. Court of Appeals, argues the ban is politically and commercially motivated. That might be true. The Trump administration is already softening on Anthropic as agencies push back. But the lawsuit does not fix the underlying problem: right now, the only company that tried to write enforceable limits into a military AI contract got banned for it, and the companies that said "whatever you need" got the work.
Altman said he had red lines. He should prove it before the next memo changes what "lawful" means.