On April 8, the D.C. Circuit Court of Appeals sided with the Pentagon over Anthropic, and the key phrase in the ruling was this: "a relatively contained risk of financial harm to a single private company." A $200 million contract, a blacklisting under the same statute used against Huawei, and the only AI model cleared for classified Pentagon systems, all reduced to a contained risk. The court's framing tells you who holds power here and how little the company's objections weigh against it. That asymmetry is the story, not whether Anthropic played its hand well.
Anthropic refused two specific things: AI-selected drone strikes without human oversight, and bulk geolocation surveillance of American citizens. These are not exotic hypotheticals. Vanessa Vos at Bundeswehr University Munich confirmed the Pentagon sought to expand AI into kinetic operations, including target selection. The refusal was a floor, not a ceiling. Any AI company that cannot hold that floor has no business claiming it takes safety seriously.
What 'Lawful' Means When Nobody's Watching
OpenAI signed a deal permitting its models for "all lawful purposes." Sam Altman announced it the same day Anthropic was blacklisted. The phrase sounds reasonable. It is not.
"Lawful" is not static. Executive orders redefine it. Classification regimes hide it. Bulk telephone metadata collection under Section 215 of the Patriot Act was considered lawful for years before Edward Snowden revealed in 2013 what it meant in practice. The AUMF passed in 2001 has been stretched to justify military operations in countries Congress never debated. "Lawful" is a word that expands in the dark, and classified Pentagon programs operate in permanent darkness.
Once an AI vendor signs an unrestricted contract and its engineers receive security clearances, the feedback loop closes. The company cannot publicly disclose what its models are being used for. It cannot refuse a specific application without breaching the contract. It cannot even tell its own board the details. The 37 OpenAI and Google DeepMind researchers who filed an amicus brief supporting Anthropic understand this: legal safeguards do not prevent mission creep inside classified programs, because mission creep is invisible by design.
The Incentive Structure After the Signature
Grant that Anthropic's refusal handed the contract to a less restrictive competitor. That is a real cost. But trace the incentive structure forward. OpenAI now depends on Pentagon revenue. Pentagon contracts come with renewal cycles, expansion opportunities, and political relationships that shape a company's priorities from the inside. OpenAI's business model will increasingly reward compliance with military demands, not resistance to them. Every quarter that revenue grows, the cost of saying no to any specific request rises.
Anthropic, locked out of defense work, retains the ability to set terms. That ability has a value the D.C. Circuit cannot quantify.
The argument that Anthropic should have stayed at the table and negotiated narrower restrictions assumes the Pentagon was negotiating. It was not. The demand was "unrestricted use for all lawful purposes." The timeline from demand to blacklisting was days. In early April, Treasury Secretary Scott Bessent praised Anthropic's Mythos model as a cybersecurity tool against China, weeks after the Pentagon declared the same company a national security risk. The government is not speaking with one voice. It is speaking with the voice that has the most coercive power at any given moment.
Who benefits from an "any lawful use" standard? The institution that defines what lawful means. Who pays the cost? The people whose data, movements, and lives fall inside that definition, with no mechanism to know it happened. The question is not whether AI belongs in defense. It does. The question is whether the terms of that deployment should be set entirely by the buyer, with no contractual constraint beyond the buyer's own interpretation of the law.
If the answer is yes, then the phrase "responsible AI" is marketing copy, and we should stop pretending otherwise.