On the weekend before March 8, Pete Hegseth designated Anthropic a "supply chain risk." The practical consequence: DOD contractors like AWS and Google Cloud are now barred from commercial dealings with Anthropic. If that holds, Anthropic's models go offline. No chips, no data centers, no product. Alex Karp cheered from the sidelines and called it inevitable.
This is not nationalization. Not yet. What it is, though, is more dangerous: a template for making compliance the only economically rational choice.
How You Destroy a Company Without Seizing It
Nationalization requires legislation, legal battles, and political capital. Deplatforming via supply chain designation requires a secretary's signature. The Pentagon watched what AWS did to Parler after January 6 and took notes. You don't need to own the company. You just need to own its infrastructure dependencies.
Anthropic's actual position is not that radical: limits on autonomous weapons and mass surveillance. Every major AI firm signed DOD contracts with some version of those same restrictions, OpenAI, Google, and xAI included. The clause that does the real work is the Pentagon's demand for "all lawful purposes" access, because what counts as lawful is whatever the current administration says it is.
Jessica Tillipman, a procurement law professor at GWU, said it plainly: the idea that a contractor cannot restrict government use of its products "reflects a fundamental misunderstanding of how government procurement law works." Karp's nationalization prediction is not a legal analysis. It is a sales pitch for Palantir.
Here is the tension I'll admit: Anthropic took Pentagon money through Palantir partnerships while Claude was already running in U.S. military operations abroad. You don't get to collect the revenue and claim clean hands when the terms get renegotiated. That part of the story is genuinely murky. But murky ethics on Anthropic's side do not make the government's coercion strategy legitimate.
Why OpenAI's Acquiescence Is the Actual Story
OpenAI signed a classified DOD contract hours after Hegseth's designation and claimed the deal has "better guardrails" than Anthropic's original one. Sam Altman held an X AMA and called the designation unfortunate while pocketing the contract. Google and xAI said nothing useful. The industry fractured instantly under financial pressure.
That fragmentation is what the government is counting on. A united front from every major AI lab would look different: no DOD contracts without codified use restrictions, reviewed independently, published openly. Instead, every company is calculating whether being the last holdout costs more than the contract is worth. Anthropic is now the cautionary tale the others point to while signing.
Congress should clarify what "supply chain risk" authority actually covers for domestic companies. That term has a legal history in hardware procurement and semiconductors; applying it to software guardrails from a U.S. company building a U.S. product is a category stretch that deserves a statutory answer, not an executive improvisation. Someone on the House Armed Services Committee should be asking this question out loud.
Karp predicted all AI firms will cooperate with the military within three years or face seizure. He may be right about the timeline. He is wrong about the mechanism. You won't need seizure if you've already made every alternative to cooperation structurally fatal.
The repo doesn't lie: every major model is already running on military infrastructure in some form. The question is who gets to set the terms. Right now, no one in Silicon Valley is willing to press that question together.
Talk is cheap. Show me the repo.