Nearly every organization in the country, 97%, now depends on open source AI models. And 17% of the components inside those codebases cannot be tracked by standard audits. Hold those 2 numbers in your head at the same time. The first is cited as proof the system works. The second is proof the system has already escaped the people nominally running it. When adoption outpaces auditability by this margin, you do not have an innovation engine. You have a supply chain nobody can see the bottom of.

The technology itself is genuinely impressive: Alibaba's Qwen3.5-9B beats a model 13 times its size on benchmarks, Ai2's Olmo Hybrid achieves 2x data efficiency, and Mistral's 119-billion-parameter multimodal release runs on consumer infrastructure. I am not questioning whether open source AI produces good models. I am questioning who bears the cost when those models, embedded in the 87% of codebases that already contain at least 1 vulnerability, produce a systemic failure that nobody saw coming because nobody could see.

The Governance Gap Has a Number

Consider the 76% figure from the 2026 OSSRA report: at companies that formally banned AI coding assistants, 3 out of 4 employees use them anyway. This is not a minor compliance issue. It means the organizations deploying open source AI cannot enforce their own internal safety rules, let alone external ones that do not yet exist. When your workforce is covertly generating code with tools that inject untracked dependencies, your security posture is a fiction you maintain for the board deck.

The vulnerability count doubled year over year to 581 per codebase. The instinct among open source advocates is to call this a scaling problem, solvable with better tooling. Fine. Where is the tooling? Hugging Face now hosts 2 million public models. Robotics datasets exploded 23x in 2 years. NVIDIA ships single-command agent deployment runtimes. The acceleration is relentless. The governance infrastructure is voluntary, underfunded, and fragmented across jurisdictions that cannot agree on what a software bill of materials should contain.

Yes, open code is more auditable than closed code in theory. That is a fair structural advantage. But auditability only matters if someone is actually auditing, and at this velocity the ratio of new model releases to funded security reviews is nowhere near sustainable. The 62% of organizations stuck in the pilot phase are not evidence of organizational paralysis. They may be the only honest actors, the ones who looked at the governance gap and decided they were not ready.

Who Pays When the Supply Chain Breaks

Follow the incentives. NVIDIA, Alibaba, and Mistral release open weights because distribution builds their ecosystems: hardware sales, cloud compute, API upsells. The cost of a downstream breach does not appear on their balance sheets. It appears on the balance sheet of the hospital, the insurer, the municipality that embedded an unaudited model into a workflow 3 layers deep. The beneficiaries of open release and the bearers of open risk are different entities, separated by enough abstraction that accountability dissolves.

Agentic AI sharpens the problem. OpenDevin's 78,000 monthly users are running autonomous agents that execute code and access databases. AutoGPT completes 81% of assigned tasks. These are real capabilities. They are also real attack surfaces, deployed by developers whose organizations cannot track 17% of the components already in production.

What should happen is specific and achievable. Mandatory software bills of materials for any AI system deployed in critical infrastructure. Federally funded third-party security audits for the top 500 most-downloaded open source AI models on Hugging Face. Liability frameworks that connect the companies profiting from open release to the downstream consequences of unaudited deployment. Not restrictions on publishing weights. Restrictions on deploying them blind.
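The visibility an SBOM mandate would buy can be made concrete in a few lines. This is a hypothetical sketch, not any real standard's schema: it assumes an SBOM has already been parsed into a list of component records with an illustrative "origin" field, and simply flags the entries with no verifiable provenance, the slice today's audits cannot see.

```python
# Hypothetical sketch: flag SBOM components with no verifiable provenance.
# The field names ("name", "origin") are illustrative, not taken from any
# real SBOM standard such as CycloneDX or SPDX.

def untracked(components):
    """Return the components whose origin could not be established."""
    return [c for c in components if not c.get("origin")]

sbom = [
    {"name": "qwen3.5-9b-weights", "origin": "huggingface.co"},
    {"name": "tokenizer-fork", "origin": None},  # vendored, source unknown
    {"name": "agent-runtime-plugin"},            # no provenance field at all
]

missing = untracked(sbom)
ratio = len(missing) / len(sbom)
print([c["name"] for c in missing])  # the components an audit cannot trace
print(f"{ratio:.0%} of components untracked")
```

The query itself is trivial; the hard part is the part the mandate targets. An origin field only exists if someone upstream was required to fill it in, which is exactly what voluntary governance has failed to produce.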

The 97% adoption number is supposed to be the triumph. It is actually the measure of exposure. And exposure without visibility is just risk with better marketing.