A single Anthropic announcement on February 2 erased $285 billion in tech market value within 24 hours. None of that value belonged to Anthropic. It came out of the SaaS vendors whose customers suddenly looked like they might leave. The businesses that had built workflows on top of those SaaS vendors were not in the headline at all. They rarely are, until the tool they depend on disappears.
This is the structure of AI dependency risk, and most businesses are treating it as a footnote. You adopt a tool, you restructure around it, you let go of the people who used to do what the tool now does, and then you discover that the tool's continued existence was never your decision to make.
The Knowledge You Fired Doesn't Come Back
The NBER survey of 750 CFOs, published March 24, projects 502,000 AI-driven job cuts in 2026, up from 55,000 in 2025. That is not workforce optimization. It is a bet, placed by executives who do not control the odds. Jason Averbook put it plainly: when you eliminate people before the tools are ready, you lose institutional knowledge, and if the productivity gains don't materialize, you cannot easily rehire what you let go.
The same survey found that AI tools are currently increasing task time by up to 346% for some workers. Companies are cutting staff to fund tools that are, in measurable cases, making their remaining employees slower. The incentive to announce AI adoption is not the same as the incentive to make AI adoption work, and the people absorbing the cost of that gap are the ones who got laid off first.
Aon's analysis of third-party AI risk is worth reading carefully: misconfigurations and architectural weaknesses across fast-scaling AI platforms expose organizations to outages, data leakage, and loss of service integrity. The concentration risk here is worse than traditional SaaS lock-in, because at least legacy SaaS vendors had stable business models. AI platforms are still discovering theirs.
Dependency Without Governance Is Just Exposure
Cybersecurity analysts warned on March 25 that shifting budgets toward AI services risks degrading the core defenses organizations fall back on during outages. That is the second-order effect most deployment decisions ignore: the thing you stopped funding to pay for the AI tool is exactly what you need when the AI tool fails.
The EU AI Act's full compliance requirements for high-risk systems activate on August 2, 2026. Hiring models, credit models, systems that make consequential decisions about people: all of them will require documented governance. Most companies that adopted these tools quickly have no such documentation. The regulatory deadline is not the risk; the risk is the absence of internal accountability that the deadline exposes.
I'll grant the optimists one point: Jensen Huang is probably right that AI agents will use software rather than replace it entirely, and the SaaSpocalypse panic was overblown. But that argument addresses market valuation, not operational fragility. A business whose core workflow runs through a single AI vendor is exposed no matter how the valuation debate resolves.
The questions every operator should be asking are not whether their AI tool works today. They are who made the decision to depend on it, what the exit plan is if the vendor changes its pricing model, shuts down a feature, or gets acquired, and whether the people who could answer those questions still work there. Most companies cannot answer all three. That is not a technology problem. That is a governance problem that technology made easier to ignore.