On February 17, 2026, a federal judge in New York held that a criminal defendant had no reasonable expectation of privacy in his AI platform communications because the provider's terms of service reserved the right to share them with the government. The FBI had seized 31 documents; the court said the user accepted that risk the moment he clicked "agree." The AI company faced no consequence at all.
That ruling is not just about privilege. It is a map of where liability lives when AI systems mishandle your data: as close to the user as the contract can possibly push it, and as far from the vendor as the lawyers can arrange.
The Clause That Absolves Everyone
AI vendors have built something elegant and very convenient: a liability architecture in which the agent appears to act autonomously, creating confusion about accountability, while the terms of service make clear that the user accepted all risk, resolving that confusion entirely in the vendor's favor. The gap between those two facts is where your data disappears without anyone owing you anything.
When an AI agent deletes, corrupts, or exposes your data, the legal question is not what the agent did. Courts and regulators do not recognize AI systems as legal persons. The question is which human or corporate actor failed: the vendor who built a defective product, the organization that deployed and configured it, or the professional who chose a tool incompatible with their duty of care. Right now, each of those parties has a credible argument that the failure belongs to someone else.
The U.S. military understood this intuitively. When the Pentagon terminated its $200 million contract with Anthropic in early 2026, the stated reason was that Anthropic's logging and data-access practices were incompatible with operational security. The implicit reasoning was more important: if something went wrong with sensitive data, accountability would land on the contractor who chose the tool, not on Anthropic. The military decided that was not a trade worth making.
Most organizations are not making that calculation. They are deploying AI agents to organize compliance documents, manage client files, and process sensitive records, while assuming the vendor's enterprise pricing tier somehow transfers the legal risk. It does not.
Who Pays When the Agent Is Wrong
The EU's revised Product Liability Directive, which member states must transpose by December 2026, is the most serious attempt to close this gap. It treats software and AI as products, holds manufacturers and certain deployers liable for defects that cause harm, and creates evidentiary presumptions in favor of claimants when vendors withhold data or violate safety rules under the AI Act. That last piece matters most. Proving an AI system caused your data loss has historically required technical evidence the vendor controls entirely; the presumption removes that advantage, because a vendor that will not produce the evidence is presumed to have shipped the defect.
Vietnam's Personal Data Protection Law, in force since January 2026, takes a blunter approach: data controllers are responsible for what happens to personal data regardless of which tool they used to process it. Ignorance of your vendor's retention practices is not a defense.
I'll grant one thing to the vendors: building AI systems that log nothing and train on nothing is genuinely harder, and the cost of that rigor would raise prices. That tension is real. But that is an argument for pricing the risk honestly, not for burying it in a terms-of-service clause that courts treat as ironclad consent.
The U.S. needs a federal standard that places non-waivable data-integrity obligations on AI vendors in high-stakes contexts: legal, medical, financial, and government work. Not guidelines. Obligations with teeth. Until then, the liability architecture is working exactly as designed, and the design was never meant to protect you.
Follow the incentives. The company that benefits from logging your data is the same company whose contract says it owes you nothing when that data is gone.