88 percent. That is the share of organizations McKinsey says now use AI in at least one business function, up from 78 percent just a year ago. Impressive adoption curve. Except when you dig into what "use AI" actually means at many of these companies, the answer is frequently: a third-party API call wrapped in a product page claiming "proprietary AI-powered intelligence." That gap between claim and reality has a name. And in 2026, it has legal consequences.

AI washing is not a new concept. But for years it was treated like a marketing problem, something a stern blog post or a cynical tweet could handle. That era is over. The enforcement machinery is running. The DOJ, SEC, and FTC are all focused on it, and they are not building new law to get there. They are using existing fraud statutes, which means the bar for prosecution is lower than most legal teams realize.

The Regulatory Pile-On Nobody Saw Coming

The FTC fired the first shot with Operation AI Comply in September 2024, launching five simultaneous enforcement actions against companies across industries for unsubstantiated AI claims. The signal was unmistakable: adding "AI" to your product description invites additional scrutiny. What surprised people was what came next. The Trump administration, which explicitly promised to pull back on AI-specific regulation, kept the operation running. Enforcement continued through 2025 with new cases against Click Profit, which promised AI-powered passive income and delivered neither, and Workado, which overstated the accuracy of its AI content detector. The FTC's posture has bipartisan support because fraud is bipartisan.

The SEC moved in parallel. In March 2024, it charged two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for false statements about AI-driven investment strategies. Delphia paid a $225,000 civil penalty; Global Predictions paid $175,000. Small numbers, maybe, but the SEC was establishing precedent, not maximizing fines. By early 2025, the SEC settled charges against Presto Automation, a restaurant tech company that claimed proprietary AI for drive-through ordering while actually relying on a third-party system and significant human intervention. The key detail: the SEC found negligent misrepresentation, not intentional fraud. You do not have to lie on purpose to get charged.

The SEC has now identified AI as a focal point in its Fiscal Year 2026 Examination Priorities, stating plainly that it will review registrant representations about AI capabilities for accuracy. That is not a hint. Meanwhile, the SEC's Investor Advisory Committee voted in December 2025 to advance guidance requiring public companies to disclose AI's actual impact on their business, citing a "lack of consistency" in current disclosures as problematic for investors. The regulatory floor is rising.

Private litigation is moving faster than regulators. Securities class actions targeting alleged AI misrepresentations doubled between 2023 and 2024. Courts are finding standing. A March 2025 ruling in the Southern District of New York sustained claims that a mobile health company misled investors about its "proprietary central AI system." In the GigaCloud Tech securities litigation, a court found statements about AI-enabled logistics tools actionable because the company did not, in fact, use AI as advertised. Plaintiffs' attorneys have learned the playbook.

The Problem Is Structural, Not Cosmetic

Here is what I keep seeing when I look at this from a builder's perspective: most AI washing is not malicious. It is organizational. The marketing team writes copy about "AI-powered" features. The sales team repeats it in decks. The press release goes out. Nobody in that chain asked the engineering team whether the product actually uses ML inference at runtime or just a rules-based lookup table from 2019. The gap between what ships and what gets claimed is usually not fraud in origin. It becomes fraud in documentation.
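That gap is easy to miss precisely because the two implementations look identical from the product surface. A toy sketch in Python (all names, rules, and weights are hypothetical) of what marketing cannot tell apart without asking engineering:

```python
import math

# Hypothetical: a 2019-style rules table sitting behind an "AI-powered" risk feature.
RULES = {("enterprise", "overdue"): 0.9, ("smb", "overdue"): 0.6}

def score_rules_based(segment: str, status: str) -> float:
    """A static lookup table. Not ML, whatever the deck claims."""
    return RULES.get((segment, status), 0.1)

def score_ml_inference(features: list[float], weights: list[float]) -> float:
    """Actual runtime inference (here, a toy logistic model)."""
    z = sum(f * w for f, w in zip(features, weights))
    return 1.0 / (1.0 + math.exp(-z))
```

Both functions return a score between 0 and 1, and nothing in the product UI reveals which one shipped. Only the second is defensibly "ML inference at runtime," which is why the claim has to be checked against the code, not the copy.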

Only 36 percent of boards have implemented a formal AI governance framework, according to the NACD's 2025 Board Practices and Oversight Survey. Just 6 percent have established AI-related management reporting metrics. Which means 94 percent of boards are approving earnings calls and investor presentations with AI claims nobody has verified against the actual system architecture. That is not a compliance gap; that is a liability sitting in plain sight.

The state-level regulatory environment makes this worse. More than 1,000 AI-related bills were introduced across state capitals in 2025. Colorado's comprehensive AI Act takes effect June 30, 2026. The EU AI Act reaches general application August 2, 2026. Texas's AI transparency law is already active. California's automated decision-making rules are live. A company operating across multiple jurisdictions that made vague "AI-enabled" claims in its 2024 fundraising materials is now potentially out of compliance with four overlapping regulatory regimes simultaneously, none of which require proof of intent to penalize.

The DOJ, SEC, and FTC are all using existing anti-fraud statutes, not new AI-specific law, to pursue these cases. That is the part most companies miss. The legal exposure does not require a new regulation to exist. It requires only that you wrote something in a filing or a press release that was not true.

What Builders Should Actually Do

Talk is cheap. Show me the repo. That is the internal standard every company should apply to its own AI claims before a regulator applies it for them.

The fix is not a compliance exercise bolted onto existing processes. It is building internal literacy from the ground up, from engineering to marketing to the board. Someone in the room when the investor deck is being finalized needs to be able to ask: does our system actually run ML inference on customer data, or does it call a third-party API and put our logo on it? Those are different products. They require different claims. Mischaracterizing the difference, even accidentally, is now documented enforcement territory.
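One way to make that question answerable on demand is an engineering-owned capability manifest: a record, per feature, of whether it runs first-party inference, which third-party providers it depends on, and how much human intervention is involved. A minimal sketch, with all field and feature names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AICapability:
    feature: str
    first_party_inference: bool        # our model, running in our stack
    third_party_providers: list[str] = field(default_factory=list)
    human_in_the_loop: bool = False    # does a person intervene at runtime?

# Hypothetical entries; the point is that engineering, not marketing, owns this file.
MANIFEST = [
    AICapability("order-taking", False, ["upstream-speech-api"], human_in_the_loop=True),
    AICapability("fraud-scoring", True),
]

def supports_proprietary_claim(feature: str) -> bool:
    """'Proprietary AI' is only defensible for first-party inference
    with no undisclosed third-party dependency."""
    cap = next(c for c in MANIFEST if c.feature == feature)
    return cap.first_party_inference and not cap.third_party_providers
```

Run against a manifest like this, a Presto-style feature, third-party system plus runtime human intervention, fails the check before the deck ships rather than after the SEC asks.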

Document everything: how models were developed, validated, integrated, and what human intervention they require. Pressure-test your external statements against your actual architecture. If you are using a third-party AI model, say so. Presto Automation did not, and they settled with the SEC. The enforcement record strongly suggests that proactive disclosure of limitations is treated far better than discovered omissions.
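Part of that pressure-testing can be mechanized: scan outbound copy for claim phrases and flag any that the documented architecture cannot support. A toy sketch, assuming a simple record of whether a feature wraps a third-party model and whether that is disclosed (the trigger phrases and parameter names are hypothetical):

```python
CLAIM_TRIGGERS = ("proprietary ai", "our ai", "ai-powered")

def flag_claims_for_review(copy_text: str, uses_third_party_model: bool,
                           disclosed: bool) -> list[str]:
    """Return claim phrases needing legal and engineering review.
    Proprietary-style language over an undisclosed third-party model
    is the fact pattern the SEC charged in Presto."""
    text = copy_text.lower()
    hits = [phrase for phrase in CLAIM_TRIGGERS if phrase in text]
    if uses_third_party_model and not disclosed:
        return hits
    return []
```

A grep is not compliance, but a check like this in the release pipeline at least forces the conversation before the press release goes out, not after the subpoena arrives.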

The companies building real AI capabilities right now, including a lot of small teams shipping genuinely impressive things, are being hurt by the companies that slapped "AI-powered" on a cron job. Enforcement here is not anti-innovation. It is pro-honest-builder. The sooner the hype-to-claim gap closes, the better the actual engineering gets to speak for itself.