A Meta employee wrote in internal documents that the company was acting like "drug pushers," that Instagram is "addictive," and that it has a "negative impact on mental health." Those documents predate the California jury verdict of early April 2026. Meta shipped the product anyway. That is not a content moderation failure. That is a product defect shipped with full knowledge of the defect.

The California jury found Meta and Google negligent for specific design choices: infinite scroll, autoplay, batched notifications timed to exploit dopamine cycles, and beauty filters that Instagram's own engineers flagged internally. Teens spend roughly 20% of their day inside these apps. One-third of young users on these platforms show measurable increases in depression, anxiety, and sleep disruption. The New Mexico jury hit Meta with a $375 million verdict the day before the California verdict came down. France's Senate moved to ban social media for children under 15 within days of the U.S. verdict. The legal and legislative pressure is not theoretical anymore.

The Tobacco Comparison Actually Holds

I know the tobacco analogy gets overused, but the structure here is identical. Tobacco companies ran internal studies showing nicotine was addictive and harmful, suppressed the findings, and kept selling. Meta ran internal studies showing Instagram harmed teen girls' body image, suppressed the findings, and kept shipping. The mechanism is different: nicotine versus a recommendation algorithm optimizing for session length. The corporate behavior is the same.

Platforms will argue that teens violate terms of service by lying about their age, making the companies victims of user fraud. That argument deserves one sentence: TikTok has no meaningful age verification, and a January 2026 Hawaii lawsuit documents exactly that. Kayley, the plaintiff in the California case, was under 13 when she first used Instagram. Meta's own systems flagged underage users and kept serving them content anyway. Blaming the 12-year-old is not a defense.

The harder tension I have to acknowledge: the causal chain between algorithm design and specific mental health outcomes is genuinely difficult to prove at the individual level. Brain pathway changes from heavy social media use are documented, but isolating the algorithm's contribution from family environment, school stress, and pre-existing conditions is not clean science. Plaintiffs' attorneys are threading a needle, and some of these cases will fail on causation. That does not mean the legal theory is wrong. It means the evidence standards are high, as they should be.

Product Liability Is the Right Tool

Section 230 debates miss the point. Section 230 protects platforms from liability for user-generated content. It was never designed to shield product design decisions. Infinite scroll is not user content. Autoplay is not user content. A notification system engineered to batch likes and release them in bursts to maximize re-engagement is not user content. These are engineering choices, made by product managers and backend engineers, shipped through A/B tests, and optimized against engagement metrics that correlate with addiction patterns.
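
To be concrete about what "engineered" means here, the sketch below shows what a like-batching policy can look like. Every class name, threshold, and field in it is a hypothetical illustration, not code from Meta or any other platform; the point is that the hold window, the minimum batch size, and the idle trigger are parameters a product team chooses and tunes, not anything a user posted.

```python
import time
from dataclasses import dataclass, field

# Hypothetical illustration of a notification-batching policy: withhold
# individual "like" events and release them as one burst once the user has
# been idle long enough that a burst is likely to pull them back into the
# app. Every constant here is the kind of tunable an A/B test would set.

HOLD_WINDOW_SECONDS = 3 * 60 * 60   # how long to withhold accumulated likes
MIN_BATCH_SIZE = 5                  # don't fire until the burst feels "big"
IDLE_THRESHOLD_SECONDS = 45 * 60    # only fire once the user has drifted away


@dataclass
class PendingLikes:
    user_id: str
    events: list = field(default_factory=list)
    first_event_at: float | None = None

    def add(self, like_event: dict) -> None:
        # Buffer the like instead of notifying immediately.
        if self.first_event_at is None:
            self.first_event_at = time.time()
        self.events.append(like_event)

    def should_release(self, last_active_at: float) -> bool:
        """Release the burst only when all three conditions line up."""
        if not self.events or self.first_event_at is None:
            return False
        held_long_enough = time.time() - self.first_event_at >= HOLD_WINDOW_SECONDS
        batch_big_enough = len(self.events) >= MIN_BATCH_SIZE
        user_is_idle = time.time() - last_active_at >= IDLE_THRESHOLD_SECONDS
        return held_long_enough and batch_big_enough and user_is_idle


def flush(pending: PendingLikes, send_push) -> None:
    """Send one push for the whole batch, then reset the buffer."""
    send_push(pending.user_id, f"{len(pending.events)} people liked your post")
    pending.events.clear()
    pending.first_event_at = None
```

Those three constants are exactly the sort of knobs that get optimized against re-engagement metrics in A/B tests. Nothing in that loop is user-generated content, which is why Section 230 has nothing to say about it.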

If you have shipped a product and later discovered it was harming users, you patch it or you pull it. You do not keep the harmful version running because it drives revenue. That is the standard we hold every other product category to, from pharmaceuticals to automobiles. Meta and Google should not get a different standard because their product runs on servers instead of assembly lines.

Congress should stop treating this as a content problem and let product liability law run. The California and New Mexico verdicts are the right signal. The companies will appeal, possibly to the Supreme Court. Whatever those courts decide will set the actual standard. The internal documents are already in the record. That evidence does not disappear on appeal.