A jury in Los Angeles spent seven weeks hearing evidence that Meta and YouTube deliberately engineered their platforms to bypass adolescent self-regulation, then awarded $6 million in damages. Meta's market cap at the time: over $1 trillion. The fine is not a deterrent. It is a licensing fee.
The New Mexico verdict was larger, at $375 million, and the legal theory behind both cases was genuinely significant: courts treated infinite scroll, autoplay, and variable-reward notification systems as design defects rather than protected speech, effectively routing around Section 230 immunity. Legal scholars will cite these cases for years. But the question worth asking is not whether the rulings were legally creative. It is whether anyone's feed looks different today than it did before the verdicts. The answer is no.
The Fine Is the Feature
Meta's internal research, surfaced by whistleblowers and entered into evidence, showed the company knew Instagram was worsening body dysmorphia in teenage girls and optimized for it anyway, because the emotional distress kept users scrolling. The Los Angeles jury heard this and found Meta 70% liable. Meta announced it would appeal. No compliance changes followed. The algorithm the jury found harmful is still running, still optimizing for the same engagement signals, still serving the same demographic.
This is what a business model looks like when it has correctly calculated that the cost of harm is lower than the revenue from causing it. The tobacco comparison that legal commentators keep reaching for is apt, but only up to a point: tobacco companies eventually lost the math war when liability exposure exceeded profit margins. Meta and Google are nowhere near that threshold. With 1,600 similar trials pending and punitive damages potentially reaching $30 million per case in California, the aggregate exposure starts to look meaningful: even a tenth of those cases hitting that ceiling would approach $5 billion. But that exposure only materializes if courts keep ruling this way and if the appeals fail. Meta is betting that at least one of those conditions will not hold.
The Massachusetts Supreme Judicial Court heard oral arguments on April 10, 2026, in a related case. A decision is pending. Each new ruling either tightens or loosens the legal theory that makes all of this work. The companies know this, which is why the appeal strategy is not just delay: it is an attempt to kill the precedent before it compounds.
Who Absorbs the Cost While the Appeals Run
The honest tension in my own argument is this: the legal theory is working, slowly. Treating platform design as product liability rather than content moderation is the right frame, and these verdicts are building the case file that federal bellwether trials this summer will need. Progress is real, even if it is not visible in any user's experience yet.
But the children who were the subjects of Meta's internal studies are not waiting for appellate timelines. The 13-year-old whose self-harm exposure the algorithm optimized around in 2021 is 18 now. The legal system's pace and the developmental window it was supposed to protect are not synchronized, and that gap is not an accident of jurisprudence. It is a structural advantage that platforms with billion-dollar legal budgets have always known how to exploit.
Congress could close this gap by mandating algorithm audits with independent oversight, not as a condition of settlement but as a baseline operating requirement. The Federal Trade Commission already has the authority to pursue this under its unfair practices doctrine; it has simply chosen not to prioritize doing so. The rulings give the FTC political cover it did not have before. The question is whether anyone in a position to act will use it before the next round of internal research gets buried.
The verdicts proved the companies knew. The feeds proved nothing changed. Those two facts, sitting next to each other, are the whole argument.