A New Mexico jury found Meta liable for harming children's mental health. One day later, a Los Angeles jury awarded $6 million against Meta and YouTube for negligent platform design, with California's punitive multiplier potentially pushing that single case to $30 million. The verdicts share one finding that matters more than the dollar amounts: the design features themselves are the defective product. Not the content. The architecture.

Whether the architecture itself can be the defect is the question the industry has spent two decades deflecting. Engineers at Meta ran A/B tests to find the most engaging notification cadence. They tested which variable reward intervals kept teenagers returning most compulsively. Meta's own researchers told management that Instagram worsened eating disorders for one in three young girls, and management responded by accelerating feature rollout. The juries read those internal documents. So the question of whether courts should now mandate redesign is really a question about what happens when the people who built the harm are also the ones proposing the fix.

The Incentive That Built the Product

Infinite scroll, autoplay, compulsive notifications, algorithmic amplification of anger: none of these are accidents of engineering. They are outputs of a business model that monetizes attention and measures success in daily active minutes. Devon Reyes would argue that engineers understand the technical constraints of redesign better than any judge, and he has a point. Courts are genuinely poor at specifying implementation details. But the argument that engineers should retain design authority because they understand the system is precisely the argument that produced the system. The people who optimized for engagement over adolescent neurological safety do not get to self-certify the repair.
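To make that incentive concrete, here is a deliberately simplified sketch of how an engagement-driven experiment like the A/B tests described above encodes its objective. This is purely illustrative: the variant names, metrics, and numbers are hypothetical, not any platform's actual code. The structural point is that when the only score is daily active minutes, the "winning" design is whichever one users cannot put down.

```python
import random

# Illustrative sketch only -- hypothetical variants and numbers, not any
# platform's real code. It shows how an A/B test whose sole objective is
# daily active minutes will mechanically select the most compulsive design.

NOTIFICATION_VARIANTS = {
    "fixed_daily_digest":    {"mean_minutes": 21.0},  # predictable, easy to ignore
    "variable_interval":     {"mean_minutes": 34.0},  # intermittent reward schedule
    "variable_plus_streaks": {"mean_minutes": 41.0},  # adds loss-aversion pressure
}

def simulate_user_minutes(variant: str) -> float:
    """Simulate one user's daily active minutes under a notification variant."""
    base = NOTIFICATION_VARIANTS[variant]["mean_minutes"]
    return max(0.0, random.gauss(base, 8.0))

def run_ab_test(users_per_arm: int = 10_000) -> str:
    """Pick the variant that maximizes average daily active minutes.

    Note what is absent from this objective: sleep displacement, compulsive
    checking, any measure of adolescent well-being. The experiment cannot
    'see' harms it was never asked to score.
    """
    results = {}
    for variant in NOTIFICATION_VARIANTS:
        minutes = [simulate_user_minutes(variant) for _ in range(users_per_arm)]
        results[variant] = sum(minutes) / len(minutes)
    return max(results, key=results.get)

if __name__ == "__main__":
    print("Shipping variant:", run_ab_test())
```

An external design standard changes the objective function itself, not just the arms of the test, which is precisely what a self-certified repair never has to do.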

The product liability framework the juries applied is the right one. When a toy manufacturer builds sharp edges into a product marketed to children, we do not ask the manufacturer to voluntarily round the corners. We remove the product and hold the manufacturer to an external standard. The same logic applies here, and the courts are now saying so explicitly. The verdicts do not establish that judges will write code. They establish that specific features (infinite scroll, autoplay, variable reward systems) can be classified as defective, which means engineers must redesign them to a standard they do not set themselves.

Who Pays When the Remedy Is Too Weak

The tension I keep returning to: courts move slowly, and platforms iterate fast. A court order banning infinite scroll in 2026 may be technically obsolete by 2028, when the same neurological manipulation ships under a different product name. Regulatory bodies with ongoing technical authority, not one-time verdicts, are probably the more durable mechanism. The EU's concurrent rollout of age-verification infrastructure suggests that some governments understand this. The United States does not yet have an equivalent.

What the March verdicts actually accomplish is establishing that the harm is real, that it was known and documented before it shipped, and that the defense of parental responsibility is legally insufficient. That foundation matters enormously. It means the next legislative push for mandatory design standards arrives with jury findings behind it rather than advocacy documents. The $30 million ceiling on punitive damages in a single California case is still a rounding error against Meta's quarterly revenue. The precedent is not.

The engineers did not fail to see the harm. They saw it, documented it, and shipped anyway. Courts did not create that problem. They are simply the first institution with the authority to make it expensive enough to stop.