Seventy parents walked the halls of Capitol Hill on April 23 and 24, citing two jury verdicts as proof that something had finally changed. They were right about the law. They were wrong about the platforms.
The March 2026 rulings against Meta and YouTube are genuinely significant as legal architecture. Los Angeles jurors found both companies liable for negligent design, not for what users posted but for how the product was built to keep them there. A New Mexico jury ordered Meta to pay $375 million for predator exposure and deceptive safety claims, threading the needle around Section 230 by framing the conduct as a consumer protection violation. For 30 years, that statute was a wall that stopped almost every accountability attempt cold. These cases found a door.
But a door is not a destination. The Los Angeles verdict awarded $6 million total, split between two companies. One legal analyst called it, accurately, "a rounding error" for Meta. Neither company has announced a single design change. The expert prediction that users are "likely to see" off-by-default settings for minors is prospective, not present. As of late April, the product your teenager uses today is the same product it was in February.
The Incentive Structure Didn't Move
Here is the question worth sitting with: what would it actually cost Meta to redesign its recommendation engine away from compulsive engagement? Not in engineering hours. In revenue. Engagement-maximizing algorithms are not a bug in Meta's business model; they are the mechanism by which the business model functions. Advertisers pay for attention. Attention is manufactured by the design choices these juries just called negligent. A $6 million penalty against that system is not a deterrent. It is a licensing fee, and a cheap one.
Meta and Google have both vowed to appeal. That is the correct move from a pure incentive standpoint: delay the precedent, contest the damages, and continue operating while the legal process runs its course over the next several years. The companies are not being irrational. They are doing exactly what their revenue structure rewards them for doing.
I'll grant the optimists one real point: the New Mexico playbook, framing design as a consumer protection violation rather than a content question, gives state attorneys general a replicable legal theory that does not require federal action. That matters. But replicable legal theories still take years to litigate, and the platforms know how to run out the clock.
What Would Actually Move the Needle
Roblox now faces 130 federal suits on similar design mechanics. AI chatbots, which generate their own outputs rather than amplifying user content, present an even cleaner product liability theory. The legal pressure is building across the industry, not just on Meta and YouTube. That accumulation could eventually produce damage awards large enough to register as a genuine business risk rather than a litigation cost of doing business.
But "eventually" is doing a lot of work in that sentence, and the children using these platforms right now are not in an "eventually" situation. Congress has the authority to mandate specific architectural requirements: default time limits, algorithmic transparency, a prohibition on engagement-maximizing design for users under 18. The parents on Capitol Hill this week are asking for exactly that. The question is whether legislators treat the verdicts as permission to act or as a reason to wait and see how the appeals resolve.
Waiting is also a choice. It just benefits different people than the ones who made the trip to Washington.