Meta's internal researchers wrote it plainly: "Given the disproportionate engagement, our algorithms presume that users like that content and want more of it." That sentence, surfaced this week in the BBC documentary Inside the Rage Machine, is not a revelation. It is a confession with a timestamp.

The mechanism is simple. Angry users scroll longer. Longer sessions mean more ad inventory, and more ad inventory means higher quarterly revenue. Meta has more than 3 billion users across its platforms. A 2 to 3 percent revenue gain from loosened content guardrails, the figure one anonymous engineer cited when explaining why Meta chased TikTok, translates to hundreds of millions of dollars a year. The anger is not a bug the company is working to fix. It is a margin line item.

When the Product Outpaces the Safety Team

Instagram Reels launched in 2020 without sufficient safeguards, a fact confirmed by Meta's own senior researcher Matt Motyl. The internal data is specific: Reels showed 75 percent higher rates of bullying and harassment content than the main feed, 19 percent higher rates of hate speech, and 7 percent higher rates of violence and incitement. Meta shipped on schedule anyway. The revenue case for Reels against TikTok outweighed the safety case for waiting.

Ruofan Ding, a former machine-learning engineer at TikTok, described the recommendation system as a "black box" that his team adjusted almost weekly to boost engagement, with predictable results: more borderline content, more user anger, more sessions. Asked whether the algorithm could be designed to be inherently safe, he answered: "We lack control over the deep learning algorithm." That is a remarkable thing for an engineer to admit about a system his former employer ships to a billion users.

Calum, now 19, told the BBC he was radicalised by the algorithm starting at age 14. His description of the experience is precise: "The videos energised me, but not really in a good way. They just made me very kind of angry." He is one person. Meta takes action on over 6 million Reels a month for policy violations, which means the volume of harmful content reaching users before enforcement catches it is orders of magnitude larger.

The Fairness Argument Does Not Hold

Platforms will argue, reasonably, that their moderation teams remove millions of pieces of harmful content every month and that the systems are improving. That is true. But it is also true that Meta's own internal document, titled "Does Facebook reward outrage?", found that content generating more negative comments is more likely to attract traffic. You cannot moderate your way out of an incentive structure that rewards the behavior you are moderating against.

UK counter-terrorism police reported this month a "normalisation" of antisemitic, racist, and far-right posts over recent months. That normalisation did not happen by accident. It happened because the systems were optimized for it, knowingly, with internal research confirming the tradeoff.

The ongoing test case in Los Angeles, where Meta and YouTube face allegations of creating addictive products that harmed a 20-year-old's mental health, will not fix this on its own. TikTok and Snapchat settled before the case reached trial. Settlements do not change product design.

What would actually change it: regulatory liability that treats algorithmic amplification of harmful content the same way product liability treats a defective car. If Meta bore financial consequences proportional to the harm its recommendation engine causes, the internal calculus on that 2 to 3 percent revenue gain would shift immediately. Congress has declined to act for a decade. The BBC documentary hands lawmakers the receipts. The question now is whether they use them.