Picture a diner counter in 1988. The evening news is on the mounted television, volume just loud enough to hear. Everyone in the place watches the same two minutes of a Senate floor fight, gets mildly annoyed, and goes back to their coffee. Nobody's blood pressure is still elevated at midnight. The broadcast is finished. The anger has nowhere else to go.

That mild annoyance is what social media replaced with something closer to a slow marinade. The mechanism is different from television at the most fundamental level: TV served the same dish to everyone; algorithms serve you a version of political content tuned to your specific resentments, then watch how long you stay. The longer you stay angry, the better the algorithm is working. That is not a bug in the design. It is the product.
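The loop described above can be made concrete with a toy sketch. Everything here is hypothetical, the class, the field names, and the weights are invented for illustration, but it captures the core incentive: content predicted to hold your attention and provoke you ranks higher, per viewer, not per audience.

```python
from dataclasses import dataclass

# Toy sketch of a per-user engagement ranker. All names and weights
# are hypothetical; real ranking systems are vastly more complex.

@dataclass
class Post:
    title: str
    predicted_dwell_seconds: float   # how long the model expects YOU to linger
    predicted_anger_reaction: float  # 0..1, estimated per user, not per audience

def engagement_score(post: Post) -> float:
    # Anger holds attention, so it is rewarded rather than filtered out.
    return post.predicted_dwell_seconds * (1.0 + post.predicted_anger_reaction)

def rank_feed(posts: list[Post]) -> list[Post]:
    # A broadcast would show everyone the same order; this sorts per viewer.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local zoning update", 20.0, 0.1),
    Post("Senate floor shouting match", 45.0, 0.9),
    Post("Neutral policy explainer", 40.0, 0.2),
])
print([p.title for p in feed])
# The shouting match wins not because it is informative,
# but because anger multiplies its score.
```

Note that nothing in the scoring function mentions accuracy or civic value; the objective is dwell time, which is the point the essay is making.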

When the Feed Learned Your Tells

Andie Galante, a Gen Z political science student whose observation keeps circulating in research circles, put it plainly: "The polarization comes from our algorithms being so closely tailored to us that all you're seeing is extremes. You're not seeing any middle ground anymore." That erasure of middle ground is not something TV was capable of producing. A broadcast reaches millions of people in the same frame. An algorithm reaches a million people in a million different frames, each one tuned to provoke that one viewer's disgust reflex.

The research from the 2024 US election period is useful here, even if it complicates the story. A six-month experiment with 9,386 users tested prosocial algorithm alternatives against platform defaults. Prosocial feeds raised generalized social trust by 1.7 points. Polarization itself? Unchanged. Meta ran its own experiments, flipping Facebook to chronological order, reducing reshares, downranking partisan sources. Also null on polarization. The honest reading is that swapping one feed design for another is not a cure. The anger predates the algorithm, and I will grant that point freely.

But "the algorithm didn't create all of this" is not the same as "the algorithm isn't making it worse." A shorter experiment on Twitter found that 10 days of downranking polarizing content reduced polarized attitudes measurably. The mechanism is real. Platforms are not powerless. They are choosing not to act because acting costs engagement, and engagement is the revenue model.

What TV Could Not Do That YouTube Can

Television had a structural ceiling on how far it could push you. A segment ends. A commercial interrupts. You change the channel. YouTube's autoplay function has no such ceiling. Researchers have documented what they call an alt-right pipeline: a user watches a moderate political video and autoplay escalates, recommendation by recommendation, toward more extreme content. The path has a direction, and the direction is always toward more intensity. No broadcast network ever had the architecture to do that to a single viewer over three hours on a Tuesday night.
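The escalation pattern researchers describe can be sketched as a simple loop. The catalog, intensity scores, and step size below are all invented; the point is the gradient, where each recommendation is a notch more intense than the last, and nothing in the loop ever steps back down.

```python
# Hypothetical autoplay escalation. Titles and intensity values (0..1)
# are invented for illustration; no real recommender is this simple.

def next_video(current_intensity: float,
               candidates: list[tuple[str, float]]) -> tuple[str, float]:
    # Prefer candidates slightly more intense than what just played.
    # An engagement model can learn this gradient because it keeps
    # sessions alive; there is no commercial break to interrupt it.
    more_intense = [c for c in candidates if c[1] > current_intensity]
    pool = more_intense or candidates
    return min(pool, key=lambda c: abs(c[1] - (current_intensity + 0.1)))

catalog = [
    ("Moderate policy debate", 0.3),
    ("Heated panel argument", 0.45),
    ("Outrage monologue", 0.6),
    ("Conspiracy deep dive", 0.8),
]

session = ["Moderate policy debate"]
intensity = 0.3
for _ in range(3):  # three autoplay hops on a Tuesday night
    title, intensity = next_video(intensity, catalog)
    session.append(title)
print(session)
# Each hop lands one notch higher; the path only points one way.
```

Three hops take the session from a moderate debate to the most extreme item in the catalog, which is the structural difference from a broadcast: the escalation is automatic, personal, and unbounded by a schedule.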

On March 25, a report analyzing global digital discourse on a potential Iran conflict noted that discussions peaked March 2 through 9 due to "rapid algorithmic escalation amplifying emotional narratives." That phrase is doing real work. The emotion did not build organically. The algorithm found the peak emotional content and pushed it to the people most likely to react. Television journalists in 1988 could sensationalize, but they could not target you, personally, with the single image most likely to set you off.

The fix is not asking platforms to redesign their souls. It is requiring them to offer chronological feeds as a default, to make autoplay opt-in rather than opt-out, and to publish the data on what their recommendation engines actually promote. These are structural choices, not philosophical ones. The diner counter in 1988 was not a perfect information environment. But the anger cooled when the broadcast ended. Right now, the broadcast never ends, and it knows exactly which nerve to hit.
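To underline how small these structural choices are, here is a sketch of the defaults the paragraph argues for. The settings object and field names are hypothetical; the point is that "chronological by default, autoplay off by default" is a one-line policy decision, not a research problem.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical feed settings embodying the defaults argued for above:
# chronological ordering unless the user opts into ranking, and no autoplay
# unless the user asks for it.

@dataclass
class FeedSettings:
    chronological: bool = True   # engagement ranking becomes the opt-in
    autoplay: bool = False       # the session ends unless you say otherwise

@dataclass
class Post:
    title: str
    posted_at: datetime
    engagement_score: float

def build_feed(posts: list[Post], settings: FeedSettings) -> list[Post]:
    if settings.chronological:
        # Newest first: the order everyone sees, like the diner TV.
        return sorted(posts, key=lambda p: p.posted_at, reverse=True)
    # Opt-in only: the personalized, provocation-weighted order.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

posts = [
    Post("Old but enraging", datetime(2024, 1, 1), 0.9),
    Post("New and dull", datetime(2024, 6, 1), 0.1),
]
print([p.title for p in build_feed(posts, FeedSettings())])
# Under the default, recency beats provocation.
```

Under the engagement default the old, enraging post would lead the feed; under the chronological default it does not. The transparency requirement, publishing what the recommender actually promotes, is the piece no toggle can sketch, because it is about disclosure rather than ordering.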