On March 30, the CFP announced three new committee members: Bryan Maggard, Gus Malzahn, and Jeff Tedford. Rich Clark called it a commitment to "integrity and excellence." Hunter Yurachek got extended as chair. The press release was thorough, professional, and told you almost nothing about how 13 people will decide which 12 teams make the bracket come December.

That's the actual transparency problem, and it's more interesting than the conspiracy version. The committee isn't hiding its process. It publishes weekly top-25 rankings, lists film review and data analytics as inputs, and takes conference input. You can watch every step and still have no idea why Alabama at 9-2 with a bad loss ranks above a 10-1 Group of Five team with a better point differential. The criteria exist. The weighting doesn't.

The Model Nobody Will Show You

Here's what I'd want from a selection committee operating in 2026: a published formula, or at least a published hierarchy. Which matters more, strength of schedule or margin of victory? If a team's SP+ rating, which measures efficiency on both sides of the ball adjusted for opponent quality, contradicts its record, which one wins? The committee says it uses analytics. It does not say how much.
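What would a published formula even look like? Here's a minimal sketch. Every weight and metric name below is invented for illustration; this is what disclosure could look like, not a reconstruction of anything the committee actually does.

```python
# Hypothetical published weighting. The committee could disclose exactly this:
# which inputs count, and how much. These numbers are made up for illustration.
WEIGHTS = {
    "sp_plus": 0.40,                # opponent-adjusted efficiency
    "strength_of_schedule": 0.30,
    "record_vs_top25": 0.20,
    "margin_of_victory": 0.10,      # capped upstream so blowouts don't dominate
}

def composite_score(team_metrics: dict) -> float:
    """Weighted sum of metrics, each assumed pre-scaled to the 0-1 range."""
    return sum(WEIGHTS[k] * team_metrics[k] for k in WEIGHTS)

# Two hypothetical résumés: a brand-name 9-2 team with a tough schedule
# versus a 10-1 Group of Five team with a better point differential.
power_team = {"sp_plus": 0.88, "strength_of_schedule": 0.92,
              "record_vs_top25": 0.60, "margin_of_victory": 0.55}
g5_team = {"sp_plus": 0.80, "strength_of_schedule": 0.55,
           "record_vs_top25": 0.40, "margin_of_victory": 0.85}

print(composite_score(power_team) > composite_score(g5_team))
```

The point isn't these particular numbers; it's that with a formula like this, the Alabama-versus-Group-of-Five argument becomes an argument about weights you can see instead of film you can't.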

This is where I have to be honest about my own bias. I want a quantifiable process because I'm the guy who trusts sample size over moments, and a 13-person committee watching film is basically a 13-person committee trusting moments. Film evaluation is real expertise. I'm not dismissing it. But film without a weighting system is just organized intuition, and organized intuition has a documented history of favoring brand-name programs.

The 2026 committee now has four former head coaches out of 13 members. Malzahn coached power-conference programs. Tedford coached Cal and Fresno State. Maggard ran Louisiana's athletic department through 20 Sun Belt titles. That's a reasonable mix on paper. The problem is that "reasonable mix" is also not a methodology. It's a vibe check with credentials.

What Transparency Actually Requires

The fairest counterpoint is that college football's complexity genuinely resists clean formulas. Conferences play different schedules, strength of schedule varies wildly, and a 12-team bracket already absorbs more uncertainty than the old 4-team format did. The committee has a harder job than it looks, and weekly public rankings are more accountability than most selection bodies offer.

Fine. But the NFL uses win probability models that are publicly documented. FiveThirtyEight published its Elo methodology in full. Even the NCAA basketball tournament's NET ranking, which replaced the RPI ahead of the 2018-19 season, comes with published components explaining how it values road wins versus home wins. The CFP is the only major American sports selection process that says "we use analytics" without telling you which ones or how much they count.
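The Elo approach mentioned above is the canonical example of a fully reproducible rating system: the whole method fits in a few lines. This is the standard Elo update, not FiveThirtyEight's exact variant; the K-factor here is illustrative.

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected win probability for team A under the standard Elo logistic curve."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 25.0) -> float:
    """Team A's new rating after one game. K (illustrative here) controls
    how fast ratings move; a published system would state it."""
    return r_a + k * ((1.0 if a_won else 0.0) - elo_expected(r_a, r_b))

# Evenly rated teams: winner gains exactly what the loser drops.
print(elo_update(1500, 1500, a_won=True))   # winner's new rating
print(elo_update(1500, 1500, a_won=False))  # loser's new rating
```

Anyone with the game results can rerun the numbers and get the same rankings. That reproducibility, not the specific math, is what the CFP lacks.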

The fix isn't complicated. Publish the weighting. Tell us whether a team's SP+ ranking, its record against top-25 opponents, or its conference championship matters most when the committee breaks a tie. You don't have to eliminate human judgment; just show where it enters the process. Right now, the committee is transparent about its inputs and completely opaque about its math, which means every controversial ranking decision gets defended with "we watched the film" and every fan base arguing the other side has no way to prove them wrong.
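Even keeping human judgment, a published tie-break hierarchy is trivial to state. The ordering below is hypothetical, chosen only to show how little disclosure it would take:

```python
# Hypothetical published tie-break order: conference championship first,
# then wins over top-25 opponents, then SP+ rank. None of this is the
# committee's actual process; it's what a disclosed process could look like.
teams = [
    {"name": "Team A", "conf_champ": True,  "top25_wins": 2, "sp_plus_rank": 9},
    {"name": "Team B", "conf_champ": False, "top25_wins": 3, "sp_plus_rank": 5},
]

ranked = sorted(
    teams,
    key=lambda t: (-int(t["conf_champ"]), -t["top25_wins"], t["sp_plus_rank"]),
)
print([t["name"] for t in ranked])  # Team A first under this hypothetical order
```

With a stated order, a fan base that disagrees with a ranking can at least point at the exact criterion that decided it. "We watched the film" offers no such handle.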

That's not a hidden process. It's an unverifiable one. And for a 12-team bracket deciding who plays for a national title, unverifiable is not good enough.