The 2026 study in Weather and Climate Dynamics found that the 2018 Central European heatwave would have been 0.5°C less intense and 6 days shorter without human-caused warming. It projected 1.7°C of intensification per degree of global warming. Those are strong, specific numbers produced by a rigorous hybrid framework. I believe them. What I do not believe is that this level of confidence extends to most of the events making headlines.

Attribution science has a precision problem it refuses to talk about honestly. The field's best results cluster around one type of event: heat. Its weakest results cover nearly everything else people actually want explained.

The 34-Point Gap Nobody Advertises

Look at the shared evidence base. For extreme heat, 92% of 122 studies found a detectable climate signal. For rainfall, that number drops to 58% across 81 studies. That is a 34-percentage-point gap. In any engineering discipline I cover, a tool that works 92% of the time on one input and 58% on another would come with a very clear label specifying which input it is validated for.
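And the gap is not a small-sample fluke. Here is a minimal Python sketch, not from either study, that puts Wilson confidence intervals around both proportions. The hit counts are back-calculated from the percentages above, and treating each category as a binomial sample is my assumption:

```python
from math import sqrt

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion hits/n."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hit counts back-calculated from the percentages in the text (an assumption):
# 92% of 122 heat studies ~ 112; 58% of 81 rainfall studies ~ 47.
for label, hits, n in [("heat", 112, 122), ("rainfall", 47, 81)]:
    lo, hi = wilson_interval(hits, n)
    print(f"{label:9s} {hits}/{n} = {hits/n:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```

The intervals come out around [86%, 95%] for heat and [47%, 68%] for rainfall. They do not come close to overlapping. The gap is a property of the evidence, not of the sample sizes.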

Attribution science rarely comes with that label.

When World Weather Attribution releases a rapid study within days of a flood or a cyclone, the public does not parse the difference between a heat attribution with high confidence and a precipitation attribution where models struggle with convective-scale dynamics and grid resolution. The press release lands the same way. The framing is the same. The caveats, if they appear, sit in paragraph 9.

This matters because of how the numbers get used. A 68-country study of nearly 72,000 people found that personal attribution of weather to climate change predicted stronger policy support, but direct exposure to storms and rainfall did not independently move opinion. Wildfires were the lone exception. If the science itself is shaky on precipitation and storms, and public opinion does not shift from experiencing those events anyway, then overstating attribution confidence for those categories serves no one. It just erodes trust in the categories where the science is solid.

Peer Review Is Not a Formality

World Weather Attribution has conducted over 100 rapid studies. Only 26 have gone through peer review. I understand the argument for speed: infrastructure decisions cannot wait 18 months. But speed without quality control is how fields lose credibility. The 26 peer-reviewed studies may well confirm the methods. The other 74-plus have not been tested the same way, and treating them as equivalent is a choice, not a scientific standard.

The 2026 hybrid framework is genuinely impressive. Spectrally nudged storylines isolating thermodynamic effects from circulation patterns represent real methodological progress. I want to be clear: the people doing this work are producing some of the most useful climate science available. The problem is not the researchers. The problem is the gap between what the best studies establish and what the field's public communications imply.
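For readers who have not met the technique: spectral nudging relaxes only the large-scale part of a model field toward a reference circulation, leaving the small scales, and the thermodynamics riding on them, free to respond. A toy one-dimensional sketch of that core idea follows. This is my illustration, not the study's implementation, and the cutoff wavenumber and relaxation timescale are arbitrary:

```python
import numpy as np

def spectral_nudge(field, target, k_cut=5, tau=6.0, dt=1.0):
    """One toy nudging step: relax wavenumbers below k_cut toward the
    target circulation; smaller scales evolve untouched."""
    f_hat = np.fft.rfft(field)
    t_hat = np.fft.rfft(target)
    low = np.arange(f_hat.size) < k_cut          # large scales only
    f_hat[low] += (dt / tau) * (t_hat[low] - f_hat[low])
    return np.fft.irfft(f_hat, n=field.size)

# Toy usage: a drifted model field is pulled toward a reference pattern
# at large scales while keeping its own small-scale detail.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
reference = np.sin(2 * x)                            # "observed" circulation
model = np.sin(2 * x + 0.8) + 0.3 * np.sin(20 * x)   # drifted + small scales
for _ in range(100):
    model = spectral_nudge(model, reference)
```

After the loop, the large-scale wave matches the reference while the fine ripple survives intact. That is the separation a storyline attribution needs: circulation pinned down, everything else free to show the thermodynamic response.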

Crash Davis is right that 92% consistency across heat studies is a legitimate engineering-grade signal. I would use those numbers to plan grid resilience or size urban cooling infrastructure without hesitation. But extending that confidence to precipitation, windstorms, or compound events where the signal-to-noise ratio collapses is not engineering. It is marketing.
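"Signal-to-noise collapse" is not rhetoric; it is arithmetic. A toy simulation, with all numbers arbitrary, applies the same forced shift to a quiet, heat-like series and a noisy, precipitation-like one and shows how fast detection power falls:

```python
import numpy as np

rng = np.random.default_rng(42)
signal, n_years, n_trials = 1.0, 50, 10_000   # same forcing in both cases

def detection_rate(noise_sd):
    """Fraction of trials where a simple z-test separates two 50-year
    periods whose true means differ by the fixed signal."""
    before = rng.normal(0.0, noise_sd, (n_trials, n_years)).mean(axis=1)
    after = rng.normal(signal, noise_sd, (n_trials, n_years)).mean(axis=1)
    se = np.sqrt(2 * noise_sd**2 / n_years)
    return np.mean((after - before) / se > 1.96)

print("heat-like, low noise:   ", detection_rate(1.0))  # ~1.00
print("precip-like, high noise:", detection_rate(4.0))  # ~0.24
```

Same physical change, a fourfold increase in natural variability, and detection drops from near certainty to roughly one in four. That is the regime most precipitation and windstorm attributions live in.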

The National Academies initiated a project in 2023 to update its 2016 attribution report, with a stated priority of extending methods to data-limited regions. Good. But "extending methods" is an admission that the methods are not yet extended. The field should say so plainly, every time, in the same font size as the headline finding.

Attribution science should draw a hard, public line: heat events above it, most other categories below it, with explicit confidence tiers attached to every rapid study. That line would make the 92% number more powerful, not less. Right now, bundling strong results with weak ones makes the whole enterprise look like advocacy dressed as measurement. The thermodynamic signal is real. Pretending the rest of the portfolio matches it is the fastest way to get the real results ignored.