A deep learning tool called PATCH can assign a 1-cm² patch of oil paint to the correct artist with 95% accuracy. That number is genuinely impressive. It is also completely irrelevant to whether the painting is any good. The confusion between those two things is where the real problem lives in 2026.

The question of whether algorithms decide what counts as good art has a clean answer: no. The messier, more important question is whether they decide what gets seen, funded, and therefore valued. That answer is yes, and the consequences are already showing up in the numbers.

Exposure Is Not a Neutral Variable

Streaming platforms commission films based on what their recommendation engines predict will retain subscribers. That is not a creative judgment; it is a retention optimization. When a platform buries an experimental film because its algorithm predicts low completion rates, the film does not stop being good. It stops generating revenue. Over time, those two facts get treated as interchangeable by everyone downstream: distributors, critics, grant committees, art schools.

A study published this month found that watching experimental films increases creative thinking, while YouTube's algorithmic feed trains viewers toward patterned consumption that runs counter to creative cognition. The researchers framed this carefully: the feed is not neutral. It is actively shaping what audiences can tolerate, which shapes what creators can sell, which shapes what gets made. That feedback loop is where algorithmic influence becomes structural rather than incidental.

The fair point to grant here: authentication tools like PATCH genuinely help. Provenance fraud costs the art market hundreds of millions annually, and a system that identifies same-artist pairs with an F1 score of 0.991 is a real service to buyers and historians. Precision attribution is not the same as aesthetic judgment, and conflating them is the critics' error, not the algorithm's.
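For readers unfamiliar with the metric: F1 is the harmonic mean of precision (what fraction of claimed matches are real) and recall (what fraction of real matches are found), so a score of 0.991 implies both are near 0.99. A minimal sketch, using hypothetical confusion counts chosen purely for illustration (not figures from the PATCH study):

```python
# Illustrative only: F1 is the harmonic mean of precision and recall.
# The counts below are hypothetical, not taken from the PATCH paper.
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Compute F1 from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)   # claimed same-artist pairs that are correct
    recall = tp / (tp + fn)      # actual same-artist pairs that are found
    return 2 * precision * recall / (precision + recall)

# E.g., 991 correct matches, 9 false matches, 9 missed matches:
print(round(f1_score(tp=991, fp=9, fn=9), 3))  # → 0.991
```

The point of the arithmetic is the essay's point: an F1 of 0.991 is a statement about classification error rates, and nothing in the formula touches aesthetic quality.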

The Counter-Signal Is Already Priced In

The market is not waiting for a cultural reckoning. Graphic and web design trends for 2026 are already shifting toward hand-drawn illustrations as a direct response to AI saturation. Buyers are paying a premium for detectable human labor. That is not sentiment; it is price differentiation. When AI can generate a competent abstract in seconds, the scarcity value moves to the thing AI cannot fake: evidence of struggle, revision, and time.

Kevin Kelly's claim that AI can now make better art than most humans is worth taking seriously, not because it is correct, but because it reveals the category error driving this whole debate. "Better" by what measure? Technically proficient? Probably. Capable of the kind of ambiguity and difficulty that makes art worth returning to? The experimental film study suggests the opposite. Algorithmic outputs train consumption; they do not expand it.

The practical implication is specific. Institutions that use algorithmic metrics (completion rates, engagement scores, attribution confidence) to make funding and acquisition decisions are not discovering quality. They are laundering a distribution metric as an aesthetic one. Museum acquisition committees, streaming commissioning editors, and grant panels should be required to separate reach data from quality criteria in their evaluation rubrics. Not because algorithms are evil, but because conflating the two is a category error with real financial consequences for working artists.

PATCH can tell you Rembrandt painted it. It cannot tell you why that matters. The people writing the checks need to hold that distinction, because right now, most of them are not.