Somewhere in a federal review queue right now, a company has submitted a read-across assessment, the method where safety data from one chemical informs judgments about a structurally similar one. The odds that assessment gets accepted are poor. The vast majority of industry read-across proposals fail regulatory review, not because the underlying science is fraudulent, but because the evidence of biological similarity between chemicals rarely meets the bar regulators require. That single bottleneck is costing animal lives, slowing chemical approvals, and leaving real safety questions unanswered.
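The structural half of a read-across is easy to sketch. Here is a toy illustration using the Tanimoto coefficient on invented feature fingerprints; real assessments use validated structural descriptors, and every chemical and bit index below is hypothetical:

```python
# Toy illustration of the structural-similarity screen behind read-across.
# The fingerprints are hand-made sets of hypothetical feature indices;
# real assessments use curated descriptors, not invented bits.

def tanimoto(a: set[int], b: set[int]) -> float:
    """Tanimoto coefficient: shared features / total distinct features."""
    return len(a & b) / len(a | b)

# Hypothetical structural-feature fingerprints.
source_chemical = {1, 4, 7, 9, 12}   # analogue with existing safety data
target_chemical = {1, 4, 7, 9, 15}   # untested chemical under review

score = tanimoto(source_chemical, target_chemical)
print(f"structural similarity: {score:.2f}")  # 4 shared / 6 distinct ≈ 0.67
```

This is only the easy half: a structural score like this is where proposals start, while the regulatory bottleneck described above is the evidence of *biological* similarity that has to sit on top of it.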
The standard picture of chemical safety testing goes like this: expose rodents to escalating doses, watch what breaks, extrapolate to humans with a safety factor. It works well enough to have built a 70-year regulatory system on. The problem is that mice and rats metabolize compounds differently than humans do. FDA's own March 2026 draft guidance acknowledges that validated New Approach Methodologies, which use human cell lines, organoids, and computational models, now outperform unvalidated animal models in predicting human drug responses. That is not a fringe position from animal rights advocates. That is the agency's stated technical finding.
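The extrapolation step is plain arithmetic. A sketch using the conventional default uncertainty factors (10x for animal-to-human differences, 10x for variability among humans); the NOAEL value here is invented for illustration:

```python
# Conventional extrapolation from an animal study to a human reference
# dose. The NOAEL below is a made-up example value, not real data.

NOAEL_MG_PER_KG_DAY = 50.0   # hypothetical no-observed-adverse-effect level
INTERSPECIES_FACTOR = 10     # default: humans may be more sensitive than rodents
INTRASPECIES_FACTOR = 10     # default: humans vary in sensitivity

reference_dose = NOAEL_MG_PER_KG_DAY / (INTERSPECIES_FACTOR * INTRASPECIES_FACTOR)
print(f"reference dose: {reference_dose} mg/kg/day")  # 0.5 mg/kg/day
```

That 10x interspecies factor is the blunt instrument standing in for exactly the metabolic differences the paragraph describes; human-based methods aim to replace that guess with measurement.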
What $150 Million Actually Buys
On March 18, NIH announced $150 million in Complement-ARIE program awards, including a 5-year, $15.3 million grant to Texas A&M's new NAMs Decisions Center. The center's mandate is specific: fill the data gaps that cause read-across proposals to fail, combining cell-based systems with computational modeling to build the evidence base regulators need before they will accept non-animal assessments.
Ivan Rusyn, who directs the center, put it plainly: the goal is not to generate more science papers. It is to change what regulators will sign off on.
That framing matters. The gap in chemical safety is not primarily a knowledge gap at this point. Scientists understand quite a lot about how organoids and organs-on-chips respond to toxic exposures. The gap is between what the science can show and what a regulatory submission needs to prove. Those are different problems, and funding research centers solves only one of them.
The Clock the EPA Set
EPA committed in 2025 to eliminating tests on mammals, including dogs and rabbits, by 2035. That is 9 years away. The FDA roadmap from April 2025 similarly treats animal tests as something to move away from, not something to eliminate on a fixed schedule. The distance between those two positions, EPA's deadline and FDA's directional preference, captures exactly how unresolved this is.
Fair point to the cautious side: rushing validation of new testing methods creates its own risk. A computational model that underestimates liver toxicity does not help anyone. The 3Rs framework, which pushes for replacing, reducing, and refining animal use, has delivered real results in countries that enforce it seriously. Validation standards exist for good reasons.
But the current system is not cautious. It is slow in ways that cause harm in both directions: chemicals with thin safety data stay on the market because full testing is expensive and slow, and genuinely useful compounds face years of animal trials that human-based models could resolve faster and more accurately. Over 100 organizations told Congress in 2026 that TSCA chemical reviews need to move faster. The science is ready to help. The regulatory acceptance criteria are not keeping pace.
The $150 million investment is real. What needs to follow it is a specific, published revision to the evidence standards for read-across acceptance, with a timeline attached. Funding a center to study the bottleneck is a start. Announcing you have funded a center is not progress. Published, revised acceptance criteria, with a date, would be.