Google DeepMind's GNoME tool has predicted 2.2 million new crystal structures. Of those, 380,000 are considered stable enough to be candidates for experimental synthesis. The previous total of known stable materials, accumulated across all of human chemistry history, was around 48,000. That is not a minor improvement. That is roughly a tenfold expansion in the known map of possible materials, achieved by a deep learning model running on GPU clusters.
So. Are we done? Not remotely. Even after new materials are discovered, it typically takes industry decades to bring them to commercial application. The optimistic case, according to DeepMind's own materials discovery lead, is five years. Meanwhile, external researchers have independently created only 736 of GNoME's predicted materials in the lab. Seven hundred thirty-six out of 380,000 candidates. That is the gap this column is about.
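The scale of that gap is worth making concrete. A back-of-envelope sketch, using the numbers above and one loud assumption (that the 736 syntheses happened over roughly a year, and that the pace stays flat):

```python
# Back-of-envelope: how long would it take to experimentally validate
# GNoME's stability-filtered candidates at the current external pace?
# The one-year window is an ASSUMPTION for illustration, not a sourced figure.
candidates = 380_000      # GNoME predictions deemed stable (from the text)
synthesized = 736         # independently made in the lab so far (from the text)
years_elapsed = 1         # assumed window for those 736 syntheses

rate_per_year = synthesized / years_elapsed
years_to_clear = candidates / rate_per_year

print(f"Fraction validated so far: {synthesized / candidates:.2%}")  # ~0.19%
print(f"Years to clear the backlog at this pace: {years_to_clear:.0f}")  # ~516
```

Even if the assumed rate is off by an order of magnitude, the conclusion survives: prediction has outrun validation by centuries, not years.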
The engineering record says AI has genuinely entered the scientific process and is accelerating real discovery. The hype says we are approaching autonomous science. Both claims are simultaneously true and misleading, and conflating them costs us time we do not have.
What the Numbers Actually Look Like
In 2026, AI is no longer just summarizing papers and writing reports. It is actively joining the process of discovery in physics, chemistry, and biology, generating hypotheses, controlling scientific experiments, and collaborating with human researchers. That framing, from Microsoft Research president Peter Lee, is accurate. It is also the version of events that gets press releases written.
The more grounded version comes from computational neuroscientist Sebastian Musslick, who put it plainly: a year ago, researchers would have said there was a lot of hype; now, there are actually real discoveries. That shift matters. The 2024 Nobel prizes in both chemistry and physics went to researchers who built AI tools. In math, DeepMind used an advanced version of its Gemini model to earn a gold medal at the International Mathematical Olympiad, a feat forecasters in 2021 predicted would remain out of reach until 2043. These are not hallucinations. They are documented results.
In drug discovery, the numbers are similarly concrete. Automated AI-driven labs are conducting about 800 chemical reactions per day, equivalent to the daily output of roughly 150 to 200 chemists. Novartis used generative AI to computationally design 15 million potential compounds, then worked with only around 60 in the lab, arriving at a potent molecular scaffold now moving forward for further optimization. The leverage ratio there is remarkable: 15 million in silico, 60 in the physical lab. That is what acceleration looks like when it actually works.
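That funnel ratio is easy to state precisely. A trivial sketch, using only the two figures from the Novartis example above:

```python
# Illustrative funnel arithmetic for the generative-design example in the text:
# millions of candidates designed in silico, a few dozen carried into the lab.
designed_in_silico = 15_000_000   # compounds designed computationally
tested_in_lab = 60                # compounds actually synthesized and tested

leverage = designed_in_silico // tested_in_lab
print(f"In-silico-to-lab leverage: {leverage:,}:1")  # 250,000:1
```

A quarter-million computational candidates per physical experiment is the whole argument for the approach, and also the whole problem: the physical lab is the narrow end of the funnel.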
Self-driving laboratories are also producing measurable gains, not just concept papers. Researchers have demonstrated a technique, published in Nature Chemical Engineering, that allows self-driving laboratories to collect at least 10 times more data than previous approaches, at record speed. A collaboration between Argonne National Laboratory and the University of Chicago found that an AI advisor model applied to electronic polymer discovery achieved a 150% improvement in mixed conducting performance and identified key structural factors. Traditional materials development pipelines run 10 to 20 years. Self-driving laboratories and Materials Acceleration Platforms aim to cut that to 1 to 2 years through closed-loop systems that combine physical experimentation with computational intelligence. That is the target, not yet achieved at scale. But the direction is real.
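The closed loop those platforms run is a design-make-test-learn cycle. A minimal sketch of the control logic, with everything hypothetical: the "experiment" below is a toy noisy objective standing in for a real synthesis-and-measurement step, and the "model" is a simple explore-or-exploit heuristic rather than a trained surrogate:

```python
import random

def run_experiment(x: float) -> float:
    """Stand-in for a physical measurement (e.g. conductivity).
    HYPOTHETICAL objective: a noisy peak at processing condition x = 0.7."""
    return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

def propose(history: list[tuple[float, float]]) -> float:
    """Design step: usually perturb the best condition so far (exploit),
    sometimes sample a fresh random condition (explore)."""
    if not history or random.random() < 0.3:
        return random.random()                                   # explore
    best_x, _ = max(history, key=lambda h: h[1])
    return min(1.0, max(0.0, best_x + random.gauss(0, 0.05)))    # exploit

random.seed(0)
history: list[tuple[float, float]] = []
for _ in range(50):               # 50 closed-loop cycles
    x = propose(history)          # design
    y = run_experiment(x)         # make + test (the slow physical step)
    history.append((x, y))        # learn

best_x, best_y = max(history, key=lambda h: h[1])
print(f"Best condition found: x = {best_x:.2f}")
```

The point of the sketch is the loop shape, not the optimizer: real platforms replace `propose` with Bayesian optimization over a learned surrogate, and `run_experiment` with robotic synthesis. Each pass through the loop is bounded by physical time, which is exactly the constraint the column returns to below.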
Where the Skeptics Are Right
None of this is magic, and the skeptics who say so are doing the field a service. Researchers at UC Santa Barbara argued that GNoME's predictions are solely of crystalline inorganic compounds and should be described as such, not labeled with the more generic term "material." They note that GNoME did not cover polymers, glasses, metal-organic frameworks, heterostructures, or composites. That critique is fair. Calling a tenfold expansion in known inorganic crystals an order-of-magnitude expansion in all materials known to humanity is marketing. The underlying tool is still genuinely impressive. The language around it was not.
The deeper structural problem: in order to probe the limits of current scientific knowledge, we need data we do not already have, and AI cannot get that data on its own. Even the most promising AI-generated ideas could falter during real-world testing, because the validation has to be done in the physical lab. That is not a temporary limitation. That is physics. You cannot computationally validate a material's behavior under mechanical stress, thermal cycling, or electrochemical load. You have to build the thing.
In a 2026 interview, DeepMind's Demis Hassabis shared the same view: current AI systems cannot yet come up with genuinely new hypotheses or new ideas about how the world works. He estimates we are five to ten years away from "true innovation and creativity" in AI science. That is an honest assessment from someone with every commercial incentive to overstate the case. It deserves more attention than it gets.
The drug discovery pipeline carries the same structural weight. Advancing a new drug therapy from concept to clinic averages 10 years and costs over $2.5 billion, with 90% of candidates failing in pre-clinical and clinical phases. AI can compress the front end of that pipeline dramatically. It cannot compress human biology. It cannot make a clinical trial run faster than the time required to observe outcomes. Robotics tightly integrated with AI now enables self-driving laboratories that accelerate design-make-test-learn cycles. But the forward-looking roadmap still centers on hybrid physics-AI strategies to derisk development and build trustworthy AI as a cornerstone of discovery. Note the word "trustworthy." That is still a work in progress.
The Deployment Problem Is Political, Not Technical
Both sides are wrong. The doomers who dismiss AI in science as pure hype are ignoring hundreds of peer-reviewed results and a genuine acceleration in the rate of hypothesis generation. The boosters who talk about AI scientists replacing human researchers are ignoring every structural, regulatory, and physical constraint that governs how a discovery becomes a product.
The leaps made by large language models have opened the door to a wider AI-for-science gold rush. Tech giants and investors funneled hundreds of millions into spinoffs such as Periodic Labs, Lila Sciences, and OpenAI for Science. That capital is real. It will produce results. The question is not whether AI accelerates discovery. It clearly does. The question is whether the regulatory frameworks, synthesis infrastructure, and validation pipelines can absorb that acceleration.
Right now they cannot. Patent law does not recognize AI inventors, which creates a serious obstacle for inventions emerging from AI-driven science: if those inventions remain unpatentable, funding for self-driving laboratories may be constrained. The FDA and EMA are only beginning to build frameworks for AI-designed therapeutics. The physical laboratory capacity to synthesize and validate millions of AI-predicted candidates does not exist at scale.
This is solvable, but not the way you think. The bottleneck is not algorithmic. GPT-5 finding symmetries in black hole equations in the summer of 2025, a generative model designing 15 million drug candidates in weeks, self-driving labs running 800 reactions a day: the computation is running faster than we can handle the output. The constraint is infrastructure, regulation, and the stubborn physical time it takes to grow a crystal, run a trial, or synthesize a polymer. Those timelines are political and institutional problems, not engineering ones. And right now, the attention is all on the algorithm.