Somewhere in 2025, quantum computing stopped being a physics experiment and became a stock narrative. The Defiance Quantum ETF (QTUM) returned 103.9% in a single year. Funding doubled to $3.77 billion in just the first three quarters of the year. Meanwhile, the actual machines still cannot solve a practical problem that a classical computer cannot also solve, faster and cheaper. That is the gap investors are not pricing in.
Still Noisy, Still Intermediate, Still Scaled Wrong
Every working quantum system today sits in what researchers call the NISQ era: Noisy Intermediate-Scale Quantum. That label is not marketing. It is an engineering status report. Qubit counts are climbing. Error rates are falling. But fault-tolerant systems, the kind needed to run the algorithms that actually threaten classical computing, do not exist yet. IBM is targeting fault-tolerant modules by 2027. Google puts its milestone closer to 2029. The Quantum Insider, reviewing the field this week, put widespread adoption at 10 to 20 years out.
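The "Noisy" part of that label can be made concrete with a back-of-envelope calculation: if every gate fails independently with probability ε, the chance that a circuit of G gates runs cleanly is (1 − ε)^G, which collapses fast as circuits get deep. The error rate and gate counts below are illustrative round numbers, not figures for any particular machine.

```python
# Back-of-envelope: probability that a circuit runs with zero gate errors,
# assuming each gate fails independently. The error rate and gate counts
# are illustrative round numbers, not specs of any specific machine.

def clean_run_probability(gate_error: float, gate_count: int) -> float:
    """P(no errors anywhere) = (1 - gate_error) ** gate_count."""
    return (1.0 - gate_error) ** gate_count

eps = 1e-3  # assumed per-gate error rate, roughly today's better two-qubit gates

for gates in (100, 1_000, 100_000):
    p = clean_run_probability(eps, gates)
    print(f"{gates:>7} gates -> P(clean run) = {p:.3g}")
```

At a thousand gates the odds of a clean run are already near a coin flip's worse cousin, and at the circuit depths useful algorithms demand they are effectively zero. That is the arithmetic behind "Intermediate-Scale": without error correction, depth is capped hard.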
Think about what that means in engineering terms. A 2025 investor buying QTUM is pricing in breakthroughs that the engineers building these machines say are still a decade away. That is not cautious optimism. It is a timeline mismatch, and timeline mismatches in aerospace have a name: they are called schedule slips, and they are expensive.
Seth Earley, who studies enterprise technology adoption, described quantum as stuck in the "pilot that works in a controlled lab" phase, facing a pilot-to-production gap that early AI companies also stumbled through. That comparison is instructive, but only partly. AI had a clear near-term application path by the time the money arrived. Quantum still lacks mature software ecosystems, workable data-loading architectures, and enough error-corrected qubits to run Shor's algorithm at meaningful scale. The hardware roadmap is real. The application layer is mostly aspiration.
DARPA Is Asking the Right Question
DARPA expanded its Quantum Benchmarking Initiative this month specifically to separate verifiable progress from vendor claims. That is a bureaucracy admitting publicly that it cannot tell what is real. When the agency funding advanced weapons research needs its own hype detector, the field has a credibility problem.
Here is the fair point for the optimists: classified government research, particularly at NSA, may have advanced quantum capabilities well beyond what the public literature shows. Dr. Daniel Conway has noted that opacity fuels both hype and legitimate skepticism in equal measure. I cannot dismiss that. Some of what looks like a hype gap may be a visibility gap instead.
But here is what I know how to evaluate: public engineering milestones, error rates, qubit coherence times, and algorithm benchmarks. By those metrics, the machines are making real progress. Quantum simulation for chemistry problems looks practical within 5 to 10 years. That is genuinely exciting. The teams doing that work deserve credit for grinding through one of the hardest engineering problems in physics.
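Coherence time is one of those public metrics, and it sets a hard ceiling on circuit depth independent of gate quality: a qubit can only do roughly T2 / t_gate sequential operations before it decoheres. The values below are illustrative round numbers for a superconducting-style device, not a benchmark of any vendor's hardware.

```python
# Ceiling on sequential gate depth from coherence time alone:
# depth ~ T2 / gate time. Values are illustrative round numbers,
# not measurements of any specific system.

def max_sequential_gates(t2_seconds: float, gate_seconds: float) -> int:
    return round(t2_seconds / gate_seconds)

t2 = 100e-6     # assumed coherence time: 100 microseconds
t_gate = 50e-9  # assumed two-qubit gate time: 50 nanoseconds

print(f"coherence-limited depth ~ {max_sequential_gates(t2, t_gate):,} gates")
```

A few thousand operations is enough for the chemistry-simulation work mentioned above, which is exactly why that application looks reachable while cryptographically relevant circuits, billions of operations deep, do not.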
What they do not deserve is a market narrative that skips from "error rates declining" to "will defeat classical computing soon" without acknowledging the enormous unsolved problems sitting between those two points. Investors following the AI-quantum synergy story are essentially buying a rocket that has completed a successful static fire and calling it orbital. The static fire is real progress. Orbit is still a different mission.
Retail investors should wait for IBM's 2027 fault-tolerant milestone before treating QTUM as anything other than a speculative bet. Quantum teams should keep iterating. The physics is not a lie. Only the schedule is.