On March 13, Great Sky announced a superconducting neuromorphic architecture with chips already taped out, claiming orders-of-magnitude efficiency gains over silicon GPUs. The University of Sydney published results from a photonic chip running neural computations in picoseconds at 90-99% accuracy. Both announcements landed within the same week. That is not a coincidence. That is a field hitting an inflection point.
So let me give you the number that haunts every GPU cluster engineer alive right now: the human brain delivers roughly 1 exaFLOPS on about 20 watts. The Frontier supercomputer, currently ranked second globally, also delivers approximately 1 exaFLOPS. At 21 megawatts. That is roughly a million-fold power gap for equivalent peak compute. Intel's Loihi 2 and IBM's TrueNorth have already demonstrated 1,000x better efficiency than conventional processors on cognitive tasks. The physics is not speculation. It is measurement.
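To make the gap concrete, here is the back-of-envelope arithmetic as a short Python sketch. The brain's exaFLOPS figure is itself a rough estimate and the Frontier numbers are approximate listing values, so treat the output as an order-of-magnitude comparison rather than a measurement.

```python
# Back-of-envelope compute efficiency comparison (FLOPS per watt).
# All inputs are the approximate figures quoted above.

EXA = 1e18

brain_flops = 1 * EXA      # ~1 exaFLOPS, rough estimate
brain_watts = 20           # ~20 W metabolic budget

frontier_flops = 1 * EXA   # ~1 exaFLOPS sustained
frontier_watts = 21e6      # ~21 MW

brain_eff = brain_flops / brain_watts          # FLOPS per watt
frontier_eff = frontier_flops / frontier_watts

print(f"Brain:    {brain_eff:.2e} FLOPS/W")
print(f"Frontier: {frontier_eff:.2e} FLOPS/W")
print(f"Gap:      {brain_eff / frontier_eff:,.0f}x")   # ~1,050,000x
```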
The Part Nobody Wants to Say Out Loud
No neuromorphic system has operated at supercomputer scale. Not one. Every efficiency number you see applies to prototype chips running constrained workloads, not to the full system stack that includes cryogenic cooling, optical interconnects, networking overhead, and power distribution. Great Sky's architecture requires cryogenic operation. That adds real engineering complexity and real energy cost. The Sydney photonics work is brilliant, but it classified MRI images, not trained a 400-billion-parameter language model.
Critics who point this out are not wrong. Prototype efficiency and system-level efficiency are genuinely different things, and the history of computing is full of architectures that looked miraculous in a lab and disappeared before production. I take that seriously.
But here is what the skeptics are missing: the urgency of the problem is now forcing the engineering. Morgan Stanley analysts estimate AI demand could create a 13-gigawatt U.S. power deficit by 2028. A March 2026 GCAM model projects U.S. AI data center electricity consumption hitting 420 terawatt-hours by 2030, roughly 10% of all U.S. electricity consumption today, and 830 terawatt-hours by 2050. When the grid bill gets that big, the cryogenic cooling problem stops being an academic obstacle and becomes a funded engineering program.
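For a sense of scale, the sketch below puts those projections next to an assumed present-day U.S. consumption of roughly 4,000 TWh per year. That baseline is my assumption rather than a GCAM output, and the 2050 share of the actual 2050 grid will be smaller to the extent total demand grows by then.

```python
# Rough share of U.S. electricity represented by the projected AI
# data-center load. The ~4,000 TWh/year baseline is an assumed
# present-day figure, not an output of the GCAM study.

US_TOTAL_TWH_TODAY = 4_000                 # assumed baseline, TWh/year
ai_demand_twh = {2030: 420, 2050: 830}     # projections cited above

for year, twh in ai_demand_twh.items():
    share = twh / US_TOTAL_TWH_TODAY
    print(f"{year}: {twh} TWh is about {share:.0%} of today's U.S. total")
# Output: 2030 is ~10% of today's total; 2050 is ~21% of today's total.
```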
What the Engineers Need to Prove
The honest engineering question is not whether neuromorphic systems are theoretically efficient. They are. The question is whether the system-level overhead of novel substrates, photonic interconnects, and cryogenic operation consumes the efficiency advantage before it ever reaches the power meter on the outside of the building. We do not have that answer yet. The Great Sky tape-out is a critical milestone because chips in hand are data, and data beats press releases every single time.
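One way to frame that open question is a simple overhead model: divide the claimed chip-level gain by the wall-plug cost of everything wrapped around the chip, starting with the cooling. The sketch below is illustrative only. The 10 kW dissipated at the cold stage and the roughly 500 watts of wall power per watt removed at liquid-helium temperatures are placeholder round numbers for a practical cryocooler, not measurements of Great Sky's system or anyone else's.

```python
# Illustrative model of how system-level overhead erodes a chip-level
# efficiency advantage. Every input is an assumption for the sake of
# the arithmetic, not a measured value for any announced system.

def effective_gain(chip_gain, cold_power_w, cooling_w_per_w, other_overhead_w=0.0):
    """Chip-level gain rescaled by total wall-plug power.

    chip_gain        -- claimed efficiency advantage over a GPU baseline
    cold_power_w     -- power dissipated by the chips at the cold stage (W)
    cooling_w_per_w  -- wall-plug watts needed to remove 1 W of cold-stage heat
    other_overhead_w -- interconnect, networking, power-distribution losses (W)
    """
    wall_power = cold_power_w * (1 + cooling_w_per_w) + other_overhead_w
    return chip_gain * cold_power_w / wall_power

# Assumed: 1,000x chip-level gain, 10 kW dissipated at ~4 K,
# ~500 W of wall power per watt removed at that temperature.
print(f"{effective_gain(1000, 10_000, 500):.1f}x effective gain")  # ~2.0x
```

The whole argument lives in those inputs. Superconducting logic dissipates far less per operation than a GPU, which is how the advantage can survive the cooling multiplier, and exactly why only a full-system measurement settles it.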
What needs to happen now is direct: DARPA, DOE, and the national labs should be funding full-system benchmarks at the 10-to-100 petaFLOP range, not just chip-level demonstrations. The neuromorphic teams at Intel, IBM, Great Sky, and the Sydney photonics group have earned serious resources. Give them a real supercomputer-class workload and measure the full power draw including every watt of overhead. That test either validates the efficiency claims at scale or tells us exactly where the losses are. Either result is useful. Engineers do not fear the test. They run it.
The brain has been doing exascale compute at 20 watts for 300,000 years. The only question worth arguing about is how long it takes us to build something comparable. Given what landed in two research announcements last week, the timeline just got shorter.