When talking about quantum computers, the same underlying tension keeps resurfacing: should we aim directly for the perfect machine, or try to extract value as early as possible from imperfect devices?
The article “More than one way to skin Schrödinger’s cat,” published in IEEE Spectrum, captures this debate particularly well, showing that there is currently no single, universally accepted path toward large-scale quantum computing.
This is not just a technical disagreement. It reflects two different mindsets: one deeply rooted in device physics, the other focused on computation and practical use.
The “levels” approach: neat, but controversial
Microsoft proposes a structured, level-based view of quantum computing development, with a clear final goal: a fully error-corrected, universal, programmable quantum computer.
It is a clean and reassuring roadmap, very much aligned with how we usually think about progress in classical computing.
The implicit message, however, is strong: until full error correction is achieved, we are not really dealing with “useful” quantum computers, only with intermediate prototypes.
Not everyone in the industry agrees with this framing.
IBM’s perspective: fewer labels, more practical questions
Jerry Chow from IBM challenges this level-based narrative. In his view, it is too focused on device physics and not enough on computation itself.
The key question should not be “what level are we at?”, but rather:
“what can we actually do today with the quantum circuits we already have?”
IBM does not deny that a fully fault-tolerant machine is the ultimate goal. What it rejects is the idea that everything before that point is merely a waiting phase. The approach is deliberately pragmatic:
• identify real use cases for noisy but controllable devices
• rely on error suppression techniques rather than full error correction
• gain hands-on experience with algorithms, software, and hybrid classical–quantum workflows
It is not about lowering ambitions, but about reaching them step by step.
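To make the error-suppression point concrete, here is a minimal sketch of zero-noise extrapolation, one widely used error-mitigation technique for noisy devices. The article does not name a specific method, and the measured values below are invented for illustration; the idea is simply to run the same circuit at artificially amplified noise levels and extrapolate the result back to zero noise.

```python
# Hedged sketch of zero-noise extrapolation (ZNE), a common error-mitigation
# technique. The numbers are illustrative, not taken from any real device.

def zne_linear(scales, values):
    """Linear extrapolation of a noisy expectation value to zero noise.

    `scales` are noise-amplification factors (1x, 2x, ...),
    `values` the expectation values measured at those factors.
    Returns the least-squares intercept, i.e. the scale -> 0 estimate.
    """
    n = len(scales)
    mean_s = sum(scales) / n
    mean_v = sum(values) / n
    slope = sum((s - mean_s) * (v - mean_v) for s, v in zip(scales, values)) / \
        sum((s - mean_s) ** 2 for s in scales)
    return mean_v - slope * mean_s

# Expectation of some observable measured at noise levels 1x, 2x, 3x:
estimate = zne_linear([1, 2, 3], [0.80, 0.65, 0.50])
print(estimate)  # extrapolates to ~0.95, closer to the ideal noiseless value
```

The appeal of this family of techniques is exactly what IBM emphasizes: no extra qubits are required, only extra circuit runs and classical post-processing.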
The real cost of error correction
This discussion connects directly with quantum simulation and architectures like Transversal STAR. The main bottleneck is not simply the number of qubits, but the cost of making operations reliable.
Full error correction requires:
• massive qubit overhead
• very long and fragile circuits
• extremely complex control systems
If we decide that quantum computers only become “serious” after all of this is solved, we implicitly accept that practical applications will be delayed for many years.
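To put a rough number on the qubit overhead, here is a back-of-the-envelope sketch using the textbook surface-code layout, in which a distance-d code uses d² data qubits plus d² − 1 ancilla qubits per logical qubit. The target algorithm size and code distance below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope estimate of surface-code overhead.
# Standard layout: a distance-d surface code needs d^2 data qubits
# plus d^2 - 1 ancilla qubits, i.e. 2*d^2 - 1 physical per logical qubit.

def physical_qubits_per_logical(d: int) -> int:
    """Physical qubits for one logical qubit at code distance d."""
    return 2 * d**2 - 1

# Illustrative assumption: an algorithm needing 1,000 logical qubits
# at code distance d = 25 (a plausible distance for deep circuits).
logical_qubits = 1_000
d = 25
total = logical_qubits * physical_qubits_per_logical(d)
print(f"{total:,}")  # 1,249,000 physical qubits
```

Even this crude estimate lands above a million physical qubits, which is why the overhead, rather than raw qubit counts, dominates the discussion.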
Why neutral atoms attract so much attention
This is where neutral-atom platforms, developed by companies such as QuEra Computing and Atom Computing, come into play.
They are not attractive because they are perfect, but because they offer a crucial advantage: scalability.
Justin Ging from Atom Computing puts it bluntly: if there is one word that defines the key benefit of neutral atoms, it is scalability.
Concretely:
• large arrays of atoms can be trapped in a single vacuum chamber
• system geometry is highly flexible
• wiring and cryogenic constraints are far less severe than in other platforms
Both QuEra and Atom Computing openly discuss the possibility of reaching 100,000 atoms in a single device within the next few years. This is not yet a universal, fault-tolerant quantum computer, but it is already a scale that becomes scientifically interesting.
Not one path, but many
The central message of the IEEE Spectrum article is clear: there is no single “correct” way to build a useful quantum computer.
• Microsoft emphasizes a structured roadmap toward perfection
• IBM focuses on progressive usefulness
• QuEra and Atom Computing bet on physical scalability
These approaches are not mutually exclusive. The most likely scenario is that the field will advance along multiple parallel paths, with practical results appearing first in specialized domains and only later in fully general-purpose machines.
A familiar historical pattern
For anyone who knows the history of classical computing, this debate sounds very familiar.
Early computers were:
• unreliable
• difficult to program
• usable only by experts
Yet they were used anyway, because they enabled things that were previously impossible. Theory and refined engineering followed practical experience, not the other way around.
Demanding a perfect quantum computer before putting it to use would risk ignoring that lesson.
A personal reflection
In my view, the real value of this discussion lies in its growing maturity. There is less hype and more attention to real-world trade-offs. The idea that quantum computing should prove its usefulness before becoming fully fault-tolerant is not a shortcut—it is a historical necessity.
Neutral atoms, with all their limitations, appear to offer one of the most credible bridges across this intermediate phase: not the final destination, but a concrete path from the lab to real scientific applications.
If quantum computing fulfills even part of its long-standing promises, it is unlikely to do so through a single, linear roadmap. More realistically, it will happen by skinning Schrödinger’s cat in more than one way.