Quantinuum Makes Another Milestone on Commercial Quantum Roadmap


Over the past decade, it has been interesting to watch companies push the rock that is quantum computing up the steep hill. As Yuval Boger, chief commercial officer for QuEra, told us this summer, it was only five or six years ago that people were still asking if building a quantum computer was even possible. Now, in 2025, the answer is yes.

Breakthroughs have been made in processor technology, error correction capabilities, and the qubits themselves, from the number that can be run in a system to technologies that make them stable enough to do real work. Most recently, IBM touted a breakthrough in reducing errors by running its error-correction algorithm on AMD FPGAs, another step toward commercializing the powerful systems. Also last month, Google announced a new quantum algorithm run on its Willow chip, demonstrating what the IT giant said is a “verifiable quantum advantage”: performing a task that a classical computer either can’t do at all or can’t do as quickly and efficiently.

Such advancements move the idea of quantum computing as a viable commercial computing platform closer to full reality, according to the companies behind them. D-Wave already has a business with its Advantage2 annealing quantum systems, not only making them available via the cloud but also now selling them to organizations to run on-site.

Quantinuum’s Third Generation

Now Quantinuum steps back onto center stage. The company unveiled the first iteration of its quantum system, H1, in 2020, and within two years it had reached 20 trapped-ion qubits. H2 arrived in 2023, and a year later the company upgraded it from 32 to 56 qubits and demonstrated what executives said was quantum advantage with the Random Circuit Sampling (RCS) algorithm.

Quantinuum, which is controlled by multinational Honeywell, this week is introducing Helios, the successor to H2 that includes 98 fully connected physical trapped-ion qubits, fidelity of more than 99.9%, a new real-time control engine, and a Python-based programming language – Guppy – that will make it possible for developers to create programs that include both quantum and classical capabilities. Organizations can now access Helios either through Quantinuum’s cloud or by bringing it on premises. There also is an option to integrate Nvidia’s GB200 accelerators through the GPU maker’s NVQLink, a technology Nvidia introduced two weeks ago to create a direct link from quantum processors to its GPUs.

The new design and software stack will make programming for quantum computing almost as easy as classical computing, which the company expects will grow commercial adoption of the technology. Executives pointed to the two-month period leading up to the system’s debut, when companies like SoftBank and JPMorgan Chase used it to run commercial research initiatives. In addition, Helios can run large-scale simulations in high-temperature superconductivity and quantum magnetism, other steps on the path to commercial applications.

Other companies already working with Helios include Amgen for biologics research, BlueQubit for AI image recognition, and BMW Group for materials research on fuel cell catalysts.

“Right before Helios came out, there were very few machines out there that could do anything that was very interesting,” David Hayes, director of computational theory and design, tells The Next Platform. “H2 was one of those machines. H2 has a couple of quantum advantage claims in quantum simulation, but H2 was just on the edge of quantum advantage. We do think it got there, but it was close. But now Helios has almost two times the number of qubits.”

Who Has Quantum Advantage?

While Helios has twice the qubits of H2, Hayes notes that computational power grows exponentially with qubit count, so doubling the qubits delivers far more than double the capability.
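
That exponential scaling is easy to state as arithmetic. The short sketch below (plain Python, purely illustrative) compares the dimension of the state space that 56 and 98 qubits span; it says nothing about what either machine can usefully compute, only why doubling the qubits is not a doubling of capability.

```python
# Illustrative arithmetic only: an n-qubit register spans a 2**n-dimensional
# state space, so going from H2's 56 qubits to Helios's 98 qubits multiplies
# that space by 2**42, not by a factor of two.
def state_space_dimension(num_qubits: int) -> int:
    return 2 ** num_qubits

h2_dim = state_space_dimension(56)
helios_dim = state_space_dimension(98)

print(f"H2 (56 qubits):     {h2_dim:.3e} basis states")
print(f"Helios (98 qubits): {helios_dim:.3e} basis states")
print(f"Growth factor: 2**42 = {helios_dim // h2_dim:,}")
```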

“Helios is pretty firmly into the quantum advantage regime,” he says. “There’s going to be a lot more science you can do with these machines, like quantum simulation-type science, like the superconductor problem … Helios can really make a dent in what’s possible. Now, looking to the future, I think Helios is going to do some really interesting work in error correction and set new standards there for what can be done. It’s very early, but I think it’s going to happen. We already have some results and the machine’s only two months old.”

This has been a busy fall for Quantinuum, which closed an oversubscribed $800 million funding round, with Fidelity International this week joining in, raising the company’s valuation to $10 billion. Also, with the Helios introduction, Quantinuum announced a strategic partnership with Singapore’s National Quantum Office that will see the two work to raise the Asian nation’s standing as a global quantum computing hub and drive innovation in such industries as finance and pharmaceuticals.

Quantinuum also will install a Helios system in Singapore that will be completed next year – in the meantime, the researchers will have access to Helios via the cloud – and will create a new R&D and operations center in the country.

The Technology Under The Hood

Along with the 98 fully connected physical qubits, Helios also includes 48 error-corrected logical qubits, a roughly 2:1 physical-to-logical ratio, which Hayes says few people thought was possible, adding that “it’s early days, but we’re already getting pretty good numbers out of the machine for this error correction code.”
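
The 2:1 figure is simply the physical-to-logical overhead of the encoding; a quick back-of-the-envelope calculation (illustrative only) makes the ratio explicit.

```python
# Back-of-the-envelope overhead for the logical encoding described above.
physical_qubits = 98
logical_qubits = 48

overhead = physical_qubits / logical_qubits
print(f"Physical qubits per logical qubit: {overhead:.2f}")  # ~2.04
```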

Another interesting note is Quantinuum’s introduction of a “junction” in the quantum processor that improves routing and reliability and will allow the company to scale its Quantum Charge-Coupled Device (QCCD) architecture, with plans to create systems that include hundreds of junctions arranged to resemble a city street grid.

Its QPU design looks something like a key and works something like a spinning hard drive, sending qubits around a storage ring, through the junction, and into a cache. From there, they are moved to logical zones for gating and then back to storage as the next group of qubits is processed. Operations within the chip can run in parallel.
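
To make that flow easier to picture, here is a toy model of the shuttling pipeline described above, written in plain Python. The zone names, sizes, and round-robin dispatch policy are assumptions for illustration only, not Quantinuum's actual design.

```python
from collections import deque

# Toy, purely illustrative model of the described flow:
# storage ring -> junction -> cache -> gate zones -> back to the ring.
# Zone counts, cache size, and the round-robin policy are assumptions.
class ToyQccdChip:
    def __init__(self, ion_ids, num_gate_zones=4, cache_size=8):
        self.storage_ring = deque(ion_ids)   # ions circulating like a spinning disk
        self.cache = []                      # staging area fed by the junction
        self.gate_zones = [[] for _ in range(num_gate_zones)]
        self.cache_size = cache_size

    def rotate_through_junction(self):
        """Pull ions off the ring, through the junction, into the cache."""
        while self.storage_ring and len(self.cache) < self.cache_size:
            self.cache.append(self.storage_ring.popleft())

    def dispatch_to_gate_zones(self):
        """Spread cached ions across gate zones so operations can run in parallel."""
        for i, ion in enumerate(self.cache):
            self.gate_zones[i % len(self.gate_zones)].append(ion)
        self.cache.clear()

    def return_to_ring(self):
        """After gating, send ions back to storage while the next batch is staged."""
        for zone in self.gate_zones:
            self.storage_ring.extend(zone)
            zone.clear()

chip = ToyQccdChip(ion_ids=list(range(98)))
chip.rotate_through_junction()
chip.dispatch_to_gate_zones()
print([len(zone) for zone in chip.gate_zones])  # e.g. [2, 2, 2, 2]
chip.return_to_ring()
```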

“With H1 or H2, you had to kind of swap the qubits past each other,” Hayes said. “You could still get all to all connectivity, but it was a lot slower. Now we’ve kind of mastered this junction, which took a long time. The ion trap community was working on it for twenty years before it was working and our group is probably the only one that has one that’s working well enough to put into a commercial quantum computer. We can take that design and kind of stamp it out in a whole big chip and get a whole bunch of these junctions working together so that you can re-sort a bunch of ions really quickly and start making these computers quite a bit bigger. The forecast for scaling these machines just got a lot brighter with the successful integration of this junction.”

[Image: 98 single barium atoms – atomic ions – used for computation inside Helios.]

The Guppy language was developed to make programming in a fault-tolerant environment easier, he says. The Quantinuum developers made it look like Python, the most commonly used language, because it is easy to understand and program in. But Python is also slow, so Guppy only resembles Python on the surface.

“It’s not actually Python,” Hayes says. “Under the hood, it’s really performant, and it’s more like Rust. It’s just kind of a modern C++. With that, they can make these high-level constructs like ‘if’ and ‘for’ loops that are kind of taken for granted by classical computing software developers but are actually really hard in quantum computing. To act on an ‘if’ statement, your compiler has to think really fast and see if the ‘if’ is satisfied and then decide what to do. But in a quantum computer, you can’t wait around because your qubits might decohere or something like that, so all this stuff has to be done lightning fast and Guppy does all this.”
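
To make that point concrete, here is a rough sketch of what such a measurement-conditioned branch might look like in a Guppy program. The module path and function names below (guppylang, guppy, qubit, h, cx, x, measure) are assumptions based on Guppy being described as a Python-embedded language; the shipping API may differ, so treat this as an illustration of the pattern rather than verified Quantinuum code.

```python
# Hedged sketch: module paths and function names are assumed, not taken from
# Quantinuum's documentation, and may not match the real Guppy API.
from guppylang import guppy
from guppylang.std.quantum import qubit, h, cx, x, measure

@guppy
def conditional_correction() -> bool:
    # Prepare an entangled pair and measure one half mid-circuit.
    a, b = qubit(), qubit()
    h(a)
    cx(a, b)
    outcome = measure(a)
    # This branch has to be resolved in real time, before qubit b decoheres;
    # compiling it down to the control engine is the kind of work Guppy handles.
    if outcome:
        x(b)
    return measure(b)
```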

Guppy also will be used by Quantinuum in Helios and future systems – the company’s roadmap shows “Sol” for 2027 and “Apollo” two years later – along with Nvidia’s CUDA-Q open source quantum development platform for real-time error correction. Hayes said the work could be done on FPGAs and ASICs, but GPUs offer better performance and flexibility, and users “can program any decoder or error correction you want in the machine and mix and match applications with different codes.”
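
At its core, a decoder is just a fast classical function from measured syndromes to corrections. As a plain-Python illustration (not CUDA-Q, NVQLink, or Quantinuum's decoder), here is a minimal lookup-table decoder for the three-qubit bit-flip repetition code.

```python
# Plain-Python illustration of a lookup-table decoder for the 3-qubit bit-flip
# repetition code. In a deployed system this mapping would run on a GPU (or
# FPGA/ASIC) inside the qubits' coherence budget; nothing here is CUDA-Q- or
# Quantinuum-specific.
# Syndrome bits: s0 = parity of data qubits (0, 1), s1 = parity of data qubits (1, 2).
SYNDROME_TO_CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip data qubit 0
    (1, 1): 1,     # flip data qubit 1
    (0, 1): 2,     # flip data qubit 2
}

def decode(syndrome: tuple[int, int]) -> int | None:
    """Return the index of the data qubit to flip back, or None if no correction is needed."""
    return SYNDROME_TO_CORRECTION[syndrome]

print(decode((1, 1)))  # -> 1
```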
