Exponential Quantum Error Correction — Below Threshold!
Errors are one of the greatest challenges in quantum computing because qubits, the units of computation in quantum computers, tend to rapidly exchange information with their environment, making it difficult to protect the information needed to carry out a calculation. Typically, the more qubits you use, the more errors occur, and the system becomes classical.
Today in Nature, we published results showing that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes. We tested ever-larger arrays of physical qubits, scaling up from a 3×3 grid of encoded qubits to a 5×5 grid, then to a 7×7 grid, and each time, thanks to our latest advances in quantum error correction, we were able to cut the error rate in half. In other words, we achieved an exponential reduction in the error rate. This landmark achievement is known in the field as being “below threshold”: driving errors down while scaling up the number of qubits. Demonstrating that you are below threshold is necessary to show real progress on error correction, and this has been an outstanding challenge since quantum error correction was introduced by Peter Shor in 1995.
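To make that scaling concrete, here is a minimal numeric sketch of what “below threshold” implies, assuming, as stated above, that the logical error rate halves with each step in code distance (3×3 → 5×5 → 7×7). The suppression factor and starting error rate used here are illustrative placeholders, not measured Willow values.

```python
# Sketch of below-threshold scaling: each increase in code distance
# divides the logical error rate by a constant suppression factor.
# LAMBDA and eps_d3 are assumed, illustrative numbers.

LAMBDA = 2.0    # error-suppression factor per distance step (assumed)
eps_d3 = 3e-3   # logical error rate per cycle at distance 3 (assumed)

for distance in (3, 5, 7, 9, 11):
    steps = (distance - 3) // 2      # distance increases beyond d=3
    eps = eps_d3 / LAMBDA**steps     # exponential suppression below threshold
    print(f"d={distance:2d} ({distance}x{distance} grid): "
          f"error rate ≈ {eps:.1e} per cycle")
```

Because the error rate shrinks geometrically while the qubit count grows only quadratically with distance, adding qubits is a winning trade once the hardware is below threshold.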
There are other scientific “firsts” involved in this result as well. For example, it is also one of the first compelling examples of real-time error correction on a superconducting quantum system, which is crucial for any useful computation: if you can’t correct errors fast enough, they ruin your calculation before it is finished. And it is a “beyond breakeven” demonstration, in which our arrays of qubits have longer lifetimes than the individual physical qubits do, an unfakeable sign that error correction improves the system as a whole.
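The “beyond breakeven” criterion boils down to a simple comparison, sketched below with hypothetical lifetimes (not Willow measurements): the encoded logical qubit must hold information longer than the best single physical qubit on the chip.

```python
# Hedged sketch of the "beyond breakeven" test described above.
# Both lifetimes are assumed, illustrative values in microseconds.

best_physical_lifetime_us = 85.0   # best physical-qubit lifetime (assumed)
logical_lifetime_us = 190.0        # encoded logical-qubit lifetime (assumed)

ratio = logical_lifetime_us / best_physical_lifetime_us
verdict = "beyond breakeven" if ratio > 1.0 else "below breakeven"
print(f"logical/physical lifetime ratio = {ratio:.2f} -> {verdict}")
```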
As the first system below threshold, this is the most convincing prototype of a scalable logical qubit built to date. It is a strong sign that useful, very large quantum computers can indeed be built. Willow brings us closer to running practical, commercially relevant algorithms that cannot be replicated on conventional computers.
10 septillion years on one of today’s fastest supercomputers
To measure Willow’s performance, we used the random circuit sampling (RCS) benchmark. Pioneered by our team and now widely used as a standard in the field, RCS is the classically hardest benchmark that can be performed on a quantum computer today. You can think of it as an entry point for quantum computing: it checks whether a quantum computer is doing something that could not be done on a classical computer. Any team building a quantum computer should first check whether it can beat classical computers on RCS; otherwise, there is strong reason to be skeptical that it can tackle more complex quantum tasks. We have consistently used this benchmark to assess progress from one generation of chips to the next; we reported Sycamore results in October 2019 and again recently in October 2024.
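RCS experiments are commonly graded with a linear cross-entropy benchmarking (XEB) score, which rewards samplers whose output bitstrings concentrate on the high-probability outputs of the ideal circuit. Below is a self-contained sketch of that scoring idea at a toy scale; the Porter-Thomas-like “ideal” distribution is a stand-in for a classically simulated random circuit, not Willow’s actual pipeline.

```python
import numpy as np

# Toy sketch of the linear XEB score used to grade random circuit sampling:
#   F_XEB = 2^n * mean(p_ideal(sampled bitstrings)) - 1
# A sampler matching the ideal circuit scores ~1; uniform noise scores ~0.

rng = np.random.default_rng(0)
n = 10          # number of qubits (small enough to simulate exactly)
dim = 2**n

# Stand-in for a random circuit's ideal output distribution
# (complex-Gaussian amplitudes give the Porter-Thomas statistics of RCS).
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(amps)**2
p_ideal /= p_ideal.sum()

def xeb(samples: np.ndarray) -> float:
    """Linear XEB fidelity estimate from sampled bitstring indices."""
    return dim * p_ideal[samples].mean() - 1.0

good = rng.choice(dim, size=100_000, p=p_ideal)   # follows the ideal circuit
noise = rng.integers(dim, size=100_000)           # fully depolarized sampler

print(f"ideal sampler   XEB ≈ {xeb(good):.3f}")   # close to 1
print(f"uniform sampler XEB ≈ {xeb(noise):.3f}")  # close to 0
```

The reason RCS is a meaningful entry point is that computing p_ideal for the circuit sizes a real chip runs requires simulating the full quantum state, which is exactly the task that becomes classically intractable as qubit counts grow.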
Willow’s performance on this benchmark is astonishing: it performed a computation in under five minutes that would take one of today’s fastest supercomputers 10²⁵, or 10 septillion, years. If you want to write it out, that’s 10,000,000,000,000,000,000,000,000 years. This staggering number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
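For scale, here is the back-of-the-envelope arithmetic behind that separation, simply restating the two figures quoted above (five minutes versus an estimated 10²⁵ years).

```python
# Arithmetic restating the quoted figures: quantum runtime vs. the
# estimated classical runtime for the same random circuit sampling task.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
classical_seconds = 1e25 * SECONDS_PER_YEAR   # quoted classical estimate
quantum_seconds = 5 * 60                      # quoted quantum runtime

print(f"classical estimate : {classical_seconds:.2e} s")
print(f"quantum runtime    : {quantum_seconds} s")
print(f"separation factor  : {classical_seconds / quantum_seconds:.1e}x")
# For comparison, the age of the universe is ~1.4e10 years (~4.4e17 s).
```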
These latest results for Willow, as shown in the chart below, are our best yet, but we will continue to improve.