Fault-tolerant quantum computers with millions of qubits promise to revolutionize science and industry. The challenge, however, lies in making their building blocks, the qubits, operate accurately enough to avoid errors. Today's physical qubits have error rates of roughly 1 in 1,000 to 1 in 10,000, while tackling industrially relevant problems is expected to require error rates somewhere between 1 in 10⁶ and 1 in 10⁹. The question is: how can we accomplish this monumental task?
Despite the difficulty of squeezing such high levels of performance out of Google's current physical qubits, the Google Quantum AI team has developed a roadmap that has guided its research in recent years. The approach builds up the company's capabilities step by step toward the goal of a fault-tolerant quantum computer.
Then came exciting news in the field of quantum computing! In a recent publication in Nature titled “Suppressing Quantum Errors by Scaling a Surface Code Logical Qubit,” Google shared its latest breakthrough in achieving the second milestone in its roadmap. The Google Quantum AI team has successfully demonstrated a prototype of a logical qubit, which represents the fundamental unit of an error-corrected quantum computer. Through these experimental results, the research team has shown that the performance of this prototype approaches the range required for scalable, fault-tolerant quantum computing.
From Physical Qubits to Logical Qubits
Quantum Error Correction (QEC) marks a significant departure from the way today's quantum computers operate, in which each physical qubit serves directly as a computational unit. QEC instead offers a path to lower error rates by combining many physical qubits into a single, more reliable logical qubit. By encoding information redundantly across multiple physical qubits, the resulting logical qubit becomes robust enough to run large-scale quantum algorithms. Moreover, as more physical qubits are used to construct a logical qubit, its performance and reliability can be further enhanced.
Despite the promise of QEC, there is a major hurdle to overcome: if the errors introduced by each additional physical qubit outweigh the benefits of error correction, QEC will not be effective. Until now, this has been a significant barrier to progress in the field of quantum computing, given the high error rates of physical qubits.
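To make the break-even idea concrete, here is a small toy model: a 3-qubit bit-flip repetition code decoded by majority vote, simulated in Python. This is not the surface code used in the paper, only an illustration of the point above, that encoding lowers the error rate only when the physical error rate is small enough.

```python
# Toy model only: a 3-qubit bit-flip repetition code decoded by majority
# vote. It is not the surface code from the paper, but it illustrates the
# break-even point discussed above: encoding helps only when the physical
# error rate p is low enough.
import random

def logical_error_rate(p: float, trials: int = 200_000) -> float:
    """Estimate how often majority voting over 3 noisy copies fails."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:  # two or more bit flips defeat the majority vote
            failures += 1
    return failures / trials

if __name__ == "__main__":
    for p in (0.01, 0.10, 0.30, 0.60):
        pl = logical_error_rate(p)
        verdict = "encoding helps" if pl < p else "encoding hurts"
        print(f"physical error {p:.2f} -> logical error ~{pl:.4f} ({verdict})")
```

For this idealized code the crossover sits at a physical error rate of 0.5; for real codes such as the surface code, where the extra qubits and repeated syndrome measurements add noise of their own, the break-even point is far lower, on the order of 1%, which is why the physical error rates quoted earlier matter so much.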
“It is really a full system problem and you will be limited by the weakest link. We need high-quality, coherent qubits (a lot of them), good connectivity, good yield, and demonstrate low-error operations — that is quite a task. After our first milestone, we came up with this roadmap to strategically figure out how to tackle them (issues). We communicate this roadmap more broadly so that others in the industry know how to align their development roadmap,” said Dr. Julian Kelly, Director of Quantum Hardware at Google Quantum AI. He added that what makes the work harder is that so many new technologies now need to work together at the same time.
To address this issue, the Google Quantum AI team used a particular error-correcting code known as the surface code and demonstrated, for the first time, that increasing the size of the code decreases the error rate of the logical qubit. The achievement required carefully addressing a multitude of error sources as the code was scaled from 17 to 49 physical qubits. The findings show that, with sufficient care and attention to detail, it is possible to produce the logical qubits needed for large-scale, error-corrected quantum computing.
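For readers who want the bookkeeping behind the "17 to 49" figure, the sketch below (Python, illustrative only) computes the qubit count of a distance-d surface code and applies a commonly used rule-of-thumb model for how the logical error rate is suppressed as the distance grows. The threshold p_th and prefactor a are placeholder assumptions, not numbers reported in the paper.

```python
# Illustrative sketch only: qubit counts for a distance-d surface code and a
# commonly used rule-of-thumb for logical-error suppression below threshold.
# The threshold p_th and prefactor a are placeholder assumptions, not values
# reported in the Nature paper.

def surface_code_qubits(d: int) -> int:
    """Total qubits in a distance-d (rotated) surface code:
    d*d data qubits plus d*d - 1 measure qubits."""
    return 2 * d * d - 1

def logical_error_per_cycle(p: float, p_th: float, d: int, a: float = 0.1) -> float:
    """Rule-of-thumb model: eps_d ~ a * (p / p_th) ** ((d + 1) / 2).
    Below threshold (p < p_th) the logical error shrinks exponentially
    with distance; above threshold, larger codes only make things worse."""
    return a * (p / p_th) ** ((d + 1) / 2)

if __name__ == "__main__":
    assert surface_code_qubits(3) == 17 and surface_code_qubits(5) == 49
    for d in (3, 5, 7):
        eps = logical_error_per_cycle(p=0.005, p_th=0.01, d=d)
        print(f"distance {d}: {surface_code_qubits(d):3d} physical qubits, "
              f"logical error/cycle ~ {eps:.1e}")
```

Under this model, each two-step increase in code distance suppresses the logical error by the same factor, provided the underlying hardware stays below threshold; demonstrating that crossover in practice is exactly what scaling from the 17-qubit to the 49-qubit code was meant to test.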
QEC with Surface Codes