In a landmark advance for quantum computing, researchers at the Institute of Science Tokyo have unveiled a new class of quantum low-density parity-check (LDPC) error-correction codes that promise to scale quantum systems to unprecedented size and reliability. Achieving performance that approaches the theoretical hashing bound, these codes represent a major step in the pursuit of fault-tolerant quantum computers capable of handling hundreds of thousands of logical qubits efficiently.
Quantum computing has long been heralded as the next frontier for computational power, aiming to solve problems far beyond the reach of classical machines. Yet, despite impressive progress in manipulating quantum bits, or qubits, current devices grapple with formidable challenges. Quantum information is extraordinarily delicate, susceptible to decoherence and errors from environmental noise and operational imperfections. As the number of qubits increases, error rates typically escalate, severely limiting practical applications that require millions of qubits for meaningful simulation tasks in quantum chemistry, cryptography, and optimization.
Overcoming these hurdles requires sophisticated quantum error-correction schemes. Unlike classical bits, qubits can suffer from both bit-flip and phase-flip errors, complicating correction efforts. Traditional methods rely heavily on codes with near-zero coding rates, meaning vast numbers of physical qubits are needed to encode only a handful of reliable logical qubits. This inefficiency has long been a bottleneck in scaling quantum processors to sizes necessary for practical computation.
The engineering challenge of stabilizing and controlling large numbers of qubits is exacerbated by short coherence times, noisy gate operations, limited qubit connectivity, and the extreme cooling requirements of many quantum hardware platforms. Even if these hardware issues were mitigated in a hypothetical ideal machine, the field has faced a fundamental theoretical impasse: existing quantum error-correcting codes have not combined sharp threshold behavior with high coding rates, the pairing that would unlock improved performance as system size grows.
Enter the novel approach developed by Associate Professor Kenta Kasai and his student Daiki Kawamoto at the Institute of Science Tokyo. Leveraging insights from classical information theory, they constructed protograph LDPC codes defined over non-binary finite fields, a departure from conventional binary quantum LDPC codes. This structural choice packs more information into each qubit and improves decoding by avoiding the short cycles in the code's Tanner graph that commonly degrade error correction in traditional designs.
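To make the short-cycle point concrete, the sketch below builds a toy parity-check matrix over GF(4) (the matrix entries are hypothetical labels chosen for illustration, not taken from the paper) and tests it for length-4 cycles, the shortest cycles a Tanner graph can contain. The authors' actual protograph construction over large non-binary fields is far more elaborate; this only illustrates the girth condition such designs must satisfy.

```python
import numpy as np
from itertools import combinations

# Hypothetical toy parity-check matrix over GF(4): entry 0 means "no edge";
# nonzero labels 1..3 stand for the nonzero field elements {1, w, w^2}.
H = np.array([
    [1, 2, 0, 3, 0, 0],
    [0, 1, 3, 0, 2, 0],
    [2, 0, 1, 0, 0, 3],
    [0, 0, 0, 1, 3, 2],
])

def has_4_cycle(H):
    """A length-4 cycle in the Tanner graph is two check rows whose
    supports share at least two variable nodes."""
    support = H != 0
    for r1, r2 in combinations(range(H.shape[0]), 2):
        if np.sum(support[r1] & support[r2]) >= 2:
            return True
    return False

print("contains 4-cycles:", has_4_cycle(H))  # False for this toy matrix
```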
Their method involves transforming these advanced LDPC codes into Calderbank-Shor-Steane (CSS) quantum codes, a well-established family that underpins most quantum error correction systems. This transformation harnesses the superior classical error correction capabilities of LDPC codes within a quantum framework, bridging a critical gap in code design that has limited scalability and performance in past research.
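As a minimal illustration of the CSS recipe, rather than of the paper's specific codes, the following checks the defining condition on the textbook [[7,1,3]] Steane code, which is built from the classical [7,4] Hamming code. The condition Hx · Hz^T = 0 (mod 2) is exactly what guarantees that the X-type and Z-type stabilizers commute.

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code; the Steane code
# uses it for both the X and Z stabilizers.
H_hamming = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
Hx = Hz = H_hamming

# Defining CSS condition: every X stabilizer commutes with every Z stabilizer.
assert np.all(Hx @ Hz.T % 2 == 0), "stabilizers would not commute"

n = Hx.shape[1]
# Real-valued rank happens to coincide with the GF(2) rank for this matrix.
k = n - np.linalg.matrix_rank(Hx) - np.linalg.matrix_rank(Hz)
print(f"CSS condition holds: [[{n}, {k}]] code")  # [[7, 1]]
```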
Crucially, the team introduced a decoding strategy based on the sum-product algorithm, adapted to quantum systems so that bit-flip (X) and phase-flip (Z) errors are addressed jointly. Prior efforts tended to correct these error types separately, often yielding suboptimal overall error suppression; the integrated approach strengthens the code's robustness against the full spectrum of quantum noise.
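The sketch below shows the message-passing skeleton of a sum-product decoder in its simplest, binary, syndrome-based form. The paper's decoder is the non-binary analogue run jointly over X and Z errors, so treat this as an illustration of the algorithm's structure under simplified assumptions, not the authors' implementation.

```python
import numpy as np

def sum_product_decode(H, syndrome, p, max_iters=50):
    """Minimal syndrome-based sum-product decoder for a binary parity-check
    matrix H (dense arrays for clarity). Estimates an error vector e_hat
    satisfying H @ e_hat = syndrome (mod 2)."""
    m, n = H.shape
    llr0 = np.log((1 - p) / p)            # prior log-likelihood ratio per bit
    msg_vc = np.where(H == 1, llr0, 0.0)  # variable-to-check messages
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iters):
        # Check-to-variable update (tanh rule); a syndrome bit of 1 flips the sign.
        t = np.tanh(np.clip(msg_vc, -30, 30) / 2.0)
        t[H == 0] = 1.0
        ext = np.prod(t, axis=1, keepdims=True) / t  # product excluding own edge
        sign = np.where(syndrome == 1, -1.0, 1.0)[:, None]
        msg_cv = sign * 2.0 * np.arctanh(np.clip(ext, -1 + 1e-12, 1 - 1e-12))
        msg_cv[H == 0] = 0.0
        # Variable-to-check update and tentative hard decision.
        total = llr0 + msg_cv.sum(axis=0)
        msg_vc = np.where(H == 1, total[None, :] - msg_cv, 0.0)
        e_hat = (total < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, syndrome):  # syndrome matched: stop
            break
    return e_hat

# Quick check on the [7,4] Hamming parity-check matrix from above.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
e = np.zeros(7, dtype=int); e[4] = 1           # a single bit-flip
print(sum_product_decode(H, H @ e % 2, 0.05))  # recovers e
```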
Extensive numerical simulations validated their theoretical constructs, revealing frame error rates as low as 10⁻⁴ even when scaling codes to hundreds of thousands of qubits. Such performance is remarkably close to the hashing bound, the ultimate benchmark for quantum error correction determined by information theory. Moreover, the decoding process exhibits computational complexity that scales linearly with the number of physical qubits, a pivotal feature that offers practical feasibility for real-world quantum computing implementations.
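Frame error rates of this kind are typically estimated by Monte Carlo simulation. The schematic below, reusing the binary decoder sketched earlier and assuming i.i.d. bit-flips rather than the paper's noise model, shows the structure of such an experiment:

```python
import numpy as np

def frame_error_rate(H, p, trials=10_000, seed=0):
    """Monte Carlo estimate of the frame error rate: sample an i.i.d. error
    pattern, decode from its syndrome, and count frames where the decoder
    fails to reproduce the error exactly. (For a CSS quantum code one would
    count failures only up to stabilizer equivalence; exact match is a
    conservative stand-in for this sketch.)"""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(trials):
        e = (rng.random(H.shape[1]) < p).astype(int)
        e_hat = sum_product_decode(H, H @ e % 2, p)
        failures += not np.array_equal(e_hat, e)
    return failures / trials
```

Each decoding call in this loop costs time proportional to the number of Tanner-graph edges, which for LDPC codes grows linearly with the number of physical qubits; that is the linear-complexity property highlighted above.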
This work marks a significant paradigm shift, showing a way past the resource-intensive regimes that have historically precluded large-scale quantum computation. By raising code rates above 50% while keeping decoding efficient at scale, these LDPC quantum codes open pathways to quantum systems with millions of logical qubits, a scale deemed necessary for breakthroughs in quantum simulation, secure communication, and advanced optimization.
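For context on the rate figure: the hashing bound for the depolarizing channel is R = 1 − H₂(p) − p·log₂3, where H₂ is the binary entropy and p the total error probability. The short computation below, an illustrative calculation rather than a result from the paper, finds the noise level at which a rate-1/2 code is information-theoretically possible at all:

```python
import numpy as np

def hashing_bound_rate(p):
    """Achievable rate on the depolarizing channel with total error
    probability p (X, Y, Z each occurring with probability p/3)."""
    if p <= 0:
        return 1.0
    h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - h2 - p * np.log2(3)

# Bisection for the largest depolarizing probability at which rate 1/2
# remains below the hashing bound (the rate is decreasing in p here).
lo, hi = 1e-6, 0.25
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if hashing_bound_rate(mid) > 0.5 else (lo, mid)
print(f"rate-1/2 hashing limit: p ~ {lo:.4f}")  # roughly 0.074
```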
Professor Kasai underscores the implications, emphasizing that this breakthrough paves the way for practical, fault-tolerant quantum architectures. It not only enhances the reliability of qubits over extended computation periods but also fundamentally changes the economic and engineering calculus of quantum device fabrication and operation. This development could compress timelines toward viable quantum advantage in scientific and industrial domains.
Beyond addressing pivotal theoretical challenges, the study also invigorates the quest for improved quantum hardware by linking advanced error correction to scalable device engineering requirements. With more efficient codes, demands on coherence times and gate fidelities could be relaxed, potentially speeding up the integration of quantum processors into practical systems.
The research, published in the journal npj Quantum Information, showcases the promise of integrating classical coding theory with quantum mechanics to surmount longstanding barriers in quantum error correction. It highlights the interdisciplinary nature of quantum technologies, drawing expertise from information theory, quantum physics, and computational science to realize novel solutions for the next generation of computational machines.
As the quantum computing ecosystem evolves, these new LDPC quantum error correction codes underscore the critical importance of algorithmic and code-based innovations alongside hardware advancements. The team’s findings deliver a roadmap for tackling the intertwined challenges of noise, scale, and computational overhead, propelling the field closer to achieving reliable, large-scale quantum information processing.
This breakthrough stands as a testament to the rapidly advancing frontier of quantum science at the Institute of Science Tokyo, a recently formed institution born from the merger of Tokyo Medical and Dental University and Tokyo Institute of Technology. Their commitment to advancing scientific knowledge with societal value is exemplified by this milestone, promising to catalyze future research and applications in quantum computation and beyond.
Subject of Research: Not applicable
Article Title: Quantum Error Correction Near the Coding Theoretical Bound
News Publication Date: 29-Sep-2025
Web References: http://dx.doi.org/10.1038/s41534-025-01090-1
References: Kenta Kasai and Daiki Kawamoto. “Quantum Error Correction Near the Coding Theoretical Bound.” npj Quantum Information, September 29, 2025.
Image Credits: Institute of Science Tokyo, Japan
Keywords: Quantum computing, Applied mathematics, Computational science, Boson sampling, Qubits, Quantum walks, Quantum information, Information science, Quantum information processing, Quantum processors