
Neural Decoders for Universal Quantum Algorithms

Published 14 Sep 2025 in quant-ph (arXiv:2509.11370v1)

Abstract: Fault-tolerant quantum computing demands decoders that are fast, accurate, and adaptable to circuit structure and realistic noise. While machine-learning (ML) decoders have demonstrated impressive performance for quantum memory, their use in algorithmic decoding, where logical gates create complex error correlations, remains limited. We introduce a modular attention-based neural decoder that learns gate-induced correlations and generalizes from training on random circuits to unseen multi-qubit algorithmic workloads. Our decoders achieve fast inference and logical error rates comparable to most-likely-error (MLE) decoders across varied circuit depths and qubit counts. To address realistic noise, we incorporate loss-resolving readout, yielding substantial gains when qubit loss is present. We further show that tailoring the decoder to the structure of the algorithm and decoding only the relevant observables simplifies the decoder design without sacrificing accuracy. We validate our framework on multiple error correction codes, including surface codes and 2D color codes, and demonstrate state-of-the-art performance under circuit-level noise. Finally, we show that the use of attention offers interpretability by identifying the most relevant correlations being tracked by the decoder. By enabling experimental validation of deep-circuit fault-tolerant algorithms and architectures (Bluvstein et al., arXiv:2506.20661, 2025), these results establish neural decoders as practical, versatile, and high-performance tools for quantum computing.
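The abstract describes an attention-based decoder that learns correlations between syndrome measurements. As a rough illustration of what "attention over syndromes" means, the sketch below applies a single untrained self-attention layer to toy syndrome bits and reads out a logical-flip probability. All dimensions, weights, and the sigmoid readout are hypothetical choices for illustration, not the paper's actual modular architecture.

```python
import numpy as np

# Illustrative sketch only: one self-attention layer over syndrome
# "tokens" with a sigmoid readout. Sizes and weights are hypothetical
# and untrained; a real decoder would be trained on circuit-level noise.

rng = np.random.default_rng(0)
n_checks, d = 8, 16                      # toy: 8 stabilizer checks, width 16

emb = rng.normal(size=(2, d))            # embeddings for syndrome bits 0/1
Wq = rng.normal(size=(d, d))             # query projection
Wk = rng.normal(size=(d, d))             # key projection
Wv = rng.normal(size=(d, d))             # value projection
w_out = rng.normal(size=d)               # linear readout weights

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decode(syndrome):
    """Map a binary syndrome vector to an (untrained) logical-flip probability."""
    x = emb[syndrome]                    # (n_checks, d) token embeddings
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d)) # pairwise check-to-check attention
    h = (attn @ v).mean(axis=0)          # pool attended features
    return 1.0 / (1.0 + np.exp(-h @ w_out))

p = decode(rng.integers(0, 2, size=n_checks))
```

The attention matrix `attn` is what lends interpretability: after training, its largest entries indicate which syndrome checks the decoder treats as correlated, echoing the paper's claim that attention identifies the most relevant tracked correlations.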
