Directionally Informed BP Decoding
- The paper introduces directionally informed BP decoding, which assigns orientation weights to Tanner-graph edges in CSS codes to exploit device and noise anisotropies.
- It integrates site-dependent log-likelihood ratios into the existing BP→OSD pipeline without changes to code structure, ensuring modularity in hardware-aware quantum error correction.
- Empirical results indicate up to 100× logical error rate improvement in codes like the toric and planar NE3N through optimal tuning of the bias parameter β.
Directionally informed belief propagation (BP) decoding is a formal and empirically validated framework for quantum Calderbank-Shor-Steane (CSS) codes that leverages anisotropies in device architecture, scheduling, or noise by assigning orientation weights to Tanner-graph edges and feeding site-dependent log-likelihood ratios (LLRs) into standard BP→OSD decoders. The approach, parameterized by a single scalar bias $\beta$, yields significant performance improvements without altering code construction or the underlying decoder implementation, providing an efficient route to hardware-aware quantum error correction (Rowshan, 12 Jan 2026).
1. Directional Annotation, Per-Qubit Weights, and Weighted Metrics
Directionally informed BP begins with a CSS code specified by parity-check matrices $H_X$ and $H_Z$, satisfying $H_X H_Z^\top = 0 \pmod 2$. The corresponding Tanner graphs for the $X$ and $Z$ checks are augmented with nonnegative orientation weights $w^X$ and $w^Z$, supported on their respective edges.
Per-qubit directional weights $w_j$ are obtained via incident-edge summation,
$$w_j = \sum_{c \in \mathcal{N}_X(j)} w^X_{c,j} + \sum_{c \in \mathcal{N}_Z(j)} w^Z_{c,j},$$
where $\mathcal{N}_X(j)$ and $\mathcal{N}_Z(j)$ denote the $X$ and $Z$ checks adjacent to qubit $j$. For an error indicator $e \in \{0,1\}^n$, the directional metric is
$$M_w(e) = \sum_j w_j\, e_j,$$
which generalizes the standard Hamming cost to a weighted form, capturing directional bias inherent in the physical or logical code geometry.
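A minimal sketch of the per-qubit weight computation and the directional metric; the edge-weight data structures and the toy numbers are illustrative assumptions, not from the paper:

```python
# Per-qubit directional weights from Tanner-graph edge weights, and the
# weighted error metric M_w(e). Edge weights are stored as dicts mapping
# (check, qubit) -> nonnegative weight (a hypothetical representation).

def qubit_weights(wx_edges, wz_edges, n_qubits):
    """Sum orientation weights over all edges incident to each qubit."""
    w = [0.0] * n_qubits
    for (_, j), wt in wx_edges.items():
        w[j] += wt
    for (_, j), wt in wz_edges.items():
        w[j] += wt
    return w

def directional_metric(w, e):
    """Weighted Hamming cost M_w(e) = sum_j w_j * e_j for binary error e."""
    return sum(wj for wj, ej in zip(w, e) if ej)

# Toy example: 4 qubits, one X check on qubits 0,1 and one Z check on
# qubits 1,2, with a gradient in the edge weights.
wx = {(0, 0): 1.0, (0, 1): 2.0}
wz = {(0, 1): 2.0, (0, 2): 3.0}
w = qubit_weights(wx, wz, 4)                 # per-qubit directional weights
cost = directional_metric(w, [1, 1, 0, 0])   # cost of an error on qubits 0,1
```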
2. Directional Degeneracy Classes and Their Enumeration
Quantum decoders operate on degeneracy classes, as distinct $Z$-errors may produce the same $X$-syndrome, differing only by stabilizers. The set of degeneracy classes for syndrome $s$ is
$$\mathcal{D}(s) = \{\, e \in \{0,1\}^n : H_X e = s \,\} \,/\, \mathrm{rowspan}(H_Z),$$
i.e., cosets of the row span of $H_Z$ inside the solution set (for $s = 0$, the nullspace of $H_X$). Each class $C$ is assigned its minimal directional cost,
$$\mu(C) = \min_{e \in C} M_w(e).$$
The directional degeneracy enumerator, parameterized by the bias $\beta$, aggregates class scores by weighting each class by a factor decaying in $\beta\,\mu(C)$. For $\beta = 0$, it recovers the standard class count ($2^k$ per sector for $k$ logical qubits). As $\beta$ increases, classes with lower directional cost dominate, concentrating error correction along preferred directions. The enumerator admits analytic tail bounds that quantify how directional metrics thin low-cost degeneracy and enhance logical discrimination in BP decoders.
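A brute-force illustration of degeneracy classes, their minimal directional costs, and a $\beta$-enumerator of the form $N(\beta) = \sum_C e^{-\beta\,\mu(C)}$ (this exponential form, and the toy 4-qubit code, are assumptions for illustration):

```python
from itertools import product
import math

def span_mod2(rows, n):
    """Row span of a binary matrix over F_2 (the stabilizer group it generates)."""
    span = {tuple([0] * n)}
    for row in rows:
        span |= {tuple((a + b) % 2 for a, b in zip(v, row)) for v in span}
    return span

def degeneracy_classes(Hx, Hz, s, n):
    """Group Z-error candidates with X-syndrome s into stabilizer cosets."""
    stab = span_mod2(Hz, n)
    classes = {}
    for e in product((0, 1), repeat=n):
        syn = tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in Hx)
        if syn != tuple(s):
            continue
        rep = min(tuple((a + b) % 2 for a, b in zip(e, v)) for v in stab)
        classes.setdefault(rep, set()).add(e)
    return classes

def enumerator(classes, w, beta):
    """N(beta): sum over classes of exp(-beta * minimal directional cost)."""
    costs = [min(sum(wj for wj, ej in zip(w, e) if ej) for e in cls)
             for cls in classes.values()]
    return sum(math.exp(-beta * c) for c in costs)

# Toy CSS code: Hx = Hz = [1 1 1 1] (so Hx @ Hz^T = 0 mod 2), with k = 2.
Hx = [[1, 1, 1, 1]]
Hz = [[1, 1, 1, 1]]
classes = degeneracy_classes(Hx, Hz, (0,), 4)
# 2^k = 4 degeneracy classes for the trivial syndrome; at beta = 0 the
# enumerator recovers the plain class count, and it shrinks as beta grows.
```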
A global generating function over cosets aggregates the directional costs of all coset elements. A MacWilliams-type identity expresses it via the dual code; this factorization supports gradient evaluation and analytic bounding.
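For orientation, the paper's dual-domain identity is of the same type as the classical binary MacWilliams identity for weight enumerators, shown here for reference (the directional version additionally carries the site weights $w_j$ and the bias $\beta$):

```latex
W_{C^\perp}(x, y) \;=\; \frac{1}{|C|}\, W_C(x + y,\; x - y),
\qquad
W_C(x, y) \;=\; \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)} .
```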
3. Mapping Orientation Weights to Site-Dependent LLRs
In the memoryless channel model, MAP decoding seeks syndrome-consistent error patterns minimizing $\sum_j e_j \log\frac{1 - p_j}{p_j}$. If the true error probabilities $p_j$ are not available, the directional weights act as proxies, tilting a uniform baseline prior $p$ into site-dependent priors $p_j(\beta)$ determined by $\beta w_j$. The parameter $\beta$ modulates the directional bias: $\beta = 0$ yields isotropic priors $p_j = p$; increasing $\beta$ enhances the effect of large $w_j$, selectively steering BP inference toward error patterns aligned with device or noise anisotropies.
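One concrete parameterization consistent with this description, in which the scaled directional weight is added to the uniform baseline LLR (an illustrative form; the paper's exact tilt function is not reproduced here):

```python
import math

def directional_llrs(w, p, beta):
    """Site-dependent LLRs: lambda_j = log((1 - p) / p) + beta * w_j.

    beta = 0 reproduces the isotropic baseline LLR for every qubit;
    beta > 0 penalizes error candidates on qubits with large directional
    weight w_j. (Illustrative parameterization, not the paper's exact one.)
    """
    base = math.log((1 - p) / p)
    return [base + beta * wj for wj in w]

llrs0 = directional_llrs([1.0, 4.0, 3.0], 0.05, 0.0)  # isotropic baseline
llrs1 = directional_llrs([1.0, 4.0, 3.0], 0.05, 2.0)  # tilted by the weights
```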
4. Bounds on Directional Distance and Degeneracy Class Reduction
Directional annotation impacts code distances and class counts. Let $d$ denote the $X$ or $Z$ code distance and $s_{\min}$ the minimal stabilizer weight. With $w_{\min} = \min_j w_j$ and $w_{\max} = \max_j w_j$, the directional distances obey the sandwich bounds
$$w_{\min}\, s_{\min} \le s_w \le w_{\max}\, s_{\min}, \qquad w_{\min}\, d \le d_w \le w_{\max}\, d,$$
for stabilizer and logical operators, respectively. Directionality also reduces the number of eligible degeneracy classes: for a cost threshold $\tau$, the count of classes with $\mu(C) \le \tau$ is exponentially bounded in terms of the code rate $R$, with an exponent that quantifies concentration as the directional bias increases. This reflects how anisotropic annotation "breaks" degeneracy clusters, sharpening logical error selection.
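A quick numerical check of the sandwich bound $w_{\min}\, d \le d_w \le w_{\max}\, d$ on a toy 4-qubit CSS code (the code and the anisotropic weights below are assumptions for illustration):

```python
from itertools import product

def logical_z_ops(Hx, Hz, n):
    """Z-type logicals: commute with all X checks, not in the Z-stabilizer span."""
    span = {tuple([0] * n)}
    for row in Hz:
        span |= {tuple((a + b) % 2 for a, b in zip(v, row)) for v in span}
    ops = []
    for e in product((0, 1), repeat=n):
        if any(sum(h * x for h, x in zip(row, e)) % 2 for row in Hx):
            continue
        if e not in span:
            ops.append(e)
    return ops

Hx = [[1, 1, 1, 1]]
Hz = [[1, 1, 1, 1]]
w = [1.0, 2.0, 3.0, 4.0]          # anisotropic per-qubit weights (assumed)
logicals = logical_z_ops(Hx, Hz, 4)

d = min(sum(e) for e in logicals)                                   # Hamming distance
d_w = min(sum(wj for wj, ej in zip(w, e) if ej) for e in logicals)  # directional distance
assert min(w) * d <= d_w <= max(w) * d   # the sandwich bound holds
```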
5. Algorithmic Integration with BP→OSD Pipelines
Directional LLRs integrate seamlessly into conventional BP→OSD pipelines. The decoding algorithm proceeds as follows:
| Step | Operation | Output/Usage |
|---|---|---|
| 1 | Compute per-qubit weights $w_j$ from $w^X$, $w^Z$ | Directional weights |
| 2 | Determine priors $p_j(\beta)$ and LLRs $\lambda_j$ | Site-dependent LLRs |
| 3 | Run BP on ($H_X$, $H_Z$) for a fixed number of iterations with LLRs $\lambda_j$ | Tentative error estimates |
| 4 | Run OSD (of chosen order) on tentative solutions, ranking candidates with $M_w$ | Final error pattern selection |
| 5 | Combine $X$ and $Z$ corrections | Syndrome-resolved correction |
Notably, aside from computing the directional LLRs, no aspects of code definition, BP/OSD implementation, or syndrome processing are altered, preserving modularity and code-agnostic deployment.
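At toy scale, the effect of the pipeline's candidate selection can be reproduced with an exhaustive minimum-LLR-cost search standing in for BP→OSD, which approximates this search for large codes; the parity checks, syndrome, and LLR values below are illustrative assumptions:

```python
from itertools import product

def decode_min_cost(H, syndrome, llrs):
    """Pick the syndrome-matching error minimizing sum_j llr_j * e_j.

    Exhaustive stand-in for BP followed by OSD: it shows how changing
    the site-dependent LLRs changes which degenerate candidate wins.
    """
    n = len(H[0])
    best, best_cost = None, float("inf")
    for e in product((0, 1), repeat=n):
        syn = tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)
        if syn != tuple(syndrome):
            continue
        cost = sum(l for l, ej in zip(llrs, e) if ej)
        if cost < best_cost:
            best, best_cost = e, cost
    return best

H = [[1, 1, 0], [0, 1, 1]]   # toy parity checks
s = (1, 0)                    # syndrome consistent with an error on qubit 0
iso = decode_min_cost(H, s, [1.0, 1.0, 1.0])     # isotropic LLRs
tilted = decode_min_cost(H, s, [5.0, 1.0, 1.0])  # qubit 0 heavily penalized
# With isotropic LLRs the minimum-weight candidate (1,0,0) wins; tilting
# the LLRs steers the decoder to the degenerate alternative (0,1,1).
```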
6. Empirical Performance: Finite-Length Evidence
Simulations under code-capacity noise were conducted for representative quantum codes:
- The toric code (checkerboard layout with a directional weight gradient), and
- The planar NE3N code (rectangular lattice with a horizontal weight gradient).
Across the tested toric-code sizes, directionally weighted BP+OSD(2) decreased logical error rates by up to two orders of magnitude (~100×) compared to isotropic BP+OSD(2). As a function of $\beta$, performance exhibits a U-shaped dependence, with moderate $\beta$ (typically 1–3) yielding optimal gains; excessive tilting can be detrimental. The NE3N code displayed roughly an order-of-magnitude improvement over isotropic decoders across relevant error rates.
These enhancements incur zero architectural cost: identical BP/OSD infrastructure and code, the only change being the LLRs and candidate selection criteria.
7. Hardware-Aware Insights and Future Directions
Physical device layouts commonly induce anisotropies: control wiring, readout ordering, interaction directionality, and transport effects can bias error occurrence along specific axes. Such calibration data can be directly mapped to the orientation weights $w^X$, $w^Z$, informing the decoding pipeline.
With a single bias parameter $\beta$ controlling the strength of directionality, practical decoder tuning and cross-validation are straightforward. Theoretical results, including bounds on directional distances, degeneracy reduction via enumerators, and dual-domain analytic frameworks, furnish rigorous guidance on the admissible tilt before loss of logical distance or code performance.
Absent geometric embedding, or with fully isotropic noise, misaligned directional bias can degrade decoding. Nonetheless, in realistic settings with moderate physical bias (e.g., dephasing versus bit-flip asymmetry) or geometric complexity, modest tilt affords substantial error rate reductions at minimal engineering expense.
Prospective research directions include data-driven learning of the orientation weights or the bias $\beta$ via gradient-based optimization, extension to circuit-level or correlated noise, and synergy with Pauli-bias-tailored codes for multidimensional anisotropy.
Directionally informed BP decoding constitutes a lightweight, rigorously developed, and empirically validated approach to quantum decoding, leveraging anisotropy for enhanced logical error rates without necessitating code or decoder modifications (Rowshan, 12 Jan 2026).