
Directionally Informed BP Decoding

Updated 19 January 2026
  • The paper demonstrates that directionally informed BP decoding assigns orientation weights to Tanner graph edges in CSS codes to exploit device and noise anisotropies.
  • It integrates site-dependent log-likelihood ratios into the existing BP→OSD pipeline without changes to code structure, ensuring modularity in hardware-aware quantum error correction.
  • Empirical results indicate up to 100× logical error rate improvement in codes such as the $[[162,2,9]]$ toric code and the planar NE3N $[[36,4]]$ code through tuning of the bias parameter $\beta$.

Directionally informed belief propagation (BP) decoding is a formal and empirically validated framework for quantum Calderbank-Shor-Steane (CSS) codes that leverages anisotropies in device architecture, scheduling, or noise by assigning orientation weights to Tanner-graph edges and feeding site-dependent log-likelihood ratios (LLRs) into standard BP→OSD decoders. This approach, parameterized by a single scalar bias $\beta$, yields significant performance improvements without altering code construction or the underlying decoder implementation, providing an efficient route to hardware-aware quantum error correction (Rowshan, 12 Jan 2026).

1. Directional Annotation, Per-Qubit Weights, and Weighted Metrics

Directionally informed BP begins with a CSS code specified by parity-check matrices $H_X\in\mathbb{F}_2^{m_X\times n}$ and $H_Z\in\mathbb{F}_2^{m_Z\times n}$, satisfying $H_X H_Z^T = 0$. The corresponding Tanner graphs for the $X$ and $Z$ checks are augmented with nonnegative orientation weights $D_X\in\mathbb{R}_{\ge 0}^{n\times m_X}$ and $D_Z\in\mathbb{R}_{\ge 0}^{n\times m_Z}$, supported on their respective edges.

Per-qubit directional weights $\bm w=(w_1,\ldots,w_n)$ are obtained by summing over incident edges, $$w_i := \sum_{j\in N_X(i)} D_X(i,j) + \sum_{j\in N_Z(i)} D_Z(i,j),$$ where $N_X(i)$ and $N_Z(i)$ denote the $X$ and $Z$ checks adjacent to qubit $i$. For an error indicator $E\in\{0,1\}^n$, the directional metric is

$$\Delta_{\bm w}(E) := \sum_{i=1}^n w_i\,E_i,$$

which generalizes the standard Hamming cost to a weighted form, capturing directional bias inherent in the physical or logical code geometry.
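The weight construction and metric above can be sketched in a few lines. This is a minimal illustration assuming dense NumPy arrays for $D_X$ and $D_Z$ (rows indexing qubits, columns indexing checks); the function names are illustrative, not from the paper:

```python
import numpy as np

def per_qubit_weights(D_X, D_Z):
    """w_i = sum of orientation weights on all X- and Z-check edges
    incident to qubit i (rows index qubits, columns index checks)."""
    return D_X.sum(axis=1) + D_Z.sum(axis=1)

def directional_metric(w, E):
    """Weighted Hamming cost Delta_w(E) = sum_i w_i * E_i."""
    return float(np.dot(w, E))
```

For uniform weights $w_i \equiv 1$ the metric reduces to the ordinary Hamming weight of $E$.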

2. Directional Degeneracy Classes and Their Enumeration

Quantum decoders operate on degeneracy classes, as distinct $X$-errors may share a $Z$-syndrome, differing only by stabilizers. The set of degeneracy classes for syndrome $s_Z$ is

$$\mathcal{D}_X(s_Z) = (e_0 + C_Z) / S_X$$

with $e_0$ any fixed error consistent with $s_Z$, $C_Z$ the nullspace of $H_Z$, and $S_X$ the row span of $H_X$. Each class $[e]$ is assigned its minimal directional cost,

$$\Delta_*([e]) := \min_{u\in S_X} \Delta_{\bm w}(e + u).$$

The directional degeneracy enumerator, parameterized by the bias $\beta$, aggregates class scores: $$\Gamma_X(s_Z;\beta) := \sum_{[e]\in\mathcal{D}_X(s_Z)} \exp\bigl(-\beta\,\Delta_*([e])\bigr).$$ For $\beta=0$, $\Gamma_X$ recovers the standard count $|\mathcal{D}_X(s_Z)|=2^k$ for $k$ logical qubits. As $\beta$ increases, classes with lower directional cost dominate, concentrating error correction along preferred directions. The enumerator enables analytic tail bounds, e.g.,

$$\bigl|\{[e] : \Delta_*([e]) \le t\}\bigr| \le e^{\beta t}\,\Gamma_X(s_Z;\beta),$$

which quantifies how directional metrics thin low-cost degeneracy and enhance logical discrimination in BP decoders.
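The tail bound is a Chernoff-style argument: every class with $\Delta_* \le t$ contributes at least $e^{-\beta t}$ to the enumerator. A small numerical check, using arbitrary illustrative class costs rather than costs from any actual code:

```python
import numpy as np

def enumerator(costs, beta):
    """Gamma(beta) = sum over degeneracy classes of exp(-beta * Delta_*)."""
    return float(np.exp(-beta * np.asarray(costs)).sum())

def tail_count(costs, t):
    """Number of classes with directional cost at most t."""
    return int((np.asarray(costs) <= t).sum())
```

At $\beta = 0$ the enumerator simply counts classes, matching the $2^k$ statement above.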

A global generating function over $C = C_X \cap C_Z$ is defined as

$$\Gamma(\bm w;\alpha) := \sum_{v\in C} e^{\alpha\,\langle \bm w, v\rangle}.$$

A MacWilliams-type identity expresses it via the dual code $C^\perp$: $$\Gamma(\bm w;\alpha) = \frac{1}{|C^\perp|}\sum_{u\in C^\perp}\prod_{i=1}^n \bigl[1 + (-1)^{u_i} e^{\alpha w_i}\bigr].$$ This factorization supports gradient evaluation and analytic bounding.
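The identity can be verified by brute force on a toy classical code. Here the length-3 even-weight code plays the role of $C$, and its dual is generated by the all-ones vector; the helper functions are illustrative, not part of any decoder library:

```python
import itertools
import numpy as np

def span_f2(gens, n):
    """All F2-linear combinations of the generator rows."""
    vecs = set()
    for coeffs in itertools.product([0, 1], repeat=len(gens)):
        v = np.zeros(n, dtype=int)
        for c, g in zip(coeffs, gens):
            if c:
                v = (v + np.asarray(g)) % 2
        vecs.add(tuple(v))
    return [np.array(v) for v in sorted(vecs)]

def gamma_direct(C, w, alpha):
    """Left-hand side: sum over codewords of exp(alpha * <w, v>)."""
    return sum(float(np.exp(alpha * np.dot(w, v))) for v in C)

def gamma_dual(C_perp, w, alpha):
    """Right-hand side: dual-domain product form, divided by |C_perp|."""
    total = sum(float(np.prod(1 + (-1.0) ** u * np.exp(alpha * np.asarray(w))))
                for u in C_perp)
    return total / len(C_perp)
```

Because the right-hand side factorizes over coordinates, it costs $O(n\,|C^\perp|)$ rather than $O(n\,|C|)$, which is the practical payoff when the dual is small.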

3. Mapping Orientation Weights to Site-Dependent LLRs

In the memoryless channel model, MAP decoding seeks error patterns minimizing $-\log\Pr(E) = \sum_i E_i \ln\frac{1-p_i}{p_i} + \text{const}$. If the true error probabilities $p_i$ are not available, the directional weights $w_i$ act as proxies, tilting a uniform baseline $p_0$ into site-dependent priors: $$p_i(\beta) = \frac{p_0\, e^{\beta w_i}}{\frac{1}{n}\sum_{j=1}^n e^{\beta w_j}}, \qquad \ell_i(\beta) = \ln\frac{1-p_i(\beta)}{p_i(\beta)} \approx -\beta w_i + \text{const}.$$ The parameter $\beta$ modulates the directional bias: $\beta=0$ yields isotropic priors $p_i = p_0$; increasing $\beta$ enhances the effect of large $w_i$, selectively steering BP inference toward error patterns aligned with device or noise anisotropies.
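The tilting map can be sketched directly from the formulas above. Note that the normalization by the mean tilt factor keeps the average prior equal to $p_0$, and $\beta = 0$ recovers the isotropic case; function names are illustrative:

```python
import numpy as np

def directional_priors(w, beta, p0):
    """Tilt a uniform baseline p0 by exp(beta * w_i), normalized so the
    average tilt factor is 1 (beta = 0 recovers p_i = p0 exactly)."""
    tilt = np.exp(beta * np.asarray(w, dtype=float))
    return p0 * tilt / tilt.mean()

def directional_llrs(p):
    """ell_i = ln[(1 - p_i) / p_i]; larger w_i -> larger p_i -> smaller LLR."""
    return np.log((1.0 - p) / p)
```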

4. Bounds on Directional Distance and Degeneracy Class Reduction

Directional annotation impacts code distances and class counts. Let $d$ be the code's $X$ or $Z$ distance and $d_S$ the minimal stabilizer weight. With $w_{\min} = \min_i w_i$ and $w_{\max} = \max_i w_i$, directional distances are bounded via

$$w_{\min}\,d_S \le d_{\bm w}^S \le w_{\max}\,d_S, \qquad w_{\min}\,d \le d_{\bm w}^L \le w_{\max}\,d$$

for stabilizer and logical operators, respectively. Directionality also reduces the number of eligible degeneracy classes. For a cost threshold $\delta_{\max}$,

$$|\mathcal{D}_{\delta}(s_Z)| \le 2^k f(\delta_{\max}, R) \le 2^{n-2d_{\min}+2} f(\delta_{\max}, R),$$

where $R = k/n$ is the code rate, $d_{\min} = \min(d, d_S)$, and $f(\delta_{\max}, R) \le 1$ quantifies concentration as the directional bias increases. This reflects how anisotropic annotation "breaks" degeneracy clusters, sharpening logical error selection.
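The distance bounds can be checked by brute force on a small classical stand-in (the length-3 even-weight code again; in the quantum setting the minimization would range over logical or stabilizer representatives instead of all nonzero codewords):

```python
import itertools
import numpy as np

def all_codewords(gens, n):
    """Enumerate the F2 span of the generator rows."""
    words = []
    for coeffs in itertools.product([0, 1], repeat=len(gens)):
        v = np.zeros(n, dtype=int)
        for c, g in zip(coeffs, gens):
            if c:
                v = (v + np.asarray(g)) % 2
        words.append(v)
    return words

def weighted_distance(codewords, w):
    """d_w = minimum directional cost Delta_w over nonzero codewords."""
    return min(float(np.dot(w, v)) for v in codewords if v.any())
```

Since every codeword of Hamming weight $m$ has directional cost between $w_{\min} m$ and $w_{\max} m$, the sandwich $w_{\min} d \le d_{\bm w} \le w_{\max} d$ follows immediately.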

5. Algorithmic Integration with BP→OSD Pipelines

Directional LLRs integrate seamlessly into conventional BP→OSD pipelines. The decoding algorithm proceeds as follows:

| Step | Operation | Output/Usage |
|---|---|---|
| 1 | Compute per-qubit $w_i$ from $D_X$, $D_Z$ | Directional weights $\bm w$ |
| 2 | Determine $p_i(\beta)$ and $\ell_i(\beta)$ | Site-dependent LLRs |
| 3 | Run BP on $H_X$, $H_Z$ for $I$ iterations with $\{\ell_i(\beta)\}$ | Tentative error estimates $E_{\mathrm{BP}}$ |
| 4 | Run OSD (order $t$) on tentative solutions, ranking candidates by $\Delta_{\bm w}(E)$ | Final error pattern selection |
| 5 | Combine $X$ and $Z$ corrections | Syndrome-resolved correction |

Notably, aside from computing the directional LLRs, no aspects of code definition, BP/OSD implementation, or syndrome processing are altered, preserving modularity and code-agnostic deployment.
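The pipeline can be exercised end to end on a toy instance. In this sketch an exhaustive low-weight candidate search stands in for the BP and OSD stages (steps 3–4), which a production decoder would supply; the check matrix, weights, and parameters are all illustrative:

```python
import itertools
import numpy as np

def directional_llrs(D_X, D_Z, beta, p0):
    """Steps 1-2: per-qubit weights and site-dependent LLRs."""
    w = D_X.sum(axis=1) + D_Z.sum(axis=1)
    tilt = np.exp(beta * w)
    p = p0 * tilt / tilt.mean()
    return w, np.log((1.0 - p) / p)

def decode_directional(H, syndrome, w, max_wt=2):
    """Steps 3-4 stand-in: among candidates of weight <= max_wt matching
    the syndrome, return the one with minimal directional cost Delta_w."""
    n = H.shape[1]
    best, best_cost = None, np.inf
    for wt in range(max_wt + 1):
        for supp in itertools.combinations(range(n), wt):
            E = np.zeros(n, dtype=int)
            E[list(supp)] = 1
            if np.array_equal(H @ E % 2, syndrome):
                cost = float(np.dot(w, E))
                if cost < best_cost:
                    best, best_cost = E, cost
    return best
```

The point of the sketch is the interface: the only directional ingredients are the LLR initialization and the $\Delta_{\bm w}$ ranking, exactly as the table indicates.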

6. Empirical Performance: Finite-Length Evidence

Simulations under code-capacity noise were conducted for representative quantum codes:

  • The toric code $[[162,2,9]]$ (checkerboard layout, gradient $c_i \sim x_i$), and
  • The planar NE3N $[[36,4]]$ code (rectangular $18\times 4$ lattice, horizontal gradient $c_i \sim x_i$).

For the toric code over $p_0\in[10^{-3},10^{-2}]$, directionally weighted BP+OSD(2) decreased logical error rates $P_L$ by $10\times$ to $100\times$ compared to isotropic BP+OSD(2). As a function of $\beta$, performance exhibits a U-shaped dependence, with moderate $\beta$ (typically 1–3) yielding optimal gains; excessive tilting can be detrimental. The NE3N code displayed roughly an order-of-magnitude improvement over isotropic decoders across relevant error rates.

These enhancements incur zero architectural cost: the BP/OSD infrastructure and the code itself remain identical; the only changes are the LLR initialization and the candidate selection criterion.

7. Hardware-Aware Insights and Future Directions

Physical device layouts commonly induce anisotropies: control wiring, readout ordering, interaction directionality, and transport effects can bias error occurrence along particular axes. Calibration data capturing these effects can be mapped directly through $D_X, D_Z \rightarrow w_i$, informing the decoding pipeline.

With a single bias parameter $\beta$ controlling the strength of directionality, practical decoder tuning and cross-validation are straightforward. Theoretical results—including bounds on directional distances, degeneracy reduction via enumerators, and dual-domain analytic frameworks—furnish rigorous guidance on admissible tilt before loss of logical distance or code performance.

Absent geometric embedding, or with fully isotropic noise, misaligned directional bias can degrade decoding. Nonetheless, in realistic settings with moderate physical bias (e.g., dephasing $\gg$ bit-flip) or geometric complexity, modest tilt affords substantial error-rate reductions at minimal engineering expense.

Prospective research directions include data-driven learning of $D_X, D_Z$ or $w_i$ via gradient-based optimization (e.g., using $\partial_{w_i}\log\Gamma$), extension to circuit-level or correlated noise, and synergy with Pauli-bias-tailored codes for multidimensional anisotropy.

Directionally informed BP decoding constitutes a lightweight, rigorously developed, and empirically validated approach to quantum decoding, leveraging anisotropy for enhanced logical error rates without necessitating code or decoder modifications (Rowshan, 12 Jan 2026).
