CRC Concatenation Scheme
- CRC Concatenation Scheme is a design that integrates an outer cyclic redundancy check with an inner forward error-correcting code to enhance decoding performance at short to moderate block lengths.
- The scheme leverages list-based and reliability-based decoding techniques in which the CRC filters out incorrect candidate codewords, significantly reducing undetected errors.
- Optimal performance is achieved through careful CRC polynomial selection and algorithm tuning, enabling near-ML decoding in applications like CA-Polar, turbo, and convolutional codes.
A CRC concatenation scheme refers to the systematic use of an outer cyclic redundancy check (CRC) code concatenated with an inner error-correcting code, such as a convolutional, turbo, or polar code. The CRC, while originally designed for error detection, plays a dual role: as an error detector for candidate codewords and, when combined with list or reliability-based decoding, as a tool for enhancing error-correcting performance—specifically at short or moderate block lengths where conventional decoders are suboptimal. The CRC concatenation paradigm is now fundamental in many high-performance finite-length coding schemes, as formalized in polar, convolutional, turbo, and generalized product code constructions.
1. Core Principles of the CRC Concatenation Scheme
A CRC concatenation scheme comprises two principal stages:
- Outer CRC Encoding: A CRC encoder computes m CRC checksum bits for a block of k information bits u(x) using a generator polynomial g(x) of degree m, typically via r(x) = x^m u(x) mod g(x), so that the transmitted word is (u, r).
- Inner Forward Error-Correcting Code (FEC) Encoding: The (k+m)-bit CRC-augmented word is encoded by an FEC such as a convolutional code, polar code, or turbo code, resulting in a length-n codeword.
This serial concatenation is reflected in standards and constructions such as CA-Polar (5G NR), CRC-TBCC/ZTCC for short blocklengths, and multi-CRC product codes (Li et al., 2021, Sui et al., 2021, Yang et al., 2020, Chiu et al., 2016).
A typical decoding process involves list, sphere, or ordered-statistics decoding of the FEC, with the CRC acting as a filter to select plausible codewords or as an early-stopping/branch-pruning criterion.
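To make the outer stage concrete, here is a minimal Python sketch of CRC encoding by polynomial long division over GF(2); the generator polynomial and message are illustrative toy values, not drawn from any cited standard (the inner FEC encoder is omitted):

```python
def crc_remainder(bits, poly):
    """Remainder r(x) = x^m * u(x) mod g(x) via long division over GF(2).

    `bits` and `poly` are MSB-first 0/1 lists; g(x) has degree m = len(poly) - 1.
    """
    m = len(poly) - 1
    work = list(bits) + [0] * m          # multiply u(x) by x^m
    for i in range(len(bits)):
        if work[i]:                      # cancel the leading 1 with g(x)
            for j in range(len(poly)):
                work[i + j] ^= poly[j]
    return work[-m:]                     # the m CRC checksum bits

def crc_encode(bits, poly):
    """CRC-augmented word (u, r), which is then fed to the inner FEC encoder."""
    return list(bits) + crc_remainder(bits, poly)

# Toy example: CRC-3 with g(x) = x^3 + x + 1, i.e. poly bits 1011.
poly = [1, 0, 1, 1]
word = crc_encode([1, 1, 0, 1], poly)
# A received word passes the CRC check iff its remainder is all zero.
assert crc_remainder(word, poly) == [0, 0, 0]
```

The same remainder routine serves both roles described above: appending checksum bits at the encoder and testing divisibility (an all-zero remainder) at the decoder.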
2. Rationale and Performance Gains
The theoretical motivation for CRC concatenation is to close the gap between the frame error rate (FER) performance of practical decoders and the finite-blocklength channel coding bounds (e.g., RCU, normal approximation) in regimes where ML decoding is infeasible. At finite length and moderate SNR, canonical successive cancellation or Viterbi/BCJR decoding is suboptimal for polar and convolutional/turbo codes, respectively. Concatenating a CRC and performing list-based or reliability-based decoding enables practically achievable performance within a fraction of a dB of ML or the fundamental finite-length bound (Piao et al., 2019, Li et al., 2021, Yang et al., 2021).
In CRC-aided list decoding (such as CA-SCL for polar codes, S-LVD for convolutional codes), the CRC checks act as "genie" selectors within the list, reducing the chance that an incorrect candidate is output. This often leads to orders of magnitude improvement in undetected error rate (UER) for minimal overhead and complexity penalty (Yang et al., 2021, Baicheva et al., 2019, Sui et al., 2021, Yang et al., 2020).
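The "genie selector" role of the CRC reduces, in code, to scanning the decoder's list in metric order and emitting the first candidate that passes the check. A hedged sketch (the candidate list itself would come from an SCL or SLVD decoder, which is assumed here):

```python
def crc_passes(word, poly):
    """True iff the word's CRC remainder is zero, i.e. word(x) is
    divisible by the generator g(x) (both MSB-first 0/1 lists)."""
    m = len(poly) - 1
    work = list(word) + [0] * m
    for i in range(len(word)):
        if work[i]:
            for j in range(len(poly)):
                work[i + j] ^= poly[j]
    return not any(work[-m:])

def crc_select(candidates, poly):
    """CRC-aided list selection: return the most likely CRC-valid candidate.

    `candidates` is assumed sorted by decreasing likelihood (path metric),
    as produced by a list decoder. Returning None signals a *detected*
    rather than undetected error, which is what drives the UER gain."""
    for cand in candidates:
        if crc_passes(cand, poly):
            return cand
    return None
```

Note that declining to output any codeword (the `None` branch) trades a detected frame error for what would otherwise be an undetected one.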
3. CRC Selection and Distance-Spectrum-Optimality
The efficacy of CRC concatenation is highly influenced by the choice of generator polynomial. Optimality criteria are centered on maximizing the minimum distance d_min, or more generally, distance-spectrum-optimality (DSO): maximizing the lowest output Hamming weight of CRC-divisible error patterns in the concatenated code (Yang et al., 2020, Yang et al., 2021, Sui et al., 2021, King et al., 2022, Lou et al., 2015).
The DSO CRC selection problem, for a fixed CRC degree m, information length k, and codeword length n, consists of:
- Enumerating all error events (irreducible error events, or IEEs) in the inner code up to a suitable weight threshold,
- For each candidate CRC of degree m, checking which IEEs yield undetectable errors (i.e., are divisible by g(x)),
- Selecting the polynomial which maximizes undetected minimum distance and, in case of ties, minimizes the number of such errors at that distance.
Efficient algorithms for TBCCs (tail-biting convolutional codes) and ZTCCs (zero-terminated CCs) exploit the cyclic-shift closure property of the trellis and dynamic programming to reconstruct all relevant error events (Yang et al., 2020, Lou et al., 2015).
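The three-step search above can be sketched as follows. The error-event list here is a hypothetical stand-in for the IEE enumeration a real TBCC/ZTCC search would produce, and the exhaustive loop over generators is the brute-force version of the cited dynamic-programming algorithms:

```python
def gf2_mod(a, g):
    """Remainder of a(x) mod g(x) over GF(2); polynomials as integer bit masks."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def best_crc(m, error_events):
    """Toy DSO-style search over all degree-m generators with nonzero
    constant term. `error_events` lists (output_weight, pattern) pairs for
    the inner code's error events; a pattern divisible by g(x) is an
    undetectable error. Maximize the minimum undetected weight, breaking
    ties by minimizing the multiplicity at that weight."""
    best, best_score = None, None
    for g in range((1 << m) | 1, 1 << (m + 1), 2):
        undetected = sorted(w for w, e in error_events if gf2_mod(e, g) == 0)
        if undetected:
            score = (undetected[0], -undetected.count(undetected[0]))
        else:
            score = (float('inf'), 0)    # no undetectable event at all
        if best_score is None or score > best_score:
            best, best_score = g, score
    return best
```

On the toy list `[(3, 0b1011), (3, 0b1101), (4, 0b1111)]` with m = 3, the search returns the one candidate generator that divides none of the listed patterns. A real search would first bound the weight threshold so the enumeration is provably complete.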
4. Applications: Polar, Convolutional, Turbo, and Product Codes
CA-Polar Codes
In CA-Polar (CRC-aided polar) codes, the CRC is concatenated with the polar encoder input. List decoders, such as SCL, retain all candidate paths and use the CRC to discard incorrect options. Optimized CA-Polar codes (degree-11, 16, 24 CRCs) achieve up to 0.1–0.2 dB gain at practical block error rates compared to standardized CRCs (Baicheva et al., 2019). CA-HD (CRC-aided hybrid decoding) adaptively switches between SCL and CRC-aided sphere decoding, achieving performance within 0.025 dB of the finite-blocklength bound (Piao et al., 2019).
CRC-TBCC, CRC-ZTCC (CRC-aided Convolutional Codes)
A CRC is concatenated with a (typically short, high-rate) CC, tailored by DSO selection, and decoded by serial list Viterbi decoding (SLVD) with CRC filtering. At moderate CRC degrees and blocklengths, these codes approach the RCU bound within 0.1–0.4 dB (Sui et al., 2021, Yang et al., 2021). CRC selection using tail-biting-aware algorithms is essential for keeping undetected error rates near their theoretical minima (Yang et al., 2020).
Turbo Codes
For turbo-CRC codes, hybrid STD+OSD (standard turbo decoding plus reliability-based ordered statistics decoding) schemes fold the CRC constraint into the generator matrix during OSD, substantially reducing frame error rate and maintaining low undetected error probability, especially in very short blocklength regimes (Wei et al., 2020).
Multi-CRC and Product Codes
Multi-CRC schemes, as in segmented SCL or soft-concatenated polar products, distribute local and global CRCs over message partitions. Early local CRC checking allows aggressive pruning of lists or partial paths, reducing space and time complexity while incurring minimal performance loss for up to 85% complexity savings (Chiu et al., 2016, Zhou et al., 2018, Bonik et al., 2012).
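A hedged sketch of the encoder side of such a segmentation, with illustrative segment length and polynomials (in practice, partition sizes and CRC allocation would follow the reliability-based rules cited above):

```python
def crc_remainder(bits, poly):
    """r(x) = x^m * u(x) mod g(x) over GF(2); MSB-first 0/1 lists."""
    m = len(poly) - 1
    work = list(bits) + [0] * m
    for i in range(len(bits)):
        if work[i]:
            for j in range(len(poly)):
                work[i + j] ^= poly[j]
    return work[-m:]

def multi_crc_encode(bits, seg_len, local_poly, global_poly):
    """Append a local CRC to each message segment, then a global CRC over
    the whole stream. At the decoder, each local CRC can prune list paths
    as soon as its segment is decoded, before the global check runs."""
    out = []
    for i in range(0, len(bits), seg_len):
        seg = bits[i:i + seg_len]
        out += seg + crc_remainder(seg, local_poly)
    return out + crc_remainder(out, global_poly)
```

The point of the layout is ordering: local checks become testable mid-decoding, so a failing partial path can be abandoned without ever reaching the global CRC.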
5. Decoding Algorithms and Complexity
CRC concatenation schemes are tightly coupled with advanced decoding schemes exploiting list and reliability structures:
- CA-SCL/CA-HD for Polar Codes: List-based path tracking combined with CRC filtering yields near-ML performance; the adaptive hybrid CA-HD scheme scales its complexity based on path validity and CRC checks (Piao et al., 2019).
- SLVD for Convolutional Codes: The dual-trellis SLVD algorithm produces candidate codewords in metric order, with fast CRC checks acting as rejection filters. The average complexity is directly related to the expected list rank E[L]; as SNR increases, E[L] approaches 1, and complexity converges to that of plain Viterbi decoding (Yang et al., 2021, Sui et al., 2021).
- GRAND Decoding: The Guessing Random Additive Noise Decoding (GRAND) family of algorithms treats CRC codes as error-correcting codes, utilizing CRC membership checks at each guess step. Performance matches or exceeds traditional block codes such as BCH at ultra-short lengths (An et al., 2021).
- Early Termination and Segmentation: Multi-CRC or segmented CA-SCL/TCA-SCL prunes unlikely candidates early with local CRCs, reducing required space/time footprints with negligible performance impact (Chiu et al., 2016, Zhou et al., 2018).
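To illustrate the GRAND bullet concretely, here is a minimal hard-decision variant that guesses error patterns in increasing Hamming weight and uses CRC divisibility as the codebook membership test (toy CRC-3 parameters; real GRAND variants order guesses by soft reliability and use optimized abandonment rules):

```python
from itertools import combinations

def crc_passes(word, poly):
    """CRC membership test: word(x) divisible by g(x) over GF(2)."""
    m = len(poly) - 1
    work = list(word) + [0] * m
    for i in range(len(word)):
        if work[i]:
            for j in range(len(poly)):
                work[i + j] ^= poly[j]
    return not any(work[-m:])

def grand_decode(received, poly, max_weight=3):
    """Hard-input GRAND sketch: try noise patterns from most to least
    likely (here: by Hamming weight); the first pattern whose removal
    leaves a CRC-valid word is declared the transmitted codeword."""
    n = len(received)
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            cand = list(received)
            for i in flips:
                cand[i] ^= 1              # remove the guessed noise bits
            if crc_passes(cand, poly):
                return cand
    return None  # abandonment threshold hit: report a detected error
```

With g(x) = x^3 + x + 1 over length 7, the CRC code coincides with the cyclic Hamming(7,4) code, so any single bit flip is corrected uniquely by the weight-1 guessing round.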
6. Practical Guidelines, Parameter Selection, and Implementation
Designing effective CRC concatenation schemes requires:
- Selection of the CRC degree m, balancing rate loss against the d_min/distance-spectrum improvement; degrees in the range 10–24 are typical (Baicheva et al., 2019, Li et al., 2021).
- Use of polynomials maximizing d_min, minimizing undetected error multiplicity, and matching the message length range; tables of best polynomials exist for standard lengths and degrees (Baicheva et al., 2019, Lou et al., 2015).
- Tuning list size in CA-SCL/SLVD and abandonment thresholds in GRAND/OSD decoding to minimize complexity while approaching ML performance (Sui et al., 2021, An et al., 2021).
- Partitioning message blocks in multi-CRC/product codes according to channel reliability, with tailored CRC allocation based on virtual lengths or reliability metrics (Zhou et al., 2018, Chiu et al., 2016).
- Hardware implementation leverages the low-complexity, parallelizable nature of CRC computations (LFSRs, syndrome checks), and flexible reconfiguration for different degrees and code structures (An et al., 2021).
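The LFSR realization mentioned in the last bullet processes one message bit per clock. A behavioral Python model (illustrative polynomial; it matches the polynomial long-division remainder bit for bit):

```python
def lfsr_crc(bits, poly):
    """Bit-serial LFSR model of the CRC: computes r(x) = x^m u(x) mod g(x)
    with one shift per input bit, as a hardware shift register would.

    reg[0] holds the coefficient of x^(m-1); poly is MSB-first, degree m."""
    m = len(poly) - 1
    reg = [0] * m
    for b in bits:
        fb = reg[0] ^ b                 # feedback = register MSB XOR input bit
        reg = reg[1:] + [0]             # shift left by one position
        if fb:
            for j in range(1, m + 1):
                reg[j - 1] ^= poly[j]   # XOR in the lower taps of g(x)
    return reg

# Agrees with long division: CRC-3, g(x) = x^3 + x + 1, message 1101.
assert lfsr_crc([1, 1, 0, 1], [1, 0, 1, 1]) == [0, 0, 1]
```

Because each step is a shift plus a few XORs, the structure parallelizes and retimes easily, which is what makes per-candidate CRC checks cheap relative to the list decoder itself.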
7. Extensions and Advanced Topics
- CRC concatenation schemes are effective for non-binary codes and orthogonal signaling channels. DSO CRCs for q-ary alphabets and noncoherent 4-FSK maintain performance gaps of less than 0.6 dB to the RCU bound (King et al., 2022).
- In neural or BP decoding settings, the CRC factor graph is appended to the polar code's factor graph and extrinsic information is passed between them, yielding measurable performance improvements (up to 0.5 dB at low frame error rates) with minimal latency overhead (Doan et al., 2018).
- The concatenation principle extends to any outer linear code: BCH, Reed-Solomon, and LDPC, where the outer code can be optimized for detection/correction in the context of the inner code’s error pattern structure (Doan et al., 2018, Lou et al., 2015).
CRC concatenation schemes represent a unifying design paradigm achieving near-optimal finite-blocklength error control with modest implementation requirements, underpinning performance in diverse contemporary communication systems (Piao et al., 2019, Li et al., 2021, Sui et al., 2021, An et al., 2021, Chiu et al., 2016, Zhou et al., 2018, Lou et al., 2015, Baicheva et al., 2019, King et al., 2022).