Theoretical basis for discrete encodings changing neural-network learning dynamics

Establish a formal mathematical theory explaining why discrete binary encodings of continuous inputs, specifically Normalized Base-2 Encoding (NB2E), fundamentally change neural-network learning dynamics and induce position-independent bit-phase internal representations that support extrapolation of periodic functions beyond the training domain, in contrast to Fixed Fourier Encoding (FFE) and standard continuous numerical inputs. Further, delineate the conditions under which such binary encodings enable and guarantee extrapolation.
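
One way to state the target of such a theory precisely (our framing, not the paper's) is through the periodicity built into the base-2 expansion itself:

```latex
% Our framing, not the paper's: bits of the normalized input x in [0, 1)
b_k(x) = \lfloor 2^{k} x \rfloor \bmod 2, \qquad
E_B(x) = \bigl(b_1(x), \dots, b_B(x)\bigr).
% Shifting x by 2^{1-k} adds the even integer 2^{j+1-k} inside the floor
% for every j >= k, so all finer bits are exactly periodic:
b_j\bigl(x + 2^{1-k}\bigr) = b_j(x) \quad \text{for all } j \ge k.
```

Under this framing, any readout that depends only on bits k through B is exactly 2^(1-k)-periodic in x and therefore extrapolates such targets over the entire real line; the open problem is to characterize when gradient descent on NB2E inputs actually converges to readouts of this form, and why FFE and raw continuous inputs do not.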

Background

The paper introduces Normalized Base-2 Encoding (NB2E), which encodes continuous values into binary vectors, and demonstrates that vanilla multi-layer perceptrons (MLPs) trained with NB2E inputs can extrapolate diverse periodic signals beyond the training range. In controlled comparisons, Fixed Fourier Encoding (FFE) and continuous numerical inputs fail to extrapolate under identical architectures and training procedures.
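
The paper's exact constructions are not reproduced here, but a minimal sketch helps fix ideas. The Python functions below implement one plausible reading of NB2E (normalize to [0, 1), then take the leading bits of the fractional base-2 expansion) and a common FFE baseline (sin/cos features at fixed power-of-two frequencies); the paper's normalization, bit ordering, and frequency schedule may differ, and all parameter names are illustrative.

```python
import numpy as np

def nb2e(x, n_bits=16, x_min=0.0, x_max=100.0):
    """Plausible sketch of Normalized Base-2 Encoding (NB2E).

    Maps x into [0, 1) and returns the first n_bits of its fractional
    base-2 expansion: bit_k(x) = floor(2**k * x_norm) mod 2.
    The paper's exact normalization and bit ordering may differ.
    """
    x_norm = (np.asarray(x, dtype=float) - x_min) / (x_max - x_min)
    k = np.arange(1, n_bits + 1)        # bit index, most significant first
    return np.floor(x_norm[..., None] * 2.0 ** k) % 2

def ffe(x, n_freqs=8):
    """Sketch of a Fixed Fourier Encoding (FFE) baseline: sin/cos features
    at fixed power-of-two frequencies (one common convention)."""
    x = np.asarray(x, dtype=float)
    freqs = 2.0 ** np.arange(n_freqs)   # 1, 2, 4, ...
    angles = 2.0 * np.pi * freqs * x[..., None]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Example: encode a batch of scalars for an MLP input layer.
x = np.linspace(0.0, 50.0, 5)
print(nb2e(x).shape)   # (5, 16) binary features
print(ffe(x).shape)    # (5, 16) continuous features
```

Note the structural contrast: NB2E features are piecewise-constant square waves of the input, whereas FFE features are smooth sinusoids; connecting this difference to learning dynamics is precisely what the open problem asks a theory to do.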

Activation analyses suggest NB2E induces bit-phase internal representations that are largely independent of positional coordinates, which appears to underpin extrapolation. Despite extensive empirical characterization, the authors highlight that a rigorous theoretical understanding of why discrete encodings such as NB2E alter learning dynamics and enable extrapolation, and of the conditions under which this occurs, is currently lacking.
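
To make "bit-phase" concrete, a quick numerical check (our illustration, using the fractional base-2 expansion from the sketch above) confirms that all bits at index k and finer repeat exactly under a shift of 2^(1-k), so a network reading only those bits sees an input that encodes phase within the period rather than absolute position:

```python
import numpy as np

def bits(x, n_bits=12):
    """Bits of the fractional base-2 expansion: bit_k(x) = floor(2**k * x) mod 2."""
    k = np.arange(1, n_bits + 1)
    return (np.floor(np.asarray(x, dtype=float)[..., None] * 2.0 ** k) % 2).astype(int)

x = np.random.default_rng(0).uniform(0.0, 0.5, size=1000)
k = 4
period = 2.0 ** (1 - k)   # shift that leaves bits k and finer unchanged
same = np.array_equal(bits(x)[:, k - 1:], bits(x + period)[:, k - 1:])
print(f"bits {k}..12 identical after shifting x by {period}: {same}")  # True
```

This exact periodicity of the fine bits is a plausible mechanism for the position-independent representations seen in the activation analyses; establishing that trained MLPs actually exploit it is part of the open theoretical question.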

References

While we have characterized when and how this capability emerges empirically, the theoretical question of why discrete encodings fundamentally change the learning dynamics remains open. Understanding the mathematical principles underlying bit-phase learning and formalizing the conditions under which binary representations enable extrapolation are important directions for future work.

Extrapolation of Periodic Functions Using Binary Encoding of Continuous Numerical Values (arXiv:2512.10817, Powell et al., 11 Dec 2025), Section 6 (Conclusions)