Theoretical basis for discrete encodings changing neural-network learning dynamics
Establish a formal mathematical theory explaining why discrete binary encodings of continuous inputs, specifically Normalized Base-2 Encoding (NB2E), fundamentally change neural-network learning dynamics and enable position-independent bit-phase internal representations that support extrapolation of periodic functions beyond the training domain, in contrast to Fixed Fourier Encoding and standard continuous numerical inputs. In addition, delineate the conditions under which such binary encodings enable and guarantee extrapolation.
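The two encodings can be made concrete. The sketch below gives one plausible implementation of each; the bit width `n_bits`, the normalization to the unit interval, and both function names are illustrative assumptions, since neither encoding is formally defined in this section.

```python
import numpy as np

def nb2e(x, x_min=0.0, x_max=1.0, n_bits=8):
    """A plausible reading of Normalized Base-2 Encoding: map x to
    u = (x - x_min) / (x_max - x_min) and emit the first n_bits of the
    binary expansion of u, bit k being floor(2**k * u) mod 2."""
    u = (np.asarray(x, dtype=np.float64) - x_min) / (x_max - x_min)
    ks = np.arange(1, n_bits + 1)
    return (np.floor(u[..., None] * 2.0 ** ks) % 2).astype(np.float32)

def fixed_fourier(x, n_freqs=4):
    """A plausible Fixed Fourier Encoding: sin/cos features at fixed
    dyadic frequencies, applied to the raw continuous input."""
    freqs = 2.0 ** np.arange(n_freqs)
    ang = 2.0 * np.pi * freqs * np.asarray(x, dtype=np.float64)[..., None]
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1).astype(np.float32)

x = np.linspace(0.0, 1.0, 8, endpoint=False)
print(nb2e(x, n_bits=4))       # 8 x 4 matrix of {0, 1} square-wave samples
print(fixed_fourier(x).shape)  # (8, 8): 4 sine + 4 cosine features
```

The structural difference the theory must explain is visible here: NB2E replaces a single continuous coordinate with a vector of discontinuous square waves, while the Fourier encoding stays smooth.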
While we have characterized when and how this capability emerges empirically, the theoretical question of why discrete encodings fundamentally change the learning dynamics remains open. Understanding the mathematical principles underlying bit-phase learning and formalizing the conditions under which binary representations enable extrapolation are important directions for future work.
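One property such a theory would likely need to formalize is the exact periodicity of the bit vector. Under the assumed `nb2e` sketch above, bit k is a square wave of period 2**(1 - k) in the normalized input, so the full encoding repeats with the normalization span and inputs far outside the training range map onto bit patterns already seen. The check below is a hedged illustration of that observation, not a result from this work.

```python
# Illustrative check (inherits the assumed nb2e above): the bit vector is
# exactly periodic with the normalization span, so inputs well outside the
# training range reuse bit patterns seen during training.
x_train = np.linspace(0.0, 1.0, 256, endpoint=False)   # "training" inputs
x_far = x_train + 5.0                                   # five spans away
assert np.array_equal(nb2e(x_train), nb2e(x_far))

# Each bit k is the same square wave at every position, repeating with
# period 2**(1 - k) in the normalized input.
bits = nb2e(x_train, n_bits=4)
for k in range(1, 5):
    shift = int(2.0 ** (1 - k) * len(x_train))   # one period, in samples
    assert np.array_equal(bits[:, k - 1], np.roll(bits[:, k - 1], -shift))
print("periodicity checks passed")
```

Exact periodicity alone cannot be the whole explanation, since fixed sinusoidal features at integer frequencies are also jointly periodic; why learning dynamics treat the discontinuous bit features differently is precisely the open question stated above.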