Quantized Fourier Features (QFF)

Updated 18 February 2026
  • Quantized Fourier Features (QFF) are quantized versions of Random Fourier Features that achieve drastic compression while maintaining kernel approximation quality.
  • They employ advanced methods like Lloyd–Max, Sigma–Delta, asymmetric, and tensorized quantization to balance bitrate, accuracy, and resource efficiency.
  • QFF techniques are applied in neural field models and high-dimensional kernel machines, enabling efficient learning with significant reductions in memory and computation.

Quantized Fourier Features (QFF) extend the widely used Random Fourier Features (RFF) paradigm by imposing quantization schemes on RFF embeddings, enabling drastic compression in storage and computation with theoretically bounded degradation in kernel approximation and empirical performance. QFF methods span low-bit Lloyd–Max quantization and its variants, Sigma–Delta and distributed noise-shaping quantization, asymmetric quantization for client–server and embedded devices, tensorized (quantized) Fourier feature constructions for expressivity, and recent adaptive binning for neural field representations. These frameworks exploit statistical properties of RFFs—particularly the parameter-independence of their marginal distribution—yielding quantizers that achieve near-optimal distortion, empirical kernel reconstruction, and downstream learning performance, with orders-of-magnitude reductions in resource requirements.

1. Mathematical Foundations of Quantized Fourier Features

Quantized Fourier Features are built upon the RFF approximation for shift-invariant kernels, notably the Gaussian (RBF) kernel. For $u, v \in \mathbb{R}^d$ with $\|u\| = \|v\| = 1$, the Gaussian kernel is given by

$$K_\gamma(u,v) = \exp\left(-\frac{\gamma^2}{2}\|u-v\|^2\right) = \exp\left(-\gamma^2(1-\rho)\right), \quad \rho = u^T v.$$

RFF approximates $K_\gamma(u,v)$ through features

$$z(x) = \sqrt{\frac{2}{D}}\,\big[\cos(w_1^T x + b_1), \dots, \cos(w_D^T x + b_D)\big],$$

with $w_i \sim N(0, I_d)$ and $b_i \sim \mathrm{Unif}[0, 2\pi]$. Each coordinate $Z = \cos(\gamma\, w^T u + b)$ has marginal density

$$f_Z(z) = \frac{1}{\pi\sqrt{1 - z^2}}, \quad z \in [-1, 1],$$

which is independent of the Gaussian kernel bandwidth parameter $\gamma$ due to the phase randomization inherent in RFF generation, as shown by convolution arguments (Li et al., 2021). This property is foundational for QFF algorithms.
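The $\gamma$-independence of the marginal can be checked numerically. The following sketch (plain NumPy; dimensions, bandwidth, and seed are illustrative assumptions, not values from the cited papers) builds the RFF map, compares it against the exact Gaussian kernel, and confirms that the second moment of a feature coordinate matches the arcsine density's value of $1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 8, 4096, 1.0

# Draw RFF parameters: Gaussian frequencies and uniform phases.
W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff(x):
    """Random Fourier feature map z(x) for the Gaussian kernel."""
    return np.sqrt(2.0 / D) * np.cos(gamma * (W @ x) + b)

# Two unit-norm inputs.
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)

approx = rff(u) @ rff(v)
exact = np.exp(-gamma**2 / 2 * np.linalg.norm(u - v) ** 2)
print(abs(approx - exact))  # small for large D

# The marginal of Z = cos(gamma * w^T u + b) is the arcsine density
# 1/(pi*sqrt(1-z^2)) regardless of gamma; its second moment is 1/2.
Z = np.cos(gamma * (W @ u) + b)
print(np.mean(Z**2))
```

The second moment stays near $1/2$ for any choice of `gamma`, which is exactly why one codebook can serve all bandwidths.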

2. Quantization Schemes: Lloyd–Max, LM$^2$, Sigma–Delta, and Distributed Noise Shaping

QFF methodologies adopt several quantization designs to encode RFF vectors into compact representations.

Lloyd–Max (LM) Quantization:

Applies optimal (in mean-squared error) scalar quantization to $z \in [-1, 1]$ using the known density $f_Z(z)$. With $M = 2^b$ quantization levels and thresholds $\{-1 = t_0 < t_1 < \dots < t_M = 1\}$, the recursive LM equations optimize centroids $\mu_i$ and thresholds $t_i$ by alternating minimization of

$$D_1 = \mathbb{E}\big[(z - Q(z))^2\big] = \int_{-1}^{1} (z - Q(z))^2 f_Z(z)\, dz,$$

where $Q(z)$ is the quantized output (Li et al., 2021).
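Because $f_Z$ is the arcsine density, both Lloyd–Max update steps have closed forms (the conditional means and cell probabilities involve only $\sqrt{1-z^2}$ and $\arcsin$). A minimal sketch of the alternating minimization, with illustrative iteration count and initialization:

```python
import numpy as np

def lloyd_max_arcsine(b, iters=200):
    """Lloyd-Max quantizer for the gamma-free RFF marginal
    f_Z(z) = 1/(pi*sqrt(1-z^2)) on [-1, 1], with M = 2^b levels."""
    M = 2 ** b
    # Initialize centroids at quantiles of the arcsine distribution.
    p = (np.arange(M) + 0.5) / M
    mu = np.sin(np.pi * (p - 0.5))
    for _ in range(iters):
        # Threshold step: midpoints between adjacent centroids.
        t = np.concatenate(([-1.0], (mu[:-1] + mu[1:]) / 2, [1.0]))
        # Centroid step: conditional mean of f_Z on each cell (closed form:
        # integral of z*f_Z is -sqrt(1-z^2)/pi, CDF involves arcsin).
        num = np.sqrt(1 - t[:-1] ** 2) - np.sqrt(1 - t[1:] ** 2)
        den = np.arcsin(t[1:]) - np.arcsin(t[:-1])
        mu = num / den
    return t, mu

t, mu = lloyd_max_arcsine(b=2)
# Quantize samples from the arcsine marginal and measure distortion D1.
rng = np.random.default_rng(0)
z = np.cos(rng.uniform(0, 2 * np.pi, 100_000))
q = mu[np.clip(np.searchsorted(t, z) - 1, 0, len(mu) - 1)]
print(np.mean((z - q) ** 2))  # empirical D1 at 2 bits
```

Since the density is $\gamma$-free, this codebook can be computed once offline and reused for every kernel bandwidth.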

LM$^2$–RFF:

Targets quantization of $z^2$, optimizing squared errors in the high-similarity regime where $z(u) \approx z(v)$. The procedure performs LM on $s = z^2$ under $f_{Z^2}(s) = \frac{1}{\pi\sqrt{s - s^2}}$, then symmetrizes and maps back to $[-1, 1]$, reducing error for applications sensitive to squared-cosine error, notably in vanilla (unnormalized) estimators.

Sigma–Delta and Distributed Noise-Shaping Quantization:

Sequential recursive schemes, such as first-order Sigma–Delta, quantize $z \in [-1, 1]^m$ using feedback to achieve noise shaping: $$q_i = \arg\min_{v \in \mathcal{A}} |z_i + u_{i-1} - v|, \quad u_i = u_{i-1} + (z_i - q_i),$$ with advanced schemes of order $r$ extending this to higher-order difference matrices (Zhang et al., 2021). Distributed noise-shaping employs nonlocal feedback with a parameter $\beta \in (1, 2)$ and constructs a condensed embedding via a linear transform. Both classes admit nonasymptotic uniform kernel approximation error bounds.
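The first-order recursion above is a few lines of code. The sketch below (2-bit alphabet and input length are illustrative choices) shows the defining property of noise shaping: the state $u_i$, which equals the running sum of the quantization errors, stays bounded by half the alphabet spacing:

```python
import numpy as np

def sigma_delta(z, alphabet):
    """First-order Sigma-Delta quantization:
    q_i = argmin_{v in A} |z_i + u_{i-1} - v|,  u_i = u_{i-1} + (z_i - q_i)."""
    q = np.empty_like(z)
    u = 0.0
    for i, zi in enumerate(z):
        q[i] = alphabet[np.argmin(np.abs(zi + u - alphabet))]
        u = u + (zi - q[i])
    return q

rng = np.random.default_rng(0)
z = np.cos(rng.uniform(0, 2 * np.pi, 1024))   # RFF-like coordinates in [-1, 1]
alphabet = np.array([-1.0, -1 / 3, 1 / 3, 1.0])  # 2-bit alphabet, spacing 2/3
q = sigma_delta(z, alphabet)

# The state never exceeds half the alphabet spacing (1/3 here), so partial
# sums of q track partial sums of z closely -- the error is "shaped" into
# high frequencies that a condensation/averaging step then suppresses.
print(np.max(np.abs(np.cumsum(z - q))))
```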

3. Theoretical Error, Bitrate, and Memory–Accuracy Tradeoffs

All QFF schemes provide explicit theoretical characterizations of distortion and memory usage:

  • Distortion vs. Bitrate: For quantizers with bit depth $b$, both LM and noise-shaping quantizers achieve $D_1, D_2 \to 0$ as $b \to \infty$. In practice, as few as 2 or 3 bits suffice for $\leq 1\%$ increases in kernel regression or SVM error rates compared to full-precision RFFs (Li et al., 2021, Zhang et al., 2021).
  • Memory Complexity: Each data point requires $mb$ bits; 2-bit quantization provides a $16\times$ storage saving over 32-bit float representations.
  • Kernel Estimate Error: For a quantized estimator

$$\hat{K}_Q(u,v) = \frac{2}{m} \sum_{i=1}^{m} Q(z_i(u))\, Q(z_i(v)),$$

mean and variance are controlled analytically by the quantizer distortion $D_1$; a normalized estimator further reduces variance, especially for 1-bit quantization.

  • Error Bounds: For Sigma–Delta quantizers, the error decays polynomially in the number of features $m$, and exponentially in compaction and bit rate under combined compression (Zhang et al., 2021).

Empirical results consistently indicate that LM and noise-shaping QFFs outperform stochastic and naive sign quantization, especially at ultra-low bitrates.
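The bitrate-accuracy tradeoff is easy to reproduce in a toy setting. The sketch below uses a plain uniform mid-rise quantizer as a simple stand-in for the Lloyd–Max codebook (all dimensions and the seed are illustrative assumptions) and evaluates the quantized estimator $\hat{K}_Q$ against the exact Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, gamma = 8, 8192, 1.0
W = rng.standard_normal((m, d))
b = rng.uniform(0, 2 * np.pi, m)
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)
zu = np.cos(gamma * (W @ u) + b)
zv = np.cos(gamma * (W @ v) + b)

def quantize(z, bits):
    """Uniform mid-rise quantizer on [-1, 1] (stand-in for Lloyd-Max)."""
    M = 2 ** bits
    step = 2.0 / M
    idx = np.clip(np.floor((z + 1) / step), 0, M - 1)
    return -1 + step * (idx + 0.5)

exact = np.exp(-gamma**2 / 2 * np.linalg.norm(u - v) ** 2)
full = (2 / m) * zu @ zv                    # full-precision RFF estimate
print("full precision error:", abs(full - exact))
for bits in (1, 2, 4):
    k_q = (2 / m) * quantize(zu, bits) @ quantize(zv, bits)
    print(bits, "bits, error:", abs(k_q - exact))
```

Consistent with the text, the unnormalized 1-bit estimate is visibly biased (a normalized estimator or the Lloyd–Max codebook helps there), while a few bits already track the full-precision estimate closely.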

4. Asymmetric and Adaptive Quantization Strategies

QFF extends to asymmetric random periodic features as shown in (Schellekens et al., 2020), where only one side of a kernel evaluation pipeline employs quantized features. For features $q(t) = \mathrm{sign}(\cos t)$ (square wave), a semi-quantized scheme with one side quantized and the other using the standard cosine map recovers the original kernel (up to known scaling) without expectation bias: $$\mathbb{E}\big[\langle q(\Omega^T x + \xi), \cos(\Omega^T y + \xi)\rangle\big] = \frac{2}{\pi} k(x - y).$$ This exact recovery does not hold for symmetric quantization (both sides quantized), suggesting particular relevance in client–server and embedded inference scenarios. Uniform $\ell_\infty$ error bounds are established in terms of the sample complexity and the mean Lipschitz smoothness of the periodic map.

This approach achieves order-of-magnitude bitrate reductions (e.g., 1-bit per entry) with negligible (<5%) accuracy degradation in SVM classification, especially when only the query or database side is quantized.
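The unbiasedness of the semi-quantized estimator follows from the Fourier series of the square wave: only its fundamental harmonic survives the random phase $\xi$, leaving a clean $2/\pi$ scaling. A Monte Carlo sketch (feature count, bandwidth, and inputs are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 4, 200_000, 1.0
W = gamma * rng.standard_normal((D, d))   # frequencies for the Gaussian kernel
xi = rng.uniform(0, 2 * np.pi, D)         # shared random phases

x = rng.standard_normal(d)
y = x + 0.3 * rng.standard_normal(d)

# One side quantized to 1 bit (square wave), the other kept full precision,
# as in the client-server setting where only the database side is compressed.
q_side = np.sign(np.cos(W @ x + xi))
c_side = np.cos(W @ y + xi)
semi = (q_side @ c_side) / D

k = np.exp(-gamma**2 / 2 * np.linalg.norm(x - y) ** 2)
print(semi, (2 / np.pi) * k)   # agree up to the known 2/pi scaling
```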

5. Quantized Fourier Features in Neural Field Representations

In neural field models such as Neural Image Representations, Neural Radiance Fields (NeRF), and Signed Distance Functions (SDF), QFF are used as a binning mechanism in the Fourier domain (Lee et al., 2022). Instead of being optimized for signal compression, here the quantization creates localized feature bins in the range of each Fourier feature. Key properties:

  • QFF partitions each $\sin(\omega x)$ or $\cos(\omega x)$ across $M$ bins, associating each bin with a small learnable vector.
  • Periodic binning is implemented efficiently via interpolation between bin vectors, exploiting the inherent periodicity of sinusoids; this allows smoothness to be controlled, and discontinuities are avoided by adding back the original Fourier features.
  • The multiresolution nature is preserved, as high-frequency bins are naturally narrower.
  • Empirically, QFF reduces model size (up to $20\%$), accelerates convergence (requiring an order of magnitude fewer steps for similar PSNR or Chamfer metrics), and maintains or improves quality compared to non-quantized Fourier encodings or hard spatial grids.

Typical parameter choices (for 3D NeRF): $M = 64$–$256$ bins, $N = 8$–$16$ feature channels, $L \simeq 6$–$128$ frequencies. The QFF approach leads to fast high-frequency fitting and preserves network smoothness with minimal adjustments to standard MLP architectures.
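One plausible reading of the binning mechanism can be sketched as follows: each Fourier feature value, which lives in $[-1, 1]$, serves as a continuous coordinate into $M$ learnable bin vectors of $N$ channels, with linear interpolation between adjacent bins. This is a hypothetical minimal sketch, not the authors' implementation; the frequency schedule, bin counts, and lookup function `qff_lookup` are illustrative assumptions:

```python
import numpy as np

def qff_lookup(x, freqs, bins):
    """Sketch of QFF binning for a scalar coordinate x.
    bins has shape (L, M, N): L frequencies, M bins, N feature channels."""
    L, M, N = bins.shape
    out = []
    for l, w in enumerate(freqs):
        s = np.sin(w * x)                 # Fourier feature value in [-1, 1]
        pos = (s + 1) / 2 * (M - 1)       # continuous bin coordinate
        i0 = int(np.floor(pos))
        i1 = min(i0 + 1, M - 1)
        frac = pos - i0
        # Linear interpolation between adjacent learnable bin vectors.
        out.append((1 - frac) * bins[l, i0] + frac * bins[l, i1])
    return np.concatenate(out)            # shape (L * N,)

rng = np.random.default_rng(0)
freqs = 2.0 ** np.arange(4) * np.pi       # assumed octave frequency schedule
bins = rng.standard_normal((4, 16, 8))    # L=4, M=16 bins, N=8 channels
feat = qff_lookup(0.37, freqs, bins)
print(feat.shape)
```

Because the lookup is differentiable in `bins`, the bin vectors can be trained jointly with the downstream MLP; higher frequencies sweep through all $M$ bins over shorter spatial intervals, giving the naturally narrower high-frequency bins described above.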

6. Tensorized and Structured Quantized Fourier Features

Expanding beyond scalar quantization, (Wesel et al., 2023) introduces a tensorized ("quantized") decomposition approach for Fourier features useful in high-dimensional kernel machines. For each dimension, the set of $M_i$ frequencies is factorized via radix-$Q$ expansion, replacing the expensive tensor-product feature with a higher-order tensor of much smaller per-mode dimension ($Q$):

  • Each standard Vandermonde vector $v^{(i)}(x_i)$ is decomposed into $s^{(i,1)}(x_i) \otimes \dots \otimes s^{(i,K_i)}(x_i)$, mapping original $d$-way tensors of side $M_i$ to $(\sum_i K_i)$-way tensors of side $Q$.
  • The model weights are themselves tensorized, e.g., in Tensor-Train or CPD structures, reducing memory while improving expressivity—manifested as a higher VC-dimension bound for the same parameter budget.
  • In large-scale regression tasks, quantized tensor network models (QTKM/QFF with TT structure) reach lower test error than both non-quantized TNs and kernel ridge regression at drastically lower parameter counts.
  • This tensorization regularizes learning by focusing model capacity on the most salient data-driven harmonics, and is practical for datasets with up to $10^7$ samples and moderate feature cardinality.

This paradigm requires all $M_i$ to be factorizable as $Q^{K_i}$, and optimization is nonconvex but manageable via established tensor network solvers.
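The radix-$Q$ identity underlying the decomposition is that writing an index $m = \sum_k m_k Q^k$ turns $e^{\mathrm{i}\omega x m}$ into a product $\prod_k e^{\mathrm{i}\omega x\, m_k Q^k}$, so the length-$Q^K$ Fourier/Vandermonde vector is exactly a Kronecker product of $K$ vectors of length $Q$. A small numerical check (values of $x$, $Q$, $K$, and $\omega$ are arbitrary):

```python
import numpy as np

def vandermonde(x, M, w=1.0):
    """Full Fourier/Vandermonde feature vector [e^{i m w x}]_{m=0..M-1}."""
    return np.exp(1j * w * x * np.arange(M))

def quantized_factors(x, Q, K, w=1.0):
    """Radix-Q factorization: M = Q^K frequencies as K factors of size Q.
    Factor k carries the digit weight Q^k of the index expansion."""
    return [np.exp(1j * w * x * Q**k * np.arange(Q)) for k in range(K)]

x, Q, K = 0.41, 2, 4            # M = 16 frequencies from 4 factors of size 2
full = vandermonde(x, Q**K)

# Kronecker product with the most significant digit outermost reproduces
# the full vector, storing 4*2 = 8 entries instead of 16.
kron = np.array([1.0 + 0j])
for f in reversed(quantized_factors(x, Q, K)):
    kron = np.kron(kron, f)
print(np.allclose(kron, full))  # True
```

The storage ratio $KQ$ versus $Q^K$ is what makes the tensorized weights (TT or CPD) tractable at large mode sizes.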

7. Practical Recommendations, Limitations, and Prospects

Implementation Guidance:

  • Precompute $b$-bit codebooks or bin-lookup tables, leveraging the universal $\gamma$-free density on $[-1, 1]$ for all RFF (Li et al., 2021).
  • For classical kernel machines: 1-bit LM-RFF with a normalized estimator is often within 2–3% of full performance; 2 bits achieves $<1\%$ degradation with $\sim 10\times$ memory savings.
  • In neural fields: choose bin count $M$ empirically, interpolate across periodic bins, and add back the original Fourier features for continuity. Binning at higher frequencies improves fine detail without discontinuities.

Limitations:

  • For symmetric (both-side) quantization, kernel recovery is not perfectly unbiased; for client–server or asymmetric architectures, theoretical exactness is attainable, with error bounds governed by feature complexity (Schellekens et al., 2020).
  • Base-Q tensorized QFFs require suitable factorization of mode sizes, and optimization over TNs is nonconvex and sensitive to hyperparameter selection (Wesel et al., 2023).
  • Some variants, such as LM$^2$–RFF, offer improved performance only in certain estimator regimes.

Prospects and Open Directions:

  • Adaptive or learned binning, nonuniform quantization, and frequency learning may further enhance QFF performance, especially for neural field and high-dimensional modeling tasks (Lee et al., 2022).
  • Extension to quantized polynomial features and hybrid deep architectures is compelling for computational and storage efficiency at scale.
  • Theoretical understanding of approximation error as a function of bin count and feature channel dimension remains a subject for future work.

Quantized Fourier Features thus represent a mature, versatile technology for scalable kernel approximation, efficient neural field modeling, and expressive tensor network construction, enabling practical, resource-efficient implementations without substantial loss of fidelity (Li et al., 2021, Zhang et al., 2021, Schellekens et al., 2020, Lee et al., 2022, Wesel et al., 2023).
