Powerlaw Random Feature Model
- The model is a framework for high-dimensional random feature regression that employs power-law decay in feature spectra and target weights.
- It provides non-asymptotic, dimension-free risk formulas that delineate the trade-offs among sample complexity, model size, and regularization.
- The framework prescribes optimal training protocols — regularization choices for ridge regression and learning-rate and batch-size schedules for SGD — validated by empirical studies.
The powerlaw random feature model is a framework for analyzing high-dimensional random feature regression schemes, where the spectrum of the feature covariance operator and the target function weights both exhibit power-law decay. This model enables rigorous characterization of generalization rates, test errors, and optimal training protocols for both ridge regression and stochastic gradient descent (SGD), offering non-asymptotic, dimension-free, and closed-form scaling laws. These analyses reveal precise phase diagrams for generalization, optimal trade-offs between sample complexity, model size, and regularization, as well as compute-optimal and training-optimal schedules under resource constraints (Defilippis et al., 2024, Bordelon et al., 4 Feb 2026).
1. Structure and Assumptions of the Powerlaw Random Feature Model
The model considers regression in a Hilbert feature space $\mathcal{H}$, either finite- or infinite-dimensional, with a feature covariance (integral) operator $\Sigma$ possessing eigenpairs $(\lambda_i, \psi_i)_{i \ge 1}$. The eigenvalues of $\Sigma$ are assumed to decay as a power law,
$$\lambda_i \asymp i^{-\alpha}, \qquad \alpha > 1,$$
where $\alpha$ is the capacity exponent (an equivalent decay parameter is used in alternate notation). The regression target decomposes in the eigenbasis as
$$f_\star = \sum_{i \ge 1} \theta_i \, \psi_i,$$
with coefficient decay $\theta_i^2 \asymp i^{-(1+2\alpha r)}$ and a source exponent $r > 0$.
For random feature regression, $n$ i.i.d. samples $(x_k, y_k)_{k \le n}$ are drawn, with $y_k = f_\star(x_k) + \varepsilon_k$ and noise variance $\mathbb{E}[\varepsilon_k^2] = \sigma^2$, and the random feature map is
$$\phi(x) = \big(\langle v_1, \psi(x)\rangle, \dots, \langle v_p, \psi(x)\rangle\big) \in \mathbb{R}^p,$$
with i.i.d. random directions $v_j$. The model is analyzed both for ridge regression with finite ridge $\lambda > 0$ and in the context of SGD, considering both learning rate schedules and batch-size protocols (Defilippis et al., 2024, Bordelon et al., 4 Feb 2026).
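As a concrete illustration, the setup can be instantiated numerically. This is a minimal sketch, not the authors' code: the truncation level `M`, the Gaussian latent design, and the normalization of the random directions are simplifying assumptions for this toy example.

```python
import numpy as np

# Minimal synthetic sketch of the powerlaw random feature setup (not the
# authors' code). The truncation M, the Gaussian latent design, and the
# normalization of the random directions v_j are simplifying assumptions.
def make_powerlaw_instance(n=200, p=100, M=2000, alpha=2.0, r=0.5,
                           noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    i = np.arange(1, M + 1, dtype=float)
    lam = i ** (-alpha)                       # spectrum: lambda_i ~ i^{-alpha}
    theta = i ** (-(1 + 2 * alpha * r) / 2)   # target weights: theta_i^2 ~ i^{-(1+2*alpha*r)}
    # latent feature vector (sqrt(lambda_i) psi_i(x))_i, Gaussian for simplicity
    Psi = rng.standard_normal((n, M)) * np.sqrt(lam)
    y = Psi @ (theta / np.sqrt(lam)) + noise * rng.standard_normal(n)
    V = rng.standard_normal((M, p))           # i.i.d. random directions v_j
    Phi = Psi @ V                             # random features phi_j(x_k) = <v_j, psi(x_k)>
    return Phi, y

def ridge_fit(Phi, y, lam_reg=1e-3):
    """Ridge regression in the random feature basis."""
    n, p = Phi.shape
    return np.linalg.solve(Phi.T @ Phi + n * lam_reg * np.eye(p), Phi.T @ y)
```

Fitting `ridge_fit(*make_powerlaw_instance())` yields the RFRR estimator whose test error the deterministic equivalents below characterize.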
2. Deterministic Equivalent Test Error: Non-Asymptotic, Dimension-Free Risk Formulas
The excess risk for random feature ridge regression (RFRR) is
$$\mathcal{R}(\lambda) = \mathbb{E}_x\big[(f_\star(x) - \hat f_\lambda(x))^2\big],$$
where $\hat f_\lambda$ is the ridge estimator fitted on the $p$ random features with regularization $\lambda$. Under a concentration condition on the random features (Assumption 3.1), the risk admits a dimension-free deterministic equivalent,
$$\mathcal{R}(\lambda) = (1 \pm o(1))\, \mathsf{R}(n, p, \lambda),$$
where $\mathsf{R}$ depends only on the feature spectrum $(\lambda_i)$, the target weights $(\theta_i)$, the regularization parameter $\lambda$, $n$, and $p$. In condensed form (see the paper for the exact statement), the recipe is:
- Define the degrees-of-freedom functionals $\mathrm{df}_1(\nu) = \sum_i \frac{\lambda_i}{\lambda_i + \nu}$ and $\mathrm{df}_2(\nu) = \sum_i \frac{\lambda_i^2}{(\lambda_i + \nu)^2}$.
- Solve a pair of coupled fixed-point equations for effective regularizations $(\nu_1, \nu_2)$: a feature-side equation balancing $p$, $\lambda$, and $\mathrm{df}_1(\nu_1)$, and a sample-side equation balancing $n$, $\nu_1$, and $\mathrm{df}_1(\nu_2)$.
- Express the bias $\mathcal{B}$ through the target weights filtered at the effective regularization, via terms of the form $\theta_i^2 \nu_2^2 / (\lambda_i + \nu_2)^2$, and the variance $\mathcal{V}$ through $\sigma^2$ and $\mathrm{df}_2$, each inflated by factors $(1 - \mathrm{df}_2(\nu_2)/n)^{-1}$ and $(1 - \mathrm{df}_2(\nu_1)/p)^{-1}$ encoding the sample- and feature-side descent phenomena.
So $\mathsf{R} = \mathcal{B} + \mathcal{V}$. This deterministic equivalent is non-asymptotic (no large-sample assumption), multiplicative (relative error is controlled), and dimension-free (applicable regardless of the ambient or effective feature dimension) (Defilippis et al., 2024).
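In the kernel regime ($p \to \infty$) this recipe collapses to a single sample-side fixed point, which can be solved numerically. The sketch below is a hedged illustration of that special case only; the normalizations are mine and may differ from the paper's exact statement, and the finite-$p$ equivalent adds the feature-side fixed point.

```python
import numpy as np

# Kernel-regime (p -> infinity) special case of the deterministic
# equivalent, solved numerically. Normalizations are illustrative and may
# differ from the paper's exact statement.
def effective_reg(lam, n, ridge):
    """Solve n - ridge/nu = df1(nu) for the effective regularization nu."""
    df1 = lambda nu: np.sum(lam / (lam + nu))
    lo, hi = 1e-16, 1e6
    for _ in range(200):              # geometric bisection; lhs - rhs increases in nu
        nu = np.sqrt(lo * hi)
        if n - ridge / nu > df1(nu):
            hi = nu
        else:
            lo = nu
    return np.sqrt(lo * hi)

def det_equiv_risk(lam, theta2, n, ridge, sigma2):
    """Deterministic-equivalent bias + variance at the effective regularization."""
    nu = effective_reg(lam, n, ridge)
    df2 = np.sum(lam ** 2 / (lam + nu) ** 2)
    bias = nu ** 2 * np.sum(theta2 / (lam + nu) ** 2) / (1 - df2 / n)
    var = sigma2 * df2 / (n - df2)
    return bias + var

# power-law instance: lambda_i ~ i^{-2}, theta_i^2 ~ i^{-2}
i = np.arange(1, 5001, dtype=float)
risk = det_equiv_risk(i ** -2.0, i ** -2.0, n=400, ridge=1e-3, sigma2=0.25)
```

Since $\mathrm{df}_1$ is decreasing and $\mathrm{ridge}/\nu$ is decreasing in $\nu$, the fixed point is unique and bisection in $\log \nu$ converges reliably.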
3. Sharp Scaling Laws and Minimax Rates Under Powerlaw Decay
When the power-law assumptions are imposed on both spectrum and target coefficients,
$$\lambda_i \asymp i^{-\alpha}, \qquad \theta_i^2 \asymp i^{-(1+2\alpha r)},$$
and setting $\lambda \asymp n^{-\ell}$ for $0 < \ell \le \alpha$, explicit scaling exponents for the risk are derived,
$$\mathsf{R} \asymp n^{-\gamma(\ell)} \quad \text{(up to logarithmic factors)},$$
where the bias contributes the exponent
$$2\tilde r\,\ell, \qquad \tilde r = \min(r, 1) \quad \text{(ridge saturates at } r = 1\text{)},$$
and the noise-driven variance contributes the exponent
$$1 - \ell/\alpha.$$
The overall risk exponent is $\gamma(\ell) = \min\big(2\tilde r\,\ell,\ 1 - \ell/\alpha\big)$.
The minimax-optimal (fastest) rate
$$\mathsf{R} \asymp n^{-\frac{2\alpha\tilde r}{2\alpha\tilde r + 1}}$$
is achieved by
$$\ell^\star = \frac{\alpha}{2\alpha\tilde r + 1},$$
implying the minimal number of random features to attain minimax rates is $p \gtrsim n^{\frac{1}{2\alpha\tilde r + 1}}$ — the effective dimension at the optimal ridge — with regularization $\lambda \asymp n^{-\alpha/(2\alpha\tilde r + 1)}$ (Defilippis et al., 2024).
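The optimal ridge exponent can be recovered numerically. Assuming the simplified rate $\gamma(\ell) = \min(2\min(r,1)\,\ell,\ 1 - \ell/\alpha)$ (the kernel-regime bias/variance exponents, ignoring constants and log factors), a grid search reproduces the closed-form optimizer:

```python
import numpy as np

# Grid-search sketch for the optimal ridge exponent. Assumes the simplified
# rate gamma(ell) = min(2*min(r,1)*ell, 1 - ell/alpha); constants and
# logarithmic factors are ignored.
def optimal_ridge_exponent(alpha, r, grid=200001):
    r_eff = min(r, 1.0)                      # ridge saturates at r = 1
    ell = np.linspace(0.0, alpha, grid)
    gamma = np.minimum(2 * r_eff * ell, 1 - ell / alpha)
    k = int(np.argmax(gamma))
    return ell[k], gamma[k]

alpha, r = 2.0, 0.5
ell_star, rate = optimal_ridge_exponent(alpha, r)
# closed form: ell* = alpha/(2*alpha*min(r,1) + 1) = 2/3, rate = 2/3
```

The maximum of a min of an increasing and a decreasing linear function sits at their crossing, which is exactly the balance point $2\tilde r\,\ell = 1 - \ell/\alpha$.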
4. SGD Dynamics, Optimal Learning Rate Scheduling, and Training Phases
In SGD-based training of powerlaw random feature regression, the evolution of the mean-square error in each spectral coordinate is tracked, leading to a continuous-time optimal control formulation for both the learning rate $\eta(t)$ and the batch size $B(t)$. Two distinct regimes (phases) emerge:
- Easy phase: The optimal learning rate schedule is a polynomial decay in time, and the excess loss decays at a corresponding power-law rate in the training horizon.
- Hard phase: The optimal schedule exhibits a warmup–stable–decay shape, allocating most of the training to a fixed learning rate and a vanishing fraction of the horizon to a final annealing stage, with a slower power-law decay of the excess loss. The optimal batch size similarly follows a schedule driven by the same variational principle (Bordelon et al., 4 Feb 2026).
These schedules outperform constant or simple power-law learning rate protocols, and the optimal exponents are not attainable by “anytime” policies that ignore the training horizon (Bordelon et al., 4 Feb 2026).
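The two schedule shapes can be sketched as follows. The base rate `eta0`, decay exponent `kappa`, and warmup/decay fractions are placeholder values for illustration, not the optimized schedules from the paper:

```python
import numpy as np

# Illustrative shapes only: a polynomial-decay schedule (easy phase) and a
# warmup-stable-decay schedule (hard phase). eta0, kappa, and the warmup /
# decay fractions are placeholders, not the paper's optimized values.
def poly_schedule(T, eta0=0.5, kappa=0.3):
    t = np.arange(1, T + 1)
    return eta0 * t ** (-kappa)

def wsd_schedule(T, eta0=0.5, warmup_frac=0.02, decay_frac=0.1):
    t = np.arange(1, T + 1)
    t_w = max(1, int(warmup_frac * T))        # short linear warmup
    t_d = max(1, int(decay_frac * T))         # small final decay window
    eta = np.full(T, eta0)                    # long stable plateau
    eta[:t_w] = eta0 * t[:t_w] / t_w
    tail = np.arange(1, t_d + 1)
    eta[T - t_d:] = eta0 * (1 - tail / t_d)   # linear anneal to zero
    return eta
```

The key qualitative contrast is that the easy-phase schedule decays throughout training, whereas the hard-phase schedule holds a constant rate for most of the run and anneals only over a vanishing fraction of the horizon.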
5. Special Cases, Regularization, and Phase Transitions
The model encompasses several noteworthy limits and phase phenomena:
- Kernel regime ($p \to \infty$): The theory reduces to kernel ridge regression, with a single univariate fixed point determining the effective regularization.
- Approximation limit ($n \to \infty$): Risk is determined purely by the bias incurred from truncating the model to $p$ random features.
- Interpolation cusp: At the critical point $p = n$ with vanishing regularization, the variance diverges, manifesting the “double-descent” phenomenon.
- Regularization trade-off: The parameter $\lambda$ tunes the bias–variance balance precisely, with its optimal power-law scaling in $n$ explicitly characterized.
- Minimax optimality: The model quantifies the minimal number of features necessary for minimax generalization rates, often implying a significant reduction in model size relative to the sample size $n$.
This phase diagram, accessible through explicit formulas, extends classical results on kernel learning rates to the more general random feature context (Defilippis et al., 2024).
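The interpolation cusp can be reproduced in the simplest analogous setting, min-norm linear regression with isotropic Gaussian features, where the test risk peaks as the parameter count crosses the sample size. All sizes and the noise level below are arbitrary illustrative choices:

```python
import numpy as np

# Toy double-descent check with min-norm least squares on isotropic
# Gaussian features: test risk peaks when the parameter count d crosses
# the sample size n. Sizes and noise are arbitrary illustrative choices.
def median_test_risk(n=50, d=50, noise=0.5, trials=30, n_test=500, seed=0):
    rng = np.random.default_rng(seed)
    risks = []
    for _ in range(trials):
        w = rng.standard_normal(d) / np.sqrt(d)         # random teacher
        X = rng.standard_normal((n, d))
        y = X @ w + noise * rng.standard_normal(n)
        w_hat = np.linalg.lstsq(X, y, rcond=None)[0]    # min-norm solution
        Xt = rng.standard_normal((n_test, d))
        risks.append(np.mean((Xt @ w_hat - Xt @ w) ** 2))
    return float(np.median(risks))

under = median_test_risk(d=12)    # underparameterized
cusp  = median_test_risk(d=50)    # interpolation threshold d = n
over  = median_test_risk(d=200)   # overparameterized
```

At `d = n` the min-norm interpolator fits the noise through a nearly singular design matrix, so its variance blows up, matching the cusp in the deterministic-equivalent phase diagram.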
6. Compute-Optimal Scaling, Mini-batch Protocols, and Momentum Extensions
When model size $p$ and training horizon $T$ are optimized jointly for a fixed compute budget $C \propto pT$, the theory predicts power-law allocations $p^\star(C)$ and $T^\star(C)$ and a power-law decay of the loss in $C$, with distinct exponents in the two phases:
- Easy phase: a balanced split of the budget between model size and training horizon, with the faster compute exponent for the loss.
- Hard phase: a different allocation with a strictly slower compute exponent, reflecting the cost of the warmup–stable–decay dynamics.
For a fixed sample budget, the generalization error likewise decays as a phase-dependent power of the total number of samples processed.
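The joint allocation can be illustrated with a generic two-term scaling law under a multiplicative budget constraint; the exponents and constants below are placeholders, not the paper's phase exponents. Balancing the two loss terms gives the closed-form split checked at the end:

```python
import numpy as np

# Generic compute-optimal allocation sketch: minimize
#   L(p, T) = A * p**(-a) + B * T**(-b)   subject to   p * T = C.
# The exponents a, b and constants A, B are placeholders, not the
# phase-dependent exponents of the paper.
def compute_optimal_split(C, A=1.0, B=1.0, a=0.5, b=1.0, grid=100000):
    p = np.logspace(0.0, np.log10(C), grid)   # sweep model size, T = C / p
    loss = A * p ** (-a) + B * (C / p) ** (-b)
    k = int(np.argmin(loss))
    return p[k], C / p[k], loss[k]

C = 1e8
p_star, T_star, _ = compute_optimal_split(C)
# balancing the two terms gives p* = (a*A/(b*B))**(1/(a+b)) * C**(b/(a+b))
```

The larger of the two exponents $a, b$ determines which resource is cheaper to scale, so the budget tilts toward the other.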
When time-varying momentum is included in the optimization, further improvements are possible: in the easy phase, the optimal momentum schedule only slightly affects constants, but in the hard phase, joint optimization of learning rate, batch size, and momentum yields strictly faster decay exponents than baseline SGD (Bordelon et al., 4 Feb 2026).
7. Practical Implications and Empirical Validation
The deterministic equivalents and resulting scaling laws directly inform the optimal selection of the regularization parameter $\lambda$ and the random feature count $p$ for generalization, and prescribe precise learning-rate and batch-size schedules for SGD training. This dimension-free theory is empirically validated on a wide range of real and synthetic tasks, capturing phase transitions, risk minima, and interpolation artifacts observed in practice.
The analysis provides rigorous guarantees even in infinite-dimensional feature spaces, extending classical kernel learning results to model classes where random feature methods are employed. The theory reveals that with appropriate tuning—guided by the powerlaw decay exponents and explicit closed-form solutions—optimal generalization often requires far fewer random features than samples, and that sophisticated learning rate schedules and joint optimization of minibatch size and momentum can further enhance learning efficiency (Defilippis et al., 2024, Bordelon et al., 4 Feb 2026).