Dynamic Two-Layer MLPs
- Dynamic two-layer MLPs are neural networks with smooth activations where gradient updates concentrate in a fixed low-dimensional subspace established at initialization.
- This emergent subspace behavior supports low-rank training methods that reduce memory and compute demands while preserving model performance.
- Theoretical analysis and empirical validation highlight that proper initialization and smooth nonlinearity are crucial for maintaining subspace invariance during training.
Dynamic two-layer multilayer perceptrons (MLPs) with smooth activations exhibit an emergent phenomenon whereby gradient-based training drives almost all weight changes within a fixed low-dimensional subspace determined at initialization. This behavior, observed under both full-batch and stochastic training regimes, underlies the success of low-rank training, compression, and adaptation methods and can be exploited via explicit architectural parameterizations to yield substantial reductions in both memory and compute cost while achieving performance parity with fully-parameterized models (Xu et al., 5 Feb 2026).
1. Mathematical Formulation and Model Setup
Consider a two-layer MLP of the form
$$f(X) = A\,\sigma(W X),$$
where $X \in \mathbb{R}^{d \times n}$ denotes whitened input data ($\tfrac{1}{n} X X^\top = I_d$), $Y \in \mathbb{R}^{K \times n}$ the target labels, $W \in \mathbb{R}^{m \times d}$ the learned first-layer weights, and $A \in \mathbb{R}^{K \times m}$ a fixed, full-row-rank second-layer matrix. The entrywise nonlinearity $\sigma$ is assumed smooth (at least twice continuously differentiable with bounded derivatives). The loss function is the squared error
$$\mathcal{L}(W) = \frac{1}{2n} \big\| A\,\sigma(W X) - Y \big\|_F^2,$$
and training proceeds by gradient descent (GD) or its stochastic variants on $W$, with updates
$$W_{t+1} = W_t - \eta\, \nabla_W \mathcal{L}(W_t).$$
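The setup above can be sketched numerically. This is a minimal NumPy illustration, not the paper's code: the dimensions, the tanh nonlinearity, and the exact whitening construction are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, K, n = 8, 32, 2, 64          # input dim, width, output dim, samples

# Whitened inputs: rescale the SVD factors so that (1/n) X X^T = I_d exactly.
X = rng.standard_normal((d, n))
U_x, _, Vt_x = np.linalg.svd(X, full_matrices=False)
X = np.sqrt(n) * U_x @ Vt_x

Y = rng.standard_normal((K, n))    # targets
A = rng.standard_normal((K, m))    # fixed, full-row-rank second layer

sigma = np.tanh                    # smooth activation
dsigma = lambda z: 1.0 - np.tanh(z) ** 2

def loss(W):
    return 0.5 / n * np.linalg.norm(A @ sigma(W @ X) - Y, "fro") ** 2

def grad(W):
    R = A @ sigma(W @ X) - Y                      # residual, K x n
    return ((A.T @ R) * dsigma(W @ X)) @ X.T / n  # m x d

# Small semi-orthogonal initialization and one GD step.
W = 1e-2 * np.linalg.qr(rng.standard_normal((m, d)))[0]  # m x d
eta = 1e-2
W_new = W - eta * grad(W)
print(loss(W), loss(W_new))        # loss decreases after one small GD step
```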
2. Training Dynamics and Subspace Invariance
Backpropagation yields the gradient
$$\nabla_W \mathcal{L}(W) = \frac{1}{n}\,\big[(A^\top R) \odot \sigma'(W X)\big]\, X^\top, \qquad R = A\,\sigma(W X) - Y,$$
so near a small initialization, where $\sigma'(W X)$ is approximately constant, the gradient is close to $\frac{\sigma'(0)}{n}\, A^\top R\, X^\top$, a matrix of rank at most $K$.
A key finding is that GD updates to $W$ concentrate in a fixed $2K$-dimensional subspace determined by the initialization and the initial gradient structure. Explicitly, let the SVD of the initial gradient be $G_0 = \nabla_W \mathcal{L}(W_0) = U \Sigma V^\top$, with the top singular subspaces spanned by the leading columns $U_K$ and $V_K$. The complement, denoted $\mathcal{S}_\perp$, admits orthonormal bases $U_\perp$ and $V_\perp$ for the output and input spaces, with $U_\perp^\top U_K = 0$ and $V_\perp^\top V_K = 0$.
The magnitude of the projected gradients and updates within $\mathcal{S}_\perp$, i.e. $\|U_\perp^\top \nabla_W \mathcal{L}(W_t)\, V_\perp\|$, remains uniformly small throughout training. Thus, the dynamics of $W$ are confined to the active subspace orthogonal to $\mathcal{S}_\perp$. Perturbation-theoretic arguments (e.g., Wedin’s $\sin\Theta$ theorem) ensure that this subspace drifts only minimally during training.
3. Theoretical Conditions and Guarantees for Low-Rank Dynamics
Several conditions are required to guarantee the emergence and invariance of the low-dimensional subspace:
- Input normalization: inputs must be whitened, $\tfrac{1}{n} X X^\top = I_d$.
- Smooth nonlinearities: $\sigma$ is twice continuously differentiable, with $\sigma'$ and $\sigma''$ uniformly bounded.
- Initialization: first-layer weights $W_0$ are small in scale and semi-orthogonal, with suitably bounded operator norm.
- Learning rate: the step size $\eta$ is sufficiently small relative to the smoothness constants of $\sigma$ and the scales of $A$ and $X$.
Under these hypotheses:
- The initial gradient is approximately of rank $K$.
- The singular subspaces of the gradient associated with large singular values change at most exponentially slowly over training.
- Projected updates into $\mathcal{S}_\perp$ are small at initialization and decay throughout training.
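The first and third conditions are easy to instantiate in code. The helpers below (names are mine, not from the source) construct a ZCA-whitened input matrix and a small semi-orthogonal initialization of the kind the theory assumes.

```python
import numpy as np

def whiten(X):
    """ZCA-whiten the columns of X (d x n) so that (1/n) X_w X_w^T = I_d."""
    d, n = X.shape
    C = X @ X.T / n
    evals, evecs = np.linalg.eigh(C)
    W_zca = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return W_zca @ X

def semi_orthogonal_init(m, d, scale=1e-2, rng=None):
    """Small semi-orthogonal weights: orthonormal columns (or rows) times scale."""
    rng = rng if rng is not None else np.random.default_rng()
    Q, _ = np.linalg.qr(rng.standard_normal((max(m, d), min(m, d))))
    Q = Q if m >= d else Q.T
    return scale * Q

rng = np.random.default_rng(2)
X = whiten(rng.standard_normal((8, 64)))
W0 = semi_orthogonal_init(32, 8, rng=rng)
print(np.allclose(X @ X.T / 64, np.eye(8)))      # True: whitened
print(np.allclose(W0.T @ W0, 1e-4 * np.eye(8)))  # True: semi-orthogonal, scale 1e-2
```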
4. Low-Rank Parameterization: Construction and Initialization
These findings motivate an explicit low-rank reparameterization of the two-layer MLP. If $U \in \mathbb{R}^{m \times 2K}$, $V \in \mathbb{R}^{d \times 2K}$ are orthonormal bases of the “active” $2K$-dimensional subspace, then
$$W = U C V^\top,$$
where $C \in \mathbb{R}^{2K \times 2K}$ contains the only learned parameters at this layer. This parameterization can be generalized to each intermediate layer of deeper MLPs.
The construction of $U$ and $V$ leverages the initial gradient: a backward pass at initialization identifies the top singular subspaces of $G_0$; their orthogonal complement is designated as $\mathcal{S}_\perp$, and $U$, $V$ are taken to span the active subspace orthogonal to it. Proper initialization within this subspace is crucial; random subspace initialization leads to training failure.
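A sketch of this construction, under the simplifying assumption that the active bases are the top-$2K$ singular vectors of the initial gradient and that $C$ is seeded by projecting $W_0$ into the subspace ($C_0 = U^\top W_0 V$, my choice, not stated in the source):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, K, n = 8, 32, 2, 64
r = 2 * K                                        # active-subspace dimension

X = rng.standard_normal((d, n))
Y = rng.standard_normal((K, n))
A = rng.standard_normal((K, m))
sigma, dsigma = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

W0 = 1e-2 * np.linalg.qr(rng.standard_normal((m, d)))[0]

# Backward pass at initialization -> SVD -> active bases U, V.
R0 = A @ sigma(W0 @ X) - Y
G0 = ((A.T @ R0) * dsigma(W0 @ X)) @ X.T / n
Us, _, Vts = np.linalg.svd(G0)
U, V = Us[:, :r], Vts[:r, :].T                   # m x r and d x r, orthonormal

C = U.T @ W0 @ V                                 # r x r: the only learned parameters
W_lowrank = U @ C @ V.T                          # reconstructed first-layer weights

print(C.shape, m * d, r * r)                     # (4, 4) 256 16 -> 16x fewer params
```

The bases $U$, $V$ stay frozen; only the small core $C$ is trained, which is where the memory and compute savings come from.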
5. Empirical Validation Across Architectures and Tasks
Extensive empirical results support the theoretical framework:
- Synthetic two-layer networks: with smooth nonlinearities (ELU, GELU, SiLU), the orthogonal complement subspaces experience minimal drift (sub-degree rotation) and near-constant singular values after thousands of GD steps. In contrast, non-smooth activations (ReLU variants) lead to significant subspace and singular-value instability.
- Deeper MLPs: intermediate-layer weight changes concentrate in the active subspace just as in the two-layer case.
- Optimization variants: the phenomenon persists under minibatch SGD and Adam, unwhitened data, and cross-entropy loss.
- Low-rank MLP on Fashion-MNIST: a low-rank MLP initialized via the prescribed method matches both the test loss and accuracy of the full-width model over 1500 epochs, whereas random-projection initialization fails to converge.
- VGG-16 head on CIFAR-10: with the convolutional backbone frozen, a low-rank head matches full-head performance within ±0.5% accuracy under full fine-tuning; for classifier-only tuning, the gap narrows to ∼2% when the subspace dimension is doubled to $4K$.
6. Architectural and Practical Implications
This subspace-concentration phenomenon enables practical architectural modifications:
- Memory and compute reduction: wrapping each layer with low-rank factors that discard the “dead” directions reduces both resource requirements and parameter counts without accuracy loss, provided initialization is performed properly.
- Compatibility with deep architectures: insertions of low-rank wrappers generalize to multi-layer MLP settings, with all intermediate weight changes remaining concentrated as in the two-layer case.
- Fine-tuning and adaptation: provides theoretical explanation for empirical successes of low-rank fine-tuning techniques (e.g., LoRA), where restricting optimization to a small, precomputed subspace is sufficient for high performance.
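The restricted-optimization idea behind these implications can be sketched end to end: freeze the bases, train only the small core $C$, and update it via the chain rule $\partial\mathcal{L}/\partial C = U^\top (\nabla_W \mathcal{L})\, V$. As before, dimensions, the tanh activation, and the choice of top-$2K$ gradient directions as bases are illustrative assumptions, not the source's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, K, n = 8, 32, 2, 64
r = 2 * K

X = rng.standard_normal((d, n))
U_x, _, Vt_x = np.linalg.svd(X, full_matrices=False)
X = np.sqrt(n) * U_x @ Vt_x                      # whitened inputs
Y = rng.standard_normal((K, n))
A = rng.standard_normal((K, m))
sigma, dsigma = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

def loss(W):
    return 0.5 / n * np.linalg.norm(A @ sigma(W @ X) - Y, "fro") ** 2

def full_grad(W):
    R = A @ sigma(W @ X) - Y
    return ((A.T @ R) * dsigma(W @ X)) @ X.T / n

W0 = 1e-2 * np.linalg.qr(rng.standard_normal((m, d)))[0]
Us, _, Vts = np.linalg.svd(full_grad(W0))
U, V = Us[:, :r], Vts[:r, :].T                   # frozen bases

# Optimize only the r x r core: dL/dC = U^T (dL/dW) V.
C = U.T @ W0 @ V
losses = [loss(U @ C @ V.T)]
for _ in range(100):
    C -= 1e-2 * U.T @ full_grad(U @ C @ V.T) @ V
    losses.append(loss(U @ C @ V.T))
print(losses[0], losses[-1])                     # subspace-restricted GD reduces loss
```

This is the same mechanism LoRA-style adapters exploit: the full gradient is computed, but only its projection onto a small precomputed subspace is applied.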
7. Open Problems and Research Directions
Several open theoretical and practical questions are prompted by these results:
- The mechanism by which smooth activations stabilize the subspace, as opposed to non-smooth options (e.g., ReLU), warrants further characterization.
- Relaxations of input whitening or small-initialization assumptions could broaden applicability.
- Effects of additional forms of stochasticity (dropout, quantization, SGD noise) on subspace invariance remain to be systematically studied.
- Connections to phenomena such as neural collapse, feature learning dynamics, and implicit bias in deep learning are not fully elucidated.
- Extensions to architectures beyond MLPs—including convolutional networks, residual blocks, transformers—as well as online (incremental) subspace tracking methods are promising avenues for future research.
In summary, the analysis of dynamic two-layer MLPs with smooth nonlinearities reveals that the effective learning dynamics are confined to a sharply delimited, initialization-determined subspace. This behavior is preserved under typical training regimes and can be operationalized via low-rank parameterizations to achieve efficient, high-performing models across tasks and architectures (Xu et al., 5 Feb 2026).