Diagonal Linear Networks (DLNs) Analysis
- Diagonal Linear Networks are neural architectures with linear activations and strictly diagonal parameter matrices that use coordinate-wise Hadamard multiplication for nonconvex parameterization.
- Their gradient dynamics reveal mirror flows and implicit regularization effects that interpolate between ℓ1 and ℓ2 norms, significantly influencing convergence and sparsity.
- Practical implications include applications in sparse recovery, incremental learning, and linear programming, with extensions to structured RNNs and high-dimensional optimization analyses.
Diagonal Linear Networks (DLNs) are a class of neural architectures characterized by linear activations and strictly diagonal parameter matrices in each layer. These networks generalize the standard linear model via a nonconvex, coordinate-wise parameterization—most commonly as the Hadamard (elementwise) product of parameter vectors across layers. Despite their simplicity, DLNs have emerged as a canonical setting for analyzing implicit regularization phenomena, convergence properties, and optimization-induced selection mechanisms across a wide spectrum of learning theory and optimization research. Their tractable yet expressive form enables sharp characterization of gradient dynamics, connections with ℓ1- and ℓ2-minimization, and a unified view on initialization dependence, incremental learning, and algorithmic bias.
1. Architecture and Parameterization
An L-layer Diagonal Linear Network represents the predictor as
β = w₁ ⊙ w₂ ⊙ ⋯ ⊙ w_L,
where each wᵢ ∈ ℝ^d and ⊙ denotes componentwise (Hadamard) multiplication. The network output for input x ∈ ℝ^d is ⟨β, x⟩, with square loss or other convex objectives as the training criterion. When L = 2, the parameterization is β = u ⊙ v, with u, v ∈ ℝ^d.
More general parameterizations include sign-decompositions (e.g., β = u ⊙ u − v ⊙ v, as in (Pesme et al., 2021)), or powers/depth with homogeneity exponent p (i.e., βᵢ = wᵢ^p, e.g., (Wind et al., 2023)). The parameterization induces a nonconvex landscape over the factor vectors despite the final linearity in β.
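These parameterizations are compact in code. A minimal NumPy sketch (function names are illustrative, not from any cited paper):

```python
import numpy as np

def dln_product(ws):
    """Depth-L DLN predictor: beta = w_1 ⊙ w_2 ⊙ ... ⊙ w_L."""
    beta = np.ones_like(ws[0])
    for w in ws:
        beta = beta * w  # Hadamard (elementwise) product across layers
    return beta

def dln_signed(u, v):
    """Sign-decomposition: beta = u ⊙ u - v ⊙ v, covering both signs."""
    return u * u - v * v

def dln_power(w, p):
    """Power parameterization with homogeneity exponent p: beta_i = w_i^p."""
    return w ** p

def predict(beta, X):
    """Linear network output <beta, x> for each row x of X."""
    return X @ beta

rng = np.random.default_rng(0)
d = 4
u, v = rng.normal(size=d), rng.normal(size=d)
X = rng.normal(size=(3, d))
# the depth-2 product form coincides with the u ⊙ v parameterization
assert np.allclose(dln_product([u, v]), u * v)
```

Whatever the factorization, the predictor stays linear in the input; only the landscape over the factors is nonconvex.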
In DLN-based linear programming approaches, constraints or objectives may further be absorbed into the diagonally-structured reparameterization (e.g., x = w ⊙ w for x ≥ 0 (Wang et al., 2023)). Diagonal RNNs employ the recurrence hₜ = Λhₜ₋₁ + Bxₜ with diagonal Λ, and can be extended to FA architectures via fixed-point iteration (Movahedi et al., 13 Mar 2025).
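The diagonal recurrence is similarly compact, since the recurrent matrix can be stored as a vector. A hedged sketch (names and shapes are assumptions):

```python
import numpy as np

def diag_rnn(lam, B, xs, h0=None):
    """Diagonal linear RNN: h_t = lam ⊙ h_{t-1} + B x_t; returns all states."""
    h = np.zeros(len(lam)) if h0 is None else h0.copy()
    hs = []
    for x in xs:
        h = lam * h + B @ x   # channel-wise recurrence: no dense h-to-h matrix
        hs.append(h.copy())
    return np.stack(hs)

rng = np.random.default_rng(1)
n, d = 5, 3                            # hidden channels, input dimension
lam = rng.uniform(0.0, 0.9, size=n)    # stable per-channel decay rates
B = rng.normal(size=(n, d))            # input mixing (dense is fine here)
xs = rng.normal(size=(4, d))           # a length-4 input sequence
H = diag_rnn(lam, B, xs)
```

Because the hidden-to-hidden map is elementwise, the scan over time parallelizes channel by channel, which is what makes these recurrences attractive at scale.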
2. Gradient Dynamics and Optimization-Induced Bias
Mirror Flow and Bregman Potentials
Continuous-time limit analysis of DLN training reveals that the effective predictor βₜ follows a mirror flow under a convex potential Φ determined by the parameterization and initialization:
d/dt ∇Φ(βₜ) = −∇L(βₜ),
where L is the loss (Labarrière et al., 2024, Papazov et al., 2024). For depth two (L = 2), the potential admits the "hyperbolic entropy" form (up to an overall constant factor):
Φ_α(β) = Σᵢ [ βᵢ arcsinh(βᵢ / (2αᵢ²)) − √(βᵢ² + 4αᵢ⁴) + 2αᵢ² ],
with α set by the initial parameters. Gradient flow thus selects, among all interpolators (e.g., all β with Xβ = y), the one minimizing Φ_α, leading to an implicit, data-independent regularization (Labarrière et al., 2024, Papazov et al., 2024).
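The interpolation behavior of the hyperbolic-entropy potential can be checked numerically. A sketch, assuming one common closed form with a shared scalar α (constants differ across papers): for small α the potential ranks vectors roughly by ℓ1-norm, for large α roughly by ℓ2-norm.

```python
import numpy as np

def hyperbolic_entropy(beta, alpha):
    """Phi_alpha(beta) = sum_i alpha^2 * q(beta_i / alpha^2),
    with q(z) = 2 - sqrt(4 + z^2) + z * arcsinh(z / 2)."""
    z = beta / alpha**2
    return float(np.sum(alpha**2 * (2.0 - np.sqrt(4.0 + z**2)
                                    + z * np.arcsinh(z / 2.0))))

# Two interpolators with equal l2-norm but different l1-norm.
sparse = np.array([1.0, 0.0, 0.0, 0.0])   # l1 = 1, l2 = 1
dense = np.full(4, 0.5)                   # l1 = 2, l2 = 1

# Small alpha: the potential is l1-like, so the sparse vector is preferred.
assert hyperbolic_entropy(sparse, 1e-3) < hyperbolic_entropy(dense, 1e-3)

# Large alpha: the potential behaves like ||beta||^2 / (4 alpha^2),
# so equal l2-norms give (nearly) equal potential.
ratio = hyperbolic_entropy(sparse, 100.0) / hyperbolic_entropy(dense, 100.0)
assert abs(ratio - 1.0) < 1e-2
```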
Depth, Initialization, and Bias Regimes
The implicit regularizer interpolates between:
- ℓ1-norm for vanishing initialization ("rich" regime): strong sparsity bias.
- ℓ2-norm (minimum-norm interpolation) for large initialization ("lazy" or NTK regime).
Network depth further modulates the bias: deep DLNs (homogeneity exponent p ≥ 3) select, among the ℓ1-minimizers, a solution with more spread-out mass, while shallow networks (p = 2) select the maximum-entropy solution (Wind et al., 2023). Calibrating the initialization scale to a target regime allows controlled interpolation between these extremes (Zhang et al., 25 Sep 2025).
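The two regimes can be reproduced in a few lines. A toy experiment (setup and constants are mine, not from the cited papers): gradient descent on the signed depth-2 parameterization with vanishing versus moderate initialization; the small-initialization solution interpolates with a markedly smaller ℓ1-norm.

```python
import numpy as np

def train_dln(X, y, alpha, lr=0.01, iters=30000):
    """GD on the signed depth-2 DLN beta = wp⊙wp - wm⊙wm, init wp = wm = alpha."""
    n, d = X.shape
    wp = np.full(d, alpha)
    wm = np.full(d, alpha)
    for _ in range(iters):
        beta = wp * wp - wm * wm
        g = X.T @ (X @ beta - y) / n      # gradient of the loss w.r.t. beta
        wp -= lr * 2.0 * wp * g           # chain rule through the factors
        wm += lr * 2.0 * wm * g
    return wp * wp - wm * wm

rng = np.random.default_rng(0)
n, d = 15, 30
X = rng.normal(size=(n, d))
beta_star = np.zeros(d)
beta_star[0], beta_star[1] = 1.0, -1.5    # sparse ground truth
y = X @ beta_star

beta_small = train_dln(X, y, alpha=1e-4)  # "rich" regime
beta_large = train_dln(X, y, alpha=1.0)   # "lazy" regime

# Both (near-)interpolate, but the small-init solution is much sparser.
for b in (beta_small, beta_large):
    assert np.mean((X @ b - y) ** 2) < 1e-2
assert np.sum(np.abs(beta_small)) < np.sum(np.abs(beta_large))
```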
3. Explicit Characterization of Training Trajectories and Regularization Path
Connection with Lasso Path
Full gradient-flow DLN solutions (with infinitesimal initialization) converge to the minimum-ℓ1 interpolator (Berthier, 23 Sep 2025). Remarkably, the entire time-averaged DLN training trajectory retraces the Lasso (ℓ1-regularized least squares) solution path as a function of an effective regularization parameter determined by the training time:
β̄(t) ≈ argmin_β (1/2)‖Xβ − y‖₂² + λ(t)‖β‖₁, with λ(t) decreasing in training time t.
Under a monotonicity condition on the regularization path, this correspondence is exact. Early stopping in DLN training thus acts as an implicit regularization, with the effective penalty controlled by training time (Berthier, 23 Sep 2025, Berthier, 2022).
Saddle-to-Saddle Dynamics and Incremental Learning
Vanishing-initialization DLN gradient flow exhibits "saddle-to-saddle" dynamics, sequentially jumping between faces of the loss constrained to active coordinate sets, mirroring the LARS algorithm for Lasso homotopy (Pesme et al., 2023). Each jump corresponds to incrementally adding features to the active support (Berthier, 2022). In overparameterized regimes, this process continues until attaining the unique minimum-ℓ1 solution; in underparameterized/anti-correlated settings, support grows monotonically over time (Berthier, 2022).
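The incremental schedule is easiest to see with a separable (identity) design, where each coordinate evolves independently. A toy sketch (setup is mine): coordinates with larger target values escape their saddle and join the active support earlier.

```python
import numpy as np

# Identity design: each coordinate evolves independently, so the
# saddle-to-saddle schedule is just the ordering of escape times.
y = np.array([2.0, 1.0, 0.2, 0.0])
d = len(y)
alpha, lr, iters = 1e-3, 0.01, 20000
wp = np.full(d, alpha)
wm = np.full(d, alpha)
first_cross = [None] * d   # first iteration where |beta_i| > 0.1

for t in range(iters):
    beta = wp * wp - wm * wm
    g = beta - y               # gradient of (1/2)||beta - y||^2
    wp -= lr * 2.0 * wp * g
    wm += lr * 2.0 * wm * g
    for i in range(d):
        if first_cross[i] is None and abs(beta[i]) > 0.1:
            first_cross[i] = t

# Larger targets activate first; the zero coordinate never activates.
assert first_cross[0] < first_cross[1] < first_cross[2]
assert first_cross[3] is None
```

The escape time of each coordinate scales inversely with its target magnitude, which is the discrete-time shadow of the saddle-to-saddle jumps described above.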
4. Algorithmic Effects: Stochasticity, Momentum, and Sharpness-Aware Perturbations
Stochastic Gradient Dynamics
Stochastic gradient descent (SGD) amplifies the implicit bias relative to gradient flow (GF), due to an effectively reduced initialization scale (Pesme et al., 2021, Even et al., 2023). The degree of this amplification is inversely related to the convergence rate; slower training increases the bias towards sparser solutions. In the "edge of stability" (large stepsize), SGD retains homogeneous bias, favoring recovery of support, whereas GD develops "heterogeneous" weights penalizing the true support, leading to failure in sparse recovery (Even et al., 2023). Experimental evidence confirms improved validation loss and recovery for SGD in sparse regression regimes (Pesme et al., 2021, Even et al., 2023).
Momentum and Intrinsic Acceleration Parameter
Momentum gradient descent with step size γ and momentum parameter b is governed in the continuous-time limit by a single intrinsic quantity λ = λ(γ, b), which uniquely determines the optimization trajectory modulo time scaling (Papazov et al., 2024). For small λ, the final solution exhibits lower balancedness and stronger sparsity (i.e., smaller ℓ1-norm) than simple gradient flow. Tuning λ thus enables control of both the speed of convergence and the implicit regularization strength (Papazov et al., 2024).
Sharpness-Aware and Noisy Perturbations
Stochastic Sharpness-Aware Minimization (S-SAM) introduces isotropic Gaussian noise into DLN weights at each step, yielding a regularizer equal to the average sharpness of the loss landscape (Clara et al., 14 Mar 2025). This imposes "balancing" across the diagonal factors—minimizing both PAC–Bayes average sharpness and the Hessian trace—and drives the iterates toward a soft-thresholded shrinkage of the true parameter. The shrinkage factor depends polynomially on the noise, with deeper networks and larger batch noise accelerating convergence to balanced, low-sharpness regions (Clara et al., 14 Mar 2025).
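The shrinkage effect can be made concrete in a simplified setting: for isotropic Gaussian weight noise on the signed depth-2 parameterization with identity design, the noise-averaged loss works out to the original loss plus 2σ²(‖w₊‖² + ‖w₋‖²), which after balancing the factors acts as an ℓ1 penalty of strength 2σ². A deterministic sketch under these assumptions (my simplification, not the exact S-SAM scheme of (Clara et al., 14 Mar 2025)):

```python
import numpy as np

# Noise-averaged ("smoothed") loss for the signed depth-2 DLN with identity
# design: E[loss] = 0.5*||beta - y||^2 + 2*sigma2*(||wp||^2 + ||wm||^2) + const.
# GD on this objective drives beta to the soft-thresholded target soft(y, 2*sigma2).

y = np.array([1.0, 0.01])   # one coordinate above, one below the threshold
sigma2 = 0.01               # noise variance; shrinkage threshold is 2*sigma2 = 0.02
lr, iters = 0.05, 8000
wp = np.full(2, 0.5)
wm = np.full(2, 0.5)

for _ in range(iters):
    beta = wp * wp - wm * wm
    g = beta - y
    wp -= lr * (2.0 * wp * g + 4.0 * sigma2 * wp)    # data term + smoothing penalty
    wm -= lr * (-2.0 * wm * g + 4.0 * sigma2 * wm)

beta = wp * wp - wm * wm
soft = np.sign(y) * np.maximum(np.abs(y) - 2.0 * sigma2, 0.0)
assert np.allclose(beta, soft, atol=1e-3)   # large coord shrunk, small coord zeroed
```

The quadratic penalty on the factors is exactly the "balancing" force: minimizing ‖w₊‖² + ‖w₋‖² at fixed β = w₊² − w₋² yields the balanced factorization with cost proportional to ‖β‖₁.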
5. Extensions: Linear Programming, RNNs, and Structured Optimization
DLNs have been adapted as solvers for linear programming and basis pursuit via gradient descent on a nonconvex parameterization of the feasible set (e.g., x = w ⊙ w for positivity) (Wang et al., 2023). Gradient descent over DLNs is shown to converge linearly (in iteration count) to the entropically regularized LP solution; initialization controls the entropy penalty. Applications include optimal transport (via Sinkhorn-like initialization) and ℓ1-basis pursuit (Wang et al., 2023).
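The positivity reparameterization x = w ⊙ w makes a tiny LP amenable to plain gradient descent. A penalty-based sketch (the quadratic penalty on Ax = b is my simplification, not the exact scheme of (Wang et al., 2023)):

```python
import numpy as np

# min c.x  s.t.  A x = b, x >= 0, with x = w ⊙ w enforcing positivity.
# The equality constraint is handled with a quadratic penalty here.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
rho, lr, iters = 50.0, 1e-3, 40000

w = np.full(2, 0.5)
for _ in range(iters):
    x = w * w
    grad_x = c + 2.0 * rho * A.T @ (A @ x - b)   # gradient w.r.t. x
    w -= lr * 2.0 * w * grad_x                   # chain rule through x = w ⊙ w

x = w * w
# The LP optimum is x = (1, 0); the penalty leaves an O(1/rho) feasibility gap.
assert abs(x[0] - 1.0) < 0.05
assert x[1] < 0.01
assert abs((A @ x - b)[0]) < 0.05
```

Positivity never has to be projected: the square parameterization keeps the iterates feasible with respect to x ≥ 0 throughout.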
Diagonal linear RNNs—employing channel-wise recurrence—allow efficient and parallelizable fixed-point iterations that can universally approximate the expressive power of dense RNNs using low-rank channel mixers, provided sufficient depth/iterations (Movahedi et al., 13 Mar 2025).
A general IRLS–DLN connection has also been established: alternating reweighting and least squares on DLN parameterizations unifies IRLS, lin-RFM, and AM variants, with asymptotic risk and support recovery precisely characterized in the high-dimensional Gaussian design using DMFT (Kaushik et al., 2024, Nishiyama et al., 2 Oct 2025).
6. High-Dimensional Theory, Scaling, and Mean-Field Limits
Dynamical Mean-Field Theory (DMFT) permits reduction of high-dimensional DLN gradient flow to low-dimensional effective stochastic processes, capturing the full loss dynamics, generalization bias, and the speed/generalization trade-off (Nishiyama et al., 2 Oct 2025). The convergence regime splits sharply between large and small initialization, corresponding to "lazy"/kernel and "rich"/feature-learning phases, with explicit timescale separation. Reduced initialization improves generalization (sparsity) but slows convergence.
Scaling laws for the full family of ℓq-norms of DLN solutions under ℓ1-bias have been rigorously characterized (Zhang et al., 25 Sep 2025). Elbow points and universal thresholds in q delineate which ℓq-norms plateau and which grow, matching explicit minimum-ℓ1 interpolation in the overparameterized limit. Initialization scaling permits practitioners to tune the effective bias along the ℓ1/ℓ2 spectrum and to select stable norm-based generalization metrics (Zhang et al., 25 Sep 2025).
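The q-dependence of solution norms is easy to probe for the two extreme interpolators. An illustrative sketch (vectors are mine, not the paper's setting): every ℓq-norm of a one-hot, minimum-ℓ1-flavored solution plateaus, while the ℓq-norms of a dense, minimum-ℓ2-flavored solution of equal ℓ2-norm decay with q.

```python
import numpy as np

def lq_norm(beta, q):
    """||beta||_q = (sum_i |beta_i|^q)^(1/q) for q >= 1."""
    return float(np.sum(np.abs(beta) ** q) ** (1.0 / q))

d = 100
sparse = np.zeros(d); sparse[0] = 1.0   # one-hot: minimum-l1 flavor
dense = np.full(d, 1.0 / np.sqrt(d))    # equal l2-norm, minimum-l2 flavor

qs = [1.0, 1.5, 2.0, 4.0, 8.0]
sparse_norms = [lq_norm(sparse, q) for q in qs]
dense_norms = [lq_norm(dense, q) for q in qs]

# One-hot: every lq-norm equals 1 (a plateau across q).
assert all(abs(s - 1.0) < 1e-12 for s in sparse_norms)
# Dense: ||beta||_q = d^(1/q - 1/2), strictly decreasing in q.
assert all(a > b for a, b in zip(dense_norms, dense_norms[1:]))
```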
7. Practical Implications and Research Significance
Diagonal Linear Networks provide a tractable testbed for uncovering mechanisms of implicit regularization, optimization geometry, and algorithmic parameterization effects in overparameterized settings. Their analysis precisely predicts phenomena such as incremental learning, early stopping regularization, sharpness minimization, and the impact of stochasticity/momentum on the regularization path. The insights into layer-wise balancing, shrinkage-thresholding, and sharpness-guided optimization contribute directly to the principled design of modern deep learning algorithms—clarifying the implicit effect of standard practices like momentum, step size tuning, random noise, and reparameterization on model generalization and capacity control.
The rigorous equivalence between DLN training dynamics and desirable convex regularization paths (notably ℓ1- and entropic-regularized objectives), and the proven correspondence with basis pursuit, optimal transport, and IRLS performance, situates DLNs at the intersection of statistical learning theory, convex optimization, and algorithmic deep learning research (Berthier, 23 Sep 2025, Wang et al., 2023, Kaushik et al., 2024). This suggests that further exploration of diagonalizable architectures and their optimization geometry will continue to yield transferable understanding for much richer classes of neural and structured models.