Reparameterized Low-Rank Updates
- Reparameterized low-rank updates are techniques that refine solutions to matrix equations by applying targeted low-rank corrections instead of full recomputation.
- They leverage rational and extended Krylov subspace methods to compress correction terms efficiently, with rapid convergence backed by rigorous singular value decay bounds.
- This approach underpins scalable solvers for Sylvester, Lyapunov, and Riccati equations, yielding significant speedups and memory savings in large-scale control and PDE applications.
Reparameterized Updates (Low-Rank)
A reparameterized low-rank update is a computational strategy in which the solution to a linear or nonlinear matrix equation—in particular, Sylvester, Lyapunov, and Riccati equations—is updated when the coefficients undergo structured low-rank changes. Rather than recomputing the full solution, the method targets only the correction, exploiting the low effective dimension induced by the perturbation. The approach leverages rational or extended block Krylov subspace methods for efficient large-scale computation, enables rigorous singular value decay guarantees, and underpins scalable algorithms for divide-and-conquer over hierarchical low-rank structures such as HODLR and HSS matrices. Reparameterized low-rank updating is now fundamental to large-scale control, PDE discretizations, and matrix function applications, yielding nearly linear time and memory complexity when appropriately embedded (Kressner et al., 2017).
1. Sylvester/Lyapunov Equation and Low-Rank Modeling
Consider the Sylvester equation

$$A X + X B = C,$$

where $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{n \times n}$, and $C \in \mathbb{R}^{m \times n}$, with $\Lambda(A) \cap \Lambda(-B) = \emptyset$ to guarantee unique solvability. The Lyapunov case is recovered for $B = A^{\mathsf T}$, $C = C^{\mathsf T}$, and Hurwitz $A$.
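For small dense instances, the equation above can be solved directly; a minimal sketch using SciPy's Bartels-Stewart solver (the matrices here are hypothetical test data, chosen so the spectra of $A$ and $-B$ stay disjoint):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(5)
m, n = 60, 40

# Diagonal-plus-small-noise A, B with positive spectra, so Lambda(A)
# and Lambda(-B) are disjoint and the solution is unique.
A = np.diag(np.linspace(1.0, 3.0, m)) + 0.05 * rng.standard_normal((m, m))
B = np.diag(np.linspace(1.0, 3.0, n)) + 0.05 * rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

X = solve_sylvester(A, B, C)  # dense Bartels-Stewart, O(m^3 + n^3) flops
print(np.linalg.norm(A @ X + X @ B - C))
```

This dense baseline is what the update machinery below avoids repeating when the coefficients change only by low-rank terms.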
Suppose a reference solution $X_0$ to $A_0 X_0 + X_0 B_0 = C_0$ is known. A low-rank perturbation modifies the problem to $(A_0 + \delta A)\,X + X\,(B_0 + \delta B) = C_0 + \delta C$, with each update possessing factored form, e.g., $\delta A = U_A V_A^{\mathsf T}$, $\delta B = U_B V_B^{\mathsf T}$, $\delta C = U_C V_C^{\mathsf T}$, where the ranks $r_A, r_B, r_C$ are much smaller than $\min(m, n)$.
2. Correction Equation and Right-Hand Side Compression
Subtracting the old from the new Sylvester equation and collecting terms yields a correction equation for $\delta X = X - X_0$:

$$A\,\delta X + \delta X\,B = R, \qquad R = \delta C - \delta A\,X_0 - X_0\,\delta B,$$

with $\operatorname{rank}(R)$ at most $r_A + r_B + r_C$.
A skinny factorization $R = U V^{\mathsf T}$ with

$$U = \bigl[\,U_C,\; -U_A,\; -X_0 U_B\,\bigr], \qquad V = \bigl[\,V_C,\; X_0^{\mathsf T} V_A,\; V_B\,\bigr]$$

enables an efficient compressed low-rank representation. A thin SVD can further reduce this to a (numerically) optimal effective rank $r \le r_A + r_B + r_C$.
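The factored right-hand side and its recompression can be sketched in NumPy as follows (all matrices are hypothetical random instances, and the $U = [\,U_C, -U_A, -X_0 U_B\,]$, $V = [\,V_C, X_0^{\mathsf T} V_A, V_B\,]$ blocking above is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 120
r = 2  # rank of each perturbation (illustrative)

# Reference solution and low-rank perturbation factors (random test data).
X0 = rng.standard_normal((m, n))
UA, VA = rng.standard_normal((m, r)), rng.standard_normal((m, r))
UB, VB = rng.standard_normal((n, r)), rng.standard_normal((n, r))
UC, VC = rng.standard_normal((m, r)), rng.standard_normal((n, r))

# Skinny factors of R = dC - dA X0 - X0 dB, built without forming R.
U = np.hstack([UC, -UA, -X0 @ UB])
V = np.hstack([VC, X0.T @ VA, VB])

# Sanity check against the dense correction right-hand side.
R_dense = UC @ VC.T - UA @ (VA.T @ X0) - (X0 @ UB) @ VB.T
assert np.allclose(U @ V.T, R_dense)

# Recompress via thin QR + SVD down to the numerical rank.
Qu, Ru = np.linalg.qr(U)
Qv, Rv = np.linalg.qr(V)
W, s, Zt = np.linalg.svd(Ru @ Rv.T)
keep = s > 1e-12 * s[0]
U_c = Qu @ (W[:, keep] * s[keep])
V_c = Qv @ Zt[keep].T
print(U_c.shape[1])  # effective rank, at most rA + rB + rC
```

The QR-then-SVD recompression touches only skinny $O((m+n) r)$ factors, never an $m \times n$ dense matrix.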
3. Rational and Extended Block Krylov Subspaces
To solve

$$A\,\delta X + \delta X\,B = U V^{\mathsf T}$$

with $U, V$ of small column dimension $s$, tensorized block (rational/extended) Krylov subspaces are constructed:

- Left: $\mathcal{EK}_k(A, U) = \operatorname{span}\{U, A^{-1}U, AU, A^{-2}U, A^2U, \dots\}$
- Right: $\mathcal{EK}_k(B^{\mathsf T}, V) = \operatorname{span}\{V, B^{-\mathsf T}V, B^{\mathsf T}V, B^{-2\mathsf T}V, B^{2\mathsf T}V, \dots\}$

After $k$ iterations, orthonormal bases $W_k$, $Z_k$ are formed ($W_k \in \mathbb{R}^{m \times 2ks}$, $Z_k \in \mathbb{R}^{n \times 2ks}$), yielding the approximation $\delta X \approx W_k Y_k Z_k^{\mathsf T}$. The "compressed" Sylvester equation

$$(W_k^{\mathsf T} A W_k)\,Y_k + Y_k\,(Z_k^{\mathsf T} B Z_k) = (W_k^{\mathsf T} U)(Z_k^{\mathsf T} V)^{\mathsf T}$$

is small ($2ks \times 2ks$) and solved directly, with convergence monitored via the residual norm.
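A simplified Galerkin projection of this kind can be sketched with a plain (polynomial) block Krylov basis instead of the extended/rational one; all matrices below are hypothetical diagonal test data, and `scipy.linalg.solve_sylvester` stands in for the small dense solve:

```python
import numpy as np
from scipy.linalg import qr, solve_sylvester

rng = np.random.default_rng(1)
n, s, k = 200, 2, 20  # problem size, block size, Krylov iterations

# Well-separated positive spectra keep the correction equation well posed.
A = np.diag(np.linspace(1.0, 10.0, n))
B = np.diag(np.linspace(1.0, 10.0, n))
U = rng.standard_normal((n, s))
V = rng.standard_normal((n, s))

def block_arnoldi_basis(M, Q0, k):
    """Orthonormal basis of span{Q0, M Q0, ..., M^(k-1) Q0} via block
    Arnoldi; the extended variant would interleave inverse powers too."""
    Q, _ = qr(Q0, mode='economic')
    basis = [Q]
    for _ in range(k - 1):
        Wn = M @ basis[-1]
        for Qi in basis:              # block Gram-Schmidt step
            Wn = Wn - Qi @ (Qi.T @ Wn)
        Qn, _ = qr(Wn, mode='economic')
        basis.append(Qn)
    return np.hstack(basis)

W = block_arnoldi_basis(A, U, k)      # left basis for A
Z = block_arnoldi_basis(B.T, V, k)    # right basis for B^T

# Compressed Sylvester equation on the projected coefficients.
Y = solve_sylvester(W.T @ A @ W, Z.T @ B @ Z, (W.T @ U) @ (Z.T @ V).T)
dX = W @ Y @ Z.T

res = np.linalg.norm(A @ dX + dX @ B - U @ V.T) / np.linalg.norm(U @ V.T)
print(f"relative residual: {res:.2e}")
```

The extended/rational variants converge in far fewer iterations than this polynomial sketch, at the cost of one system solve with $A$ (and $B^{\mathsf T}$) per step.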
4. Algorithmic Workflow, Computational Complexity, and Storage
The procedure divides into two stages:
- Right-Hand Side Compression: Compute and compress the factors $U, V$ of the updated right-hand side $R$.
- EKSM Solution: Apply the extended Krylov subspace method to produce the low-rank correction $\delta X$.
High-level pseudocode:
- Input: $A$, $B$, $X_0$, and the perturbation factors $U_A, V_A, U_B, V_B, U_C, V_C$.
- Compute $U, V$ so that $R = U V^{\mathsf T}$; recompress as needed.
- Solve $A\,\delta X + \delta X\,B = U V^{\mathsf T}$ via block EKSM.
- Return the updated solution $X = X_0 + \delta X$.
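The workflow can be sketched end to end on a small dense problem, with `scipy.linalg.solve_sylvester` standing in for both the reference solve and the EKSM correction solve (all data below are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
n, r = 100, 2

# Reference problem A0 X0 + X0 B0 = C0.
A0 = np.diag(np.linspace(2.0, 6.0, n)) + 0.05 * rng.standard_normal((n, n))
B0 = np.diag(np.linspace(2.0, 6.0, n)) + 0.05 * rng.standard_normal((n, n))
C0 = rng.standard_normal((n, n))
X0 = solve_sylvester(A0, B0, C0)

# Small low-rank perturbations dA = UA VA^T, dB = UB VB^T, dC = UC VC^T.
UA, VA = 0.05 * rng.standard_normal((n, r)), 0.05 * rng.standard_normal((n, r))
UB, VB = 0.05 * rng.standard_normal((n, r)), 0.05 * rng.standard_normal((n, r))
UC, VC = rng.standard_normal((n, r)), rng.standard_normal((n, r))
A, B, C = A0 + UA @ VA.T, B0 + UB @ VB.T, C0 + UC @ VC.T

# Correction equation A dX + dX B = R with factored right-hand side.
U = np.hstack([UC, -UA, -X0 @ UB])
V = np.hstack([VC, X0.T @ VA, VB])
dX = solve_sylvester(A, B, U @ V.T)  # stand-in for the block EKSM solve

X = X0 + dX
res = np.linalg.norm(A @ X + X @ B - C) / np.linalg.norm(C)
print(f"relative residual of updated solution: {res:.2e}")
```

In the large-scale setting, only the correction solve is repeated per update, and its right-hand side has rank at most $r_A + r_B + r_C = 6$ here.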
Complexity:
- For HODLR coefficients, the linear system solves inside a single EKSM invocation cost $O(n \log n)$ flops each (or $O(n)$ for HSS), up to factors depending on the hierarchical rank, so the overall solve remains almost linear in $n$.
- Memory requirements: storing $\delta X \approx W_k Y_k Z_k^{\mathsf T}$ takes $O((m+n)ks)$; the HODLR structure of the coefficients needs $O(n \log n)$.
5. Singular Value Decay and Stability Guarantees
Under mild spectral separation, the singular values of the update decay rapidly. For the correction equation

$$A\,\delta X + \delta X\,B = U V^{\mathsf T}, \qquad \operatorname{rank}(U V^{\mathsf T}) \le r,$$

with numerical ranges $E \supseteq W(A)$, $F \supseteq W(-B)$ disjoint and compact, for any rational function $\rho$ of degree at most $(k, k)$,

$$\sigma_{1+kr}(\delta X) \;\le\; \frac{\max_{z \in E} |\rho(z)|}{\min_{z \in F} |\rho(z)|}\; \sigma_1(\delta X),$$

and minimizing over all such $\rho$ gives $\sigma_{1+kr}(\delta X) \le Z_k(E, F)\,\sigma_1(\delta X)$, where $Z_k(E, F)$ is the $k$th Zolotarev number and decays exponentially with $k$. For Hermitian positive definite coefficients with spectra in $[a, b]$, this yields exponential decay of $\sigma_j(\delta X)$ in $j$, enabling $\epsilon$-rank approximations with rank $O(r \log(b/a) \log(1/\epsilon))$ (Kressner et al., 2017).
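The predicted decay is easy to observe numerically; a sketch with a Hermitian positive definite coefficient (hypothetical log-spaced spectrum) and a rank-1 right-hand side:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
n = 200

# Spectrum in [1, 100]: the Zolotarev bound predicts the singular values
# of dX to drop by a roughly constant factor per index.
A = np.diag(np.geomspace(1.0, 100.0, n))
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

dX = solve_sylvester(A, A, u @ v.T)  # Lyapunov-type: B = A here
s = np.linalg.svd(dX, compute_uv=False)

print(np.round(np.log10(s[:12] / s[0]), 1))  # fast singular value decay
```

For $b/a = 100$ the bound $Z_k \le 4\exp(-k\pi^2/\log(16\,b/a))$ already guarantees $\sigma_{11}/\sigma_1 \lesssim 10^{-5}$, consistent with what the snippet prints.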
6. Embedding in Newton-Kleinman Iteration for Riccati Equations
In the continuous-time CARE

$$A^{\mathsf T} X + X A - X B B^{\mathsf T} X + C^{\mathsf T} C = 0,$$

the Newton-Kleinman iteration solves a sequence of Lyapunov equations:

$$(A - B B^{\mathsf T} X_k)^{\mathsf T} X_{k+1} + X_{k+1} (A - B B^{\mathsf T} X_k) = -C^{\mathsf T} C - X_k B B^{\mathsf T} X_k.$$

If $B$ is low-rank, each right-hand side difference between consecutive iterations remains low-rank, as does the change in the closed-loop coefficient matrix. The low-rank update machinery computes $\delta X_{k+1} = X_{k+1} - X_k$ as the solution of a correction equation with low-rank right-hand side, so subsequent Lyapunov solves become computationally cheap. Newton's quadratic convergence ensures only a few expensive solves are needed, with the rest benefiting fully from the low-rank update formalism.
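A minimal dense Newton-Kleinman sketch, checked against SciPy's direct CARE solver (the matrices are hypothetical, and `solve_continuous_lyapunov` stands in for the low-rank-updated Lyapunov solves):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

rng = np.random.default_rng(4)
n, m = 50, 2

# Stable A, so X = 0 is a stabilizing initial guess; low-rank B.
A = -5.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
Q = C.T @ C

X = np.zeros((n, n))
for _ in range(15):
    Ak = A - B @ (B.T @ X)  # closed-loop matrix of the current iterate
    # Lyapunov step: Ak^T X_new + X_new Ak = -(Q + X B B^T X)
    X_new = solve_continuous_lyapunov(Ak.T, -(Q + X @ B @ B.T @ X))
    if np.linalg.norm(X_new - X) <= 1e-12 * np.linalg.norm(X_new):
        X = X_new
        break
    X = X_new

X_ref = solve_continuous_are(A, B, Q, np.eye(m))
print(np.linalg.norm(X - X_ref) / np.linalg.norm(X_ref))
```

Each pass through the loop changes the Lyapunov coefficients only by the rank-$m$ term $B B^{\mathsf T}(X_{k} - X_{k-1})$, which is exactly the situation the correction-equation machinery exploits.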
7. Empirical Benchmarks and Role in Divide-and-Conquer Solvers
Numerical experiments demonstrate:
- 5–10× speedups on large 2D Poisson/Lyapunov problems, with small relative residuals.
- Memory reductions from ≈0.5 GB (HODLR) to ≈0.27 GB (HSS).
- For convection–diffusion and heat equations, 10× memory savings compared to a sparse-CG baseline.
- For CAREs with banded-plus-low-rank coefficients, 3–5× speedups relative to plain Newton+EKSM.
Ranks of the corrections remain small throughout, and residuals are driven down to the requested tolerance. Performance gains are most pronounced when many related Sylvester/Lyapunov (or CARE) solves differ only by low-rank perturbations.
The methodology serves as the foundation for divide-and-conquer algorithms over hierarchical low-rank formats (HODLR, HSS), achieving nearly linear complexity in by recursively splitting and updating only the low-rank couplings (Kressner et al., 2017).
In summary, the reparameterized low-rank update approach for large matrix equations enables efficient, stable incremental solution updating under low-rank data changes. It achieves near-optimal storage and computational complexity, underpins fast solvers for hierarchical matrix structures, and integrates naturally into Newton-type methods for more complex nonlinear matrix equations. This framework is a core algorithmic building block for large-scale linear algebra in scientific computing and control (Kressner et al., 2017).