
Inexact LSQR-based Variant Methods

Updated 9 January 2026
  • The topic introduces inexact LSQR-based variants that approximate least squares subproblems to efficiently solve large-scale optimization and inverse problems.
  • It details structured outer iterations combined with inexact inner LSQR solves, with error tolerances controlled so that convergence is preserved.
  • It highlights applications in AVE, variable projection, and hybrid LSMR, demonstrating significant speed-ups and scalability in practice.

An inexact LSQR-based variant refers to a class of iterative algorithms for solving large-scale linear or nonlinear problems in which inner subproblems (most commonly least squares or regularized least squares) are solved approximately by the LSQR method, subject to a principled tolerance, rather than exactly. This paradigm is increasingly common in modern numerical optimization and inverse problems, where large system dimensions prohibit direct solvers or exact Krylov convergence within each outer iteration. Key domains of application include absolute value equations (AVE), separable nonlinear inverse problems, and large-scale general-form regularization. The archetypal structure couples an outer iteration (Douglas-Rachford splitting, variable projection, hybrid Krylov regularization, or Gauss-Newton) with inexact inner LSQR solves governed by rigorously analyzed stopping criteria that guarantee global or local convergence and solution accuracy.

1. Reformulation Methodologies and Problem Classes

The inexact LSQR-based framework is principally motivated by constrained or regularized problems in which the unknown enters linearly (possibly alongside a small number of nonlinear nuisance parameters), and where the subproblem at each outer step reduces to a (regularized) least squares problem:

  • Absolute Value Equations (AVE): The equation $Ax - |x| - b = 0$ is reformulated as a Generalized Linear Complementarity Problem (GLCP) by introducing mappings $Q(x) = Ax + x - b$ and $F(x) = Ax - x - b$, leading to

$$Q(x) \ge 0, \quad F(x) \ge 0, \quad Q(x) \perp F(x).$$

A residual mapping $e(x) = Ax - |x| - b$ enables the application of Douglas-Rachford splitting techniques (Chen et al., 2021).

  • Separable Nonlinear Inverse Problems: Given $b = A(y)x + \varepsilon$ with $x$ linear and $y$ nonlinear, the problem is recast via variable projection (VarPro), eliminating $x$ by solving, for each $y$, a Tikhonov-regularized least squares subproblem which, when large, demands an iterative solver such as LSQR (Español et al., 2024).
  • General-Form Regularization: Hybrid LSMR algorithms address $\min_x \|Ax - b\|^2 + \lambda^2 \|Lx\|^2$ by iterative Krylov subspace projection, with the general-form constrained inner step solved by LSQR applied to $L(I - Q_k Q_k^T) z \approx Lx_k$ (Yang, 2024).

In each context, replacing direct solution of the inner linear systems by LSQR enables the method to scale to problems where $A$ or its regularized forms are prohibitively large, provided the inner inexactness is rigorously controlled.
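The GLCP identities above can be spot-checked numerically. The following sketch (with a synthetic, diagonally dominant $A$, not taken from the cited paper) constructs a solution of $Ax - |x| = b$ and verifies nonnegativity and complementarity of $Q$ and $F$:

```python
import numpy as np

# Tiny numerical check of the GLCP reformulation of the AVE.
# A is synthetic and chosen diagonally dominant so that ||A^{-1}|| < 1.
rng = np.random.default_rng(0)
n = 5
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))

x = rng.standard_normal(n)
b = A @ x - np.abs(x)          # choose b so that x solves Ax - |x| = b

Q = A @ x + x - b              # Q(x) = Ax + x - b, here equal to x + |x| >= 0
F = A @ x - x - b              # F(x) = Ax - x - b, here equal to |x| - x >= 0
e = A @ x - np.abs(x) - b      # residual mapping, zero at a solution

print(Q.min() >= -1e-12, F.min() >= -1e-12)   # nonnegativity of Q and F
print(abs(Q * F).max())                        # complementarity Q ⊥ F
print(np.linalg.norm(e))                       # e(x) = 0
```

The identities $Q(x) = x + |x|$ and $F(x) = |x| - x$ at a solution make the complementarity $Q \perp F$ immediate, since $(x + |x|)(|x| - x) = |x|^2 - x^2 = 0$ componentwise.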

2. Algorithmic Structure and Inexact LSQR Integration

The common pattern across these algorithms is a two-level iteration:

  • Outer Iteration: Advancing either a fixed-point, Gauss-Newton, or Krylov subspace process, e.g., updating $x^{k+1}$ in DR splitting, $y^{(k+1)}$ in variable projection, or the subspace dimension $k$ in hybrid LSMR.
  • Inexact Inner LSQR Step: Each outer iteration involves an approximate solve of a least squares subproblem:
    • For DRs: $\min_x \|2Ax - d^k\|$, solved until $\|2Ax^{k+1} - d^k\| \le \alpha_k \|e_k\|$.
    • For GenVarPro: $\min_x \|[A(y); \lambda L]x - [b; 0]\|_2^2$, with relative residual below a sequence $\varepsilon^{(k)}$ decaying geometrically (Español et al., 2024).
    • For Hybrid LSMR: $\min_z \|L(I - Q_k Q_k^T)z - Lx_k\|$, with LSQR residual at most $10^{-6}\|Lx_k\|$ per step (Yang, 2024).

The LSQR tolerance is chosen to balance outer-iteration progress against per-iteration computational cost, informed by conditioning of the respective subproblems and the overall convergence theory.
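A minimal sketch of this two-level pattern on an AVE $Ax - |x| = b$ follows, using SciPy's `lsqr` as the inexact inner solver with its tolerance tied to the current residual norm. The outer loop here is a plain Picard step $x^{k+1} \approx A^{-1}(b + |x^k|)$, a deliberate simplification standing in for the DR splitting of the cited work; the matrix and data are synthetic.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Outer/inner sketch on an AVE  Ax - |x| = b  with a Picard outer step
# (simplified; the cited work uses Douglas-Rachford splitting) and an
# inexact inner LSQR solve whose tolerance tracks ||e(x_k)||.
rng = np.random.default_rng(0)
n = 60
A = 3.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # ||A^{-1}|| < 1
x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(x_true)

x = np.zeros(n)
for k in range(50):
    e = A @ x - np.abs(x) - b                     # residual mapping e(x_k)
    if np.linalg.norm(e) < 1e-8:
        break
    tol = 0.01 * np.linalg.norm(e) / np.linalg.norm(b)  # inexactness level
    # inexact inner solve of  min_x ||A x - (b + |x_k|)||,  stopped early
    x = lsqr(A, b + np.abs(x), atol=tol, btol=tol)[0]

print(k, np.linalg.norm(A @ x - np.abs(x) - b))
```

Early outer iterations tolerate crude inner solves; as $\|e(x^k)\|$ shrinks, the inner tolerance tightens automatically, which is exactly the cost/progress balance described above.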

3. Convergence Theory and Stopping Criteria

Robust global or local convergence of the outer algorithm is conditioned on the choice of inexactness tolerance within the LSQR step:

  • Error Control: Explicit a posteriori bounds relate the LSQR residual to the forward solution error, e.g.,

$$\|x - \bar{x}\| \le C\,\frac{\kappa_2^2(M)}{1 - \varepsilon\,\kappa_2(M)}\,\varepsilon,$$

with $M$ the inner least squares matrix and $\varepsilon$ the LSQR tolerance (Español et al., 2024, Yang, 2024).

  • Global Linear Convergence: For inexact DR splitting on AVEs, if the residual mapping $e$ is resolved to tolerance $\alpha_k \|e(x^k)\|$ with $\alpha_k$ below a specified threshold, each outer step contracts the squared error in the $G$-norm, and the method converges globally and linearly under $\|A^{-1}\| \le 1$ (Chen et al., 2021).
  • Outer–Inner Tolerance Coupling: In variable projection, if $\varepsilon^{(k)}$ decays geometrically, the method exhibits geometric convergence in the nonlinear variables $y$, matching the Gauss-Newton rate (e.g., $\|y^{(k)} - y^*\|_2 \le 2^{-k}$) (Español et al., 2024). In hybrid LSMR, the accuracy of the inexact solution is preserved provided LSQR is run to a fixed tolerance and the condition number of the projected operator decreases with outer iterations (Yang, 2024).

These results collectively ensure that inexactness in the inner LSQR solve does not pollute the convergence behavior of the overall method, provided the LSQR stopping rule is appropriately defined.
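The normal-equations stopping quantity used in these criteria maps directly onto LSQR's built-in backward-error test. In SciPy's `lsqr` (a convenient stand-in for the implementations in the cited papers, not their code), `atol` governs the least squares criterion, and the achieved value of $\|M^T r\|/(\|r\|\,\|M\|)$ can be checked a posteriori:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Synthetic inconsistent system: the least squares stopping rule, not the
# consistent-system rule, must terminate LSQR.
rng = np.random.default_rng(2)
M = rng.standard_normal((120, 40))
d = rng.standard_normal(120)

eps = 1e-8
out = lsqr(M, d, atol=eps, btol=0.0)   # atol drives the LS criterion
x, istop = out[0], out[1]              # istop reports which rule fired

r = d - M @ x
crit = np.linalg.norm(M.T @ r) / (np.linalg.norm(r) * np.linalg.norm(M, 2))
print(istop, crit)   # istop == 2 signals the least squares criterion
```

On exit, `crit` sits at roughly the requested `atol`, confirming that the solver-level tolerance is the same quantity the convergence theory constrains.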

4. Computational Complexity and Efficiency

The primary computational advantage arises from the low cost of LSQR applied to large, typically sparse, linear systems:

  • Per-Iteration Costs:
    • Outer iteration: dominated by sparse matrix-vector products with $A$ and $L$ ($O(\mathrm{nnz}(A))$ per product); overhead is minimal if $G$ is diagonal (Chen et al., 2021).
    • Inner LSQR: cost proportional to the number of Krylov steps $p_k$, each requiring one application of $A$ (and $A^T$) or $L$ (and $L^T$), with $p_k \ll n$ in practice.
    • Avoidance of matrix factorizations, dense projections such as $L(I - Q_k Q_k^T)$, and over-relaxation parameter tuning.
  • Conditioning Effects: In hybrid LSMR, the conditioning of the LSQR subproblem improves monotonically with the subspace dimension $k$, expediting convergence as the outer loop progresses (Yang, 2024).
  • Comparison to Existing Methods:
    • In AVE, inexact LSQR-DRs is considerably faster and more robust than both exact and inexact Newton-type and SOR-like solvers for large sparse problems; it remains effective in the "hard case" $\|A^{-1}\| = 1$ where competitors typically fail (Chen et al., 2021).
    • In separable inverse problems, exponential decay of the LSQR tolerance achieves final accuracy and convergence indistinguishable from full-accuracy variable projection, at reduced computational expense (Español et al., 2024).
    • For general-form regularization, accuracy of the regularized solution matches JBDQR (joint bidiagonalization) at a fraction of the computational cost, often yielding speed-ups of $4\times$ to $22\times$ depending on the problem (Yang, 2024).
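The avoidance of dense projections can be made concrete with a matrix-free operator. The sketch below (synthetic $Q_k$ and $x_k$, and a first-difference $L$ assumed purely for illustration) supplies only the action of $L(I - Q_k Q_k^T)$ and of its adjoint to LSQR, so the projected operator is never formed:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, lsqr

# Matrix-free inner step in the style of hybrid LSMR: LSQR only ever sees
# matvec/rmatvec callbacks, never an explicit L(I - Qk Qk^T) matrix.
rng = np.random.default_rng(3)
n, k = 500, 12
L = diags([1.0, -1.0], [0, 1], shape=(n - 1, n))    # sparse first-difference L
Qk, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal basis Q_k

def mv(z):    # z -> L (I - Qk Qk^T) z
    return L @ (z - Qk @ (Qk.T @ z))

def rmv(w):   # adjoint: w -> (I - Qk Qk^T) L^T w
    u = L.T @ w
    return u - Qk @ (Qk.T @ u)

Op = LinearOperator((n - 1, n), matvec=mv, rmatvec=rmv)
xk = rng.standard_normal(n)
z = lsqr(Op, L @ xk, atol=1e-6, btol=1e-6)[0]       # fixed inner tolerance

r = L @ xk - mv(z)
print(np.linalg.norm(rmv(r)) / np.linalg.norm(L @ xk))  # LS optimality measure
```

Each LSQR step costs two sparse products with $L$ plus two thin products with $Q_k$, which is the per-iteration profile the complexity discussion above relies on.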

5. Representative Algorithms and Pseudocode

The following table summarizes the high-level structure of three representative inexact LSQR-based algorithms:

| Algorithm | Outer Iteration | Inner LSQR Problem | Inner Stopping Criterion |
|---|---|---|---|
| LSQR-DRs (AVE) (Chen et al., 2021) | DR splitting on $x$ | $\min_x \|2Ax - d^k\|$ | $\|\Theta_k(x)\| \le \alpha_k \|e_k\|$ |
| Inexact-GenVarPro (Español et al., 2024) | Gauss-Newton on $y$ | $\min_x \|[A(y); \lambda L]x - [b; 0]\|_2$ | $\|[A; \lambda L]^T r\| / (\|r\|\,\|[A; \lambda L]\|) < \varepsilon^{(k)}$ |
| Hybrid-LSMR (Yang, 2024) | LSMR Krylov on $x$ | $\min_z \|L(I - Q_k Q_k^T)z - Lx_k\|$ | $\|r_j\| \le \varepsilon_{\mathrm{tol}}\|Lx_k\|$ |

Pseudocode for each algorithm is fully detailed in the corresponding references; all feature matrix-vector product-centric implementations and enforce their specific inexactness requirements.
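While the exact pseudocode lives in the references, the shared skeleton can be sketched generically. In the sketch below, `subproblem`, `update`, and `tol_seq` are hypothetical placeholders for each method's specifics (the table's rows), exercised on a toy residual-correction loop:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def inexact_outer_loop(x0, subproblem, update, tol_seq, n_outer):
    """Generic two-level template: build an inner LS problem, solve it
    inexactly with LSQR at tolerance tol_seq(k), then take an outer step."""
    x = x0
    for k in range(n_outer):
        M, d = subproblem(x, k)                         # inner LS data
        z = lsqr(M, d, atol=tol_seq(k), btol=tol_seq(k))[0]
        x = update(x, z, k)                             # outer update
    return x

# Toy usage: residual-correction refinement of M0 x = d0 with
# geometrically decaying inner tolerances.
rng = np.random.default_rng(5)
M0 = rng.standard_normal((80, 30))
d0 = M0 @ np.ones(30)
x = inexact_outer_loop(
    np.zeros(30),
    subproblem=lambda x, k: (M0, d0 - M0 @ x),   # solve for a correction z
    update=lambda x, z, k: x + z,
    tol_seq=lambda k: 0.5 ** k * 1e-2,           # geometric decay
    n_outer=6,
)
print(np.linalg.norm(M0 @ x - d0) / np.linalg.norm(d0))
```

Instantiating `subproblem` with the DR, GenVarPro, or projected-$L$ least squares problems from the table recovers the respective algorithms' structure, with all method-specific logic confined to the three callables.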

6. Numerical Experiments and Practical Implications

Empirical results across multiple problem domains confirm the theoretical advantages of inexact LSQR-based variants:

  • Performance Metrics: Relative reconstruction error (RRE), number of outer and inner iterations, and total wall-time or CPU usage are used as primary evaluation criteria.
  • Robustness: When exact solvers break down (e.g., in AVEs with $\|A^{-1}\| = 1$), inexact LSQR-based variants consistently converge when solutions exist, and exhibit divergence as a reliable certificate of inconsistency.
  • Parameter Tuning: The absence of over-relaxation or damping parameters (compared to SOR or Newton) simplifies tuning. Geometric decay of LSQR tolerances is shown to be more effective than fixed tolerances in attaining optimal convergence rates (Español et al., 2024).
  • Cross-application Synergy: Improved conditioning in the inner Krylov subproblems as the outer loop evolves is a generic observation across these algorithms, substantiating the efficiency of the inexact approach (Yang, 2024).
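For reference, the RRE metric cited above is simply the normalized error against a reference solution; a hypothetical helper:

```python
import numpy as np

def rre(x_rec, x_true):
    """Relative reconstruction error ||x_rec - x_true|| / ||x_true||."""
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)

x_true = np.ones(4)
print(rre(np.array([1.0, 1.0, 1.0, 0.9]), x_true))  # ≈ 0.05
```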

7. Impact and Theoretical Extensions

Adoption of inexact LSQR-based variants has broadened the tractable problem scale in inverse problems, optimization, and computational regularization. The following themes emerge:

  • Scalability: Application to large-scale and sparse systems, enabled by the matrix-vector product structure of LSQR and avoidance of direct solvers.
  • Rigorous Inexactness Analysis: Explicit connections between LSQR tolerances, condition numbers, and achievable accuracy ensure method reliability even under substantial inexactness.
  • Generality: Underlying techniques apply not only to linear systems, but extend naturally to nonlinear, constrained, and regularized problems via appropriate reformulations and tailored splitting/variable elimination strategies.

A plausible implication is that further unification of these frameworks—especially in the context of nonconvex or nonsmooth problems—could drive developments in both algorithmic theory and high-performance computational methods.
