Approximation Accuracy of the Krylov Subspaces for Linear Discrete Ill-Posed Problems
Abstract: For the large-scale linear discrete ill-posed problem $\min\|Ax-b\|$ or $Ax=b$ with $b$ contaminated by Gaussian white noise, the Lanczos bidiagonalization based Krylov solver LSQR and its mathematically equivalent CGLS, the Conjugate Gradient (CG) method implicitly applied to $A^TAx=A^Tb$, are most commonly used, and CGME, the CG method applied to $\min\|AA^Ty-b\|$ or $AA^Ty=b$ with $x=A^Ty$, and LSMR, which is equivalent to the minimal residual (MINRES) method applied to $A^TAx=A^Tb$, have also been used. These methods exhibit the typical semi-convergence feature, and the iteration number $k$ plays the role of the regularization parameter. However, there has been no definitive answer to the long-standing fundamental question: {\em Can LSQR and CGLS find 2-norm filtering best possible regularized solutions}? The same question applies to CGME and LSMR. At iteration $k$, LSQR, CGME and LSMR compute {\em different} iterates from the {\em same} $k$-dimensional Krylov subspace. A first and fundamental step toward answering the above question is to {\em accurately} estimate the accuracy of the underlying $k$-dimensional Krylov subspace in approximating the $k$-dimensional dominant right singular subspace of $A$. Assuming that the singular values of $A$ are simple, we present a general $\sin\Theta$ theorem for the 2-norm distances between these two subspaces and derive accurate estimates of them for severely, moderately and mildly ill-posed problems. We also establish some relationships between the smallest Ritz values and these distances. Numerical experiments confirm the sharpness of our results.
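The quantity the abstract studies can be illustrated numerically: LSQR, CGLS, CGME and LSMR all extract iterates from the Krylov subspace $\mathcal{K}_k(A^TA, A^Tb)$, and its $\sin\Theta$ distance from the $k$-dimensional dominant right singular subspace of $A$ measures how well that search space can capture the regularized solution. The following Python sketch is not taken from the paper; it uses a synthetic severely ill-posed matrix with geometrically decaying singular values (an assumption made here purely for illustration) and computes the largest canonical angle between the two subspaces.

```python
import numpy as np

# Hypothetical test setup, not from the paper: a synthetic severely
# ill-posed matrix with geometrically decaying singular values, a noisy
# right-hand side, and the sin(Theta) distance between the Krylov
# subspace K_k(A^T A, A^T b) and the k dominant right singular vectors.

def krylov_basis(A, b, k):
    """Orthonormal basis of K_k(A^T A, A^T b) via Lanczos-type recursion
    with full reorthogonalization (modified Gram-Schmidt)."""
    n = A.shape[1]
    V = np.zeros((n, k))
    v = A.T @ b
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, k):
        w = A.T @ (A @ V[:, j - 1])
        for i in range(j):                      # orthogonalize against previous basis vectors
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

def sin_theta_distance(A, V):
    """2-norm of sin(Theta) between span(V) and the span of the
    k dominant right singular vectors of A."""
    k = V.shape[1]
    _, _, Vt = np.linalg.svd(A)
    Vk = Vt[:k].T                               # dominant right singular subspace
    # sine of the largest canonical angle = sigma_max of (I - V V^T) Vk
    return np.linalg.norm((np.eye(A.shape[1]) - V @ V.T) @ Vk, 2)

rng = np.random.default_rng(0)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.9 ** np.arange(n)                     # severely ill-posed: geometric decay
A = U @ np.diag(sigma) @ W.T
x_true = W @ rng.standard_normal(n)
b = A @ x_true
b += 1e-3 * np.linalg.norm(b) * rng.standard_normal(n) / np.sqrt(n)  # white noise

for k in (2, 5, 10):
    V = krylov_basis(A, b, k)
    print(f"k = {k:2d}:  ||sin Theta|| ~ {sin_theta_distance(A, V):.3e}")
```

For such a synthetic severely ill-posed problem, the printed distances typically stay small as $k$ grows, which is the kind of behavior the paper's estimates quantify; the actual bounds and their dependence on the degree of ill-posedness are derived in the paper itself.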