
Adaptive Regularization Parameter Choice Rules for Large-Scale Problems

Published 12 Jul 2019 in math.NA and cs.NA (arXiv:1907.05666v1)

Abstract: This paper derives a new class of adaptive regularization parameter choice strategies that can be applied effectively and efficiently when regularizing large-scale linear inverse problems by combining standard Tikhonov regularization with projection onto Krylov subspaces of increasing dimension (computed by the Golub-Kahan bidiagonalization algorithm). The success of this regularization approach depends heavily on the accurate tuning of two parameters, namely the Tikhonov parameter and the dimension of the projection subspace. These are set simultaneously using new strategies that can be regarded as special instances of bilevel optimization methods, solved by a new paradigm that interlaces the iterations performed to project the Tikhonov problem (the lower-level problem) with those performed to apply a given parameter choice rule (the higher-level problem). The discrepancy principle, the GCV, the quasi-optimality criterion, and the Regińska criterion can all be adapted to work in this framework. The links between Gauss quadrature and Golub-Kahan bidiagonalization are exploited to prove convergence results for the discrepancy principle, and to give insight into the behavior of the other regularization parameter choice rules considered. Several numerical tests modeling inverse problems in imaging show that the new parameter choice strategies lead to regularization methods that are reliable, and intrinsically simpler and cheaper than strategies already available in the literature.
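The interlacing described in the abstract — expanding the Krylov subspace one Golub-Kahan step at a time (lower level) while applying a parameter choice rule to the projected Tikhonov problem (higher level) — can be sketched as follows. This is a simplified NumPy illustration under stated assumptions, not the paper's exact algorithm: the function names (`gkb_hybrid`, `projected_tikhonov`), the bisection search for the Tikhonov parameter, and the stopping/fallback logic are all hypothetical, and only the discrepancy principle (of the four rules mentioned) is shown.

```python
import numpy as np

def projected_tikhonov(B, beta1, lam):
    # Solve the projected Tikhonov problem min_y ||B y - beta1*e1||^2 + lam^2 ||y||^2
    # as one stacked least-squares problem [B; lam*I] y = [beta1*e1; 0].
    kp1, k = B.shape
    M = np.vstack([B, lam * np.eye(k)])
    rhs = np.zeros(kp1 + k)
    rhs[0] = beta1
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return y

def gkb_hybrid(A, b, delta, tau=1.01, max_iter=30):
    # Hybrid sketch: Golub-Kahan bidiagonalization (GKB) interlaced with
    # Tikhonov regularization; lam is chosen per iteration so that the
    # projected residual satisfies the discrepancy principle ||r|| = tau*delta,
    # where delta is the (assumed known) noise norm.
    m, n = A.shape
    beta1 = np.linalg.norm(b)
    u = b / beta1
    v_prev, beta_prev = np.zeros(n), 0.0
    alphas, betas, V = [], [], []
    for _ in range(max_iter):
        # --- lower level: one GKB step extends the Krylov subspace ---
        w = A.T @ u - beta_prev * v_prev
        alpha = np.linalg.norm(w)
        if alpha < 1e-14:
            break
        v = w / alpha
        u_new = A @ v - alpha * u
        beta = np.linalg.norm(u_new)
        if beta < 1e-14:
            break
        u = u_new / beta
        alphas.append(alpha); betas.append(beta); V.append(v)
        v_prev, beta_prev = v, beta
        # Projected bidiagonal matrix B_k of size (k+1) x k.
        k = len(alphas)
        B = np.zeros((k + 1, k))
        B[np.arange(k), np.arange(k)] = alphas
        B[np.arange(1, k + 1), np.arange(k)] = betas
        c = np.zeros(k + 1); c[0] = beta1  # projected right-hand side beta1*e1
        # --- higher level: discrepancy principle on the projected problem ---
        res = lambda lam: np.linalg.norm(B @ projected_tikhonov(B, beta1, lam) - c)
        if res(0.0) <= tau * delta:
            # The residual is monotonically increasing in lam: bracket then bisect
            # for the lam with ||B y_lam - beta1*e1|| = tau*delta.
            lo, hi = 0.0, 1.0
            while res(hi) < tau * delta:
                hi *= 2.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if res(mid) < tau * delta else (lo, mid)
            lam = 0.5 * (lo + hi)
            return np.column_stack(V) @ projected_tikhonov(B, beta1, lam), lam, k
    # Fallback: discrepancy never satisfied within max_iter iterations.
    return np.column_stack(V) @ projected_tikhonov(B, beta1, 0.0), 0.0, len(alphas)
```

Because `b - A V_k y = U_{k+1}(beta1*e1 - B_k y)` with orthonormal `U_{k+1}`, the cheap projected residual equals the full residual, which is what makes applying the rule at every iteration affordable; the other rules mentioned in the abstract (GCV, quasi-optimality, Regińska) would slot into the same higher-level step.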

Citations (1)
