Inertial Krasnoselskii-Mann Method
- The Inertial Krasnoselskii-Mann method is a fixed-point iteration enhanced with a momentum term, used to accelerate convergence in Hilbert space optimization.
- It combines inertial extrapolation with relaxation, achieving weak, strong, and linear convergence under carefully selected parameter conditions.
- The method underpins advanced operator splitting schemes in imaging, signal processing, and large-scale variational problems, offering robustness against computational errors.
The Inertial Krasnoselskii-Mann (IKM) method generalizes the classical Krasnoselskii-Mann (KM) fixed-point iteration by incorporating a momentum (inertia) term in addition to relaxation. This two-step approach has become a central construct in modern optimization and monotone operator theory, enabling the acceleration of fixed-point solving and monotone splitting algorithms for nonexpansive and quasi-nonexpansive operators on Hilbert spaces. IKM’s convergence theory includes weak, strong, and linear convergence in distinct regimes, robustifies iteration under computational or problem-driven perturbations, and underpins state-of-the-art splitting schemes in imaging, optimization, and large-scale variational problems.
1. Formal Description of the Inertial Krasnoselskii-Mann Iteration
The IKM iteration generalizes the classical KM update by adding an explicit inertial extrapolation:
- Hilbert space setting: let $H$ be a real Hilbert space and $T : H \to H$ a (quasi-)nonexpansive operator with $\operatorname{Fix}(T) \neq \emptyset$.
- Parameters: extrapolation (inertia) coefficients $\alpha_k \in [0, 1)$ and relaxation coefficients $\lambda_k \in (0, 1]$.
- Initialization: $x_0, x_1 \in H$.
The basic IKM step is, for $k \geq 1$,
$$y_k = x_k + \alpha_k (x_k - x_{k-1}), \qquad x_{k+1} = (1 - \lambda_k)\, y_k + \lambda_k T(y_k).$$
This framework covers both time-step and parameter-variant designs, coordinate/parallel extensions, adaptive and random updates, and variants with computational errors or anchoring terms (Maulén et al., 2022, Bot et al., 2014, Cui et al., 2019, Combettes et al., 2017, Wen et al., 2016).
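As an illustrative sketch (not code from any of the cited papers), the two-step structure of inertial extrapolation followed by a relaxed operator application can be implemented directly. The least-squares operator $T$, the constant parameters $\alpha = 0.3$ and $\lambda = 0.7$, and the iteration budget below are hypothetical choices:

```python
import numpy as np

def ikm(T, x0, alpha=0.3, lam=0.7, iters=500):
    """Inertial Krasnoselskii-Mann iteration (sketch):
    y_k     = x_k + alpha * (x_k - x_{k-1})     (inertial extrapolation)
    x_{k+1} = (1 - lam) * y_k + lam * T(y_k)    (relaxed KM step)
    """
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)
        x_prev, x = x, (1 - lam) * y + lam * T(y)
    return x

# Toy operator: gradient step for least squares, T(x) = x - gamma * A.T @ (A x - b).
# T is averaged (hence nonexpansive) for gamma <= 1 / ||A||^2; its fixed points
# are exactly the least-squares solutions of A x = b.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2
T = lambda x: x - gamma * A.T @ (A @ x - b)

x = ikm(T, np.zeros(10))
print(np.linalg.norm(x - T(x)))  # fixed-point residual, near zero
```

The operator here is merely a convenient averaged map on which the iteration is easy to verify; any (quasi-)nonexpansive `T` with a fixed point could be substituted.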
2. Convergence Theory and Parameter Conditions
The convergence of the IKM method hinges on operator regularity and parameter selection. Two principal settings are quasi-nonexpansive and quasi-contractive maps.
2.1. Weak Convergence for Quasi-Nonexpansive Operators
- If the operators $T_k$ are quasi-nonexpansive ($\|T_k x - z\| \leq \|x - z\|$ for all $x \in H$ and $z \in \operatorname{Fix}(T_k)$) and the parameters satisfy growth and compatibility restrictions (e.g., nondecreasing inertia $\alpha_k$, infimum of $\lambda_k(1 - \lambda_k)$ bounded away from zero, and control on inertia and error terms), the iterates converge weakly to a fixed point.
- Nonasymptotic residual rate: $\|x_k - T x_k\| = O(1/\sqrt{k})$, improving to $o(1/\sqrt{k})$ asymptotically for fixed-point residuals (Maulén et al., 2022, Cui et al., 2019, Combettes et al., 2017).
2.2. Strong and Linear Convergence
- For quasi-contractive operators ($\|Tx - z\| \leq q\|x - z\|$ for all $x$ and $z \in \operatorname{Fix}(T)$, with $0 < q < 1$) and appropriate parameter sequences, IKM achieves:
- Strong convergence to the unique fixed point under explicit bounds on the inertia and relaxation;
- Linear convergence: $\|x_k - x^*\| = O(\rho^k)$ for an explicit rate $\rho \in (0, 1)$ determined by $q$, $\alpha_k$, and $\lambda_k$.
- This matches or improves the classical non-inertial regime under the same parameter window (Maulén et al., 2022).
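The linear regime can be observed on a toy problem. In this sketch (operator, parameters, and iteration count are illustrative assumptions), the per-step error ratio of IKM on a scalar contraction settles near a constant $\rho < 1$:

```python
import numpy as np

# Scalar contraction T(x) = q*x + c with q = 0.5; unique fixed point x* = c/(1-q) = 2.
q, c = 0.5, 1.0
T = lambda x: q * x + c
x_star = c / (1 - q)

def ikm_errors(alpha, lam, iters=40):
    """Run IKM on T and record |x_k - x*| after every step."""
    x_prev = x = 0.0
    errs = []
    for _ in range(iters):
        y = x + alpha * (x - x_prev)              # inertial extrapolation
        x_prev, x = x, (1 - lam) * y + lam * T(y) # relaxed KM step
        errs.append(abs(x - x_star))
    return errs

errs = ikm_errors(alpha=0.1, lam=0.9)
rho = errs[-1] / errs[-2]  # empirical linear rate; settles to a constant < 1
print(rho)
```

For these particular values the empirical rate ends up below the plain KM contraction factor $1 - \lambda(1 - q)$, illustrating (on this toy problem only) that mild inertia can tighten the linear rate.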
2.3. Robustness and Perturbation Resilience
- Allowing a summable error sequence $(e_k)$ in the iterative step (the inexact IKM) preserves weak convergence, provided $\sum_k \|e_k\| < \infty$ and the inertia is controlled via sum-of-squares or monotonicity conditions (Cui et al., 2019).
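This resilience is easy to exercise numerically. In the following sketch (operator, error schedule, and parameters are hypothetical, not taken from the cited analysis), the operator evaluation is perturbed by errors of norm $1/k^2$, which are summable:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2
T = lambda x: x - gamma * A.T @ (A @ x - b)    # averaged, hence nonexpansive

x_prev = x = np.zeros(5)
for k in range(1, 2001):
    y = x + 0.3 * (x - x_prev)                 # inertial extrapolation
    e = rng.standard_normal(5)
    e *= 1.0 / (k * k * np.linalg.norm(e))     # summable errors: ||e_k|| = 1/k^2
    x_prev, x = x, 0.3 * y + 0.7 * (T(y) + e)  # inexact evaluation of T
print(np.linalg.norm(x - T(x)))                # residual stays small despite errors
```

Despite the perturbations, the fixed-point residual is driven to (numerically) negligible levels, consistent with the summability condition above.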
2.4. Strong Convergence with Anchoring (Halpern/Viscosity Enhancements)
- Embedding an external anchor (Halpern iteration) or a contraction (viscosity term) enables strong convergence to the metric projection or the unique viscosity solution, at the expense of step size decays and auxiliary conditions (Tan et al., 2020, Boţ et al., 2024).
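A minimal sketch of a Halpern-anchored inertial step, under assumed parameters ($\beta_k = 1/(k+1)$, anchor $u = 0$, $\alpha = 0.2$, $\lambda = 0.7$; none of these are taken from the cited papers). For a consistent underdetermined system, the anchored limit is the projection of the anchor onto the fixed-point set, here the minimum-norm solution:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2       # = 1/2 for this A
T = lambda x: x - gamma * A.T @ (A @ x - b)   # Fix(T) = {x : A x = b}

u = np.zeros(3)                                # anchor point
x_prev = x = np.zeros(3)
for k in range(1, 5001):
    beta = 1.0 / (k + 1)                       # Halpern parameter: beta -> 0, sum = inf
    y = x + 0.2 * (x - x_prev)                 # inertial extrapolation
    km = 0.3 * y + 0.7 * T(y)                  # relaxed KM step
    x_prev, x = x, beta * u + (1 - beta) * km  # Halpern anchoring toward u
# Strong-convergence theory predicts the limit P_{Fix(T)}(u), i.e. the
# minimum-norm solution of A x = b, which here is (1, 1, 1).
print(x)
```

Note that the anchoring term selects a particular fixed point (the one nearest the anchor), whereas the plain inertial KM iteration only guarantees weak convergence to some fixed point.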
3. Algorithmic Variants and Notable Designs
IKM serves as the modular core for many acceleration and splitting schemes:
| Variant / Paper | Update / Momentum | Notable Features / Context |
|---|---|---|
| Classical IKM (Maulén et al., 2022, Bot et al., 2014) | Inertia $\alpha_k$, relaxation $\lambda_k$, single operator | Weak/strong/linear convergence |
| Stochastic IKM (Wen et al., 2016) | Coordinate-wise random activation with inertia | a.s. convergence, block updates |
| Chebyshev-inertial (Wadayama et al., 2020) | Periodic inertial factors (from roots of Chebyshev polynomials) | Locally optimal linear rate |
| Adaptive-momentum (He et al., 28 Oct 2025) | Data-driven momentum from iterate geometry | Momentum adapted successively |
| Fast KM (Nesterov-style) (Bot et al., 2022) | Decaying relaxation, relaxed step | $o(1/k)$ residual rate, weak convergence |
| Generalized fast w/ precond. (Boţ et al., 2024) | Preconditioning, ODE origin, two-parameter momentum, anchor | Unified continuous/discrete rates, degenerate metrics |
Each variant modulates the momentum step, relaxation/anchoring, selection (full, random, or adaptive), and application of errors or preconditioning, according to problem structure and performance tradeoffs. Theoretical analysis often proceeds via Lyapunov/Energy functionals and generalized Fejér monotonicity.
4. Applications and Numerical Performance
IKM schemes are widely applied to monotone inclusion and convex optimization:
- Operator splitting: Inertial Douglas-Rachford, three-operator splitting, primal-dual splitting, often yielding improved iteration/CPU performance over classical schemes (Bot et al., 2014, 1904.11684, Maulén et al., 2022).
- Signal/image processing: Total-variation denoising, inpainting, and matrix completion, with 12–60% reductions in practical runtime and iteration counts when inertia is employed (Maulén et al., 2022, He et al., 28 Oct 2025, 1904.11684).
- Large-scale optimization: Stochastic coordinate-descent variants accommodate high-dimensional settings, preserving (almost sure) convergence (Wen et al., 2016).
- Geometric optimization problems: Beckmann optimal transport, geometric medians, clustering; accelerated variants provide fast convergence rates for primal-dual and splitting schemes (Boţ et al., 2024, Bot et al., 2014).
5. Parameter Selection and Practical Guidelines
- Inertial weight $\alpha_k$: typically $\alpha_k \in [0, 1)$, sometimes scheduled to increase slowly or tuned adaptively. Excessive inertia can degrade worst-case residual bounds but may accelerate practical convergence; it is best selected with geometric insight or variance control (Maulén et al., 2022, He et al., 28 Oct 2025).
- Relaxation $\lambda_k$: fixed or diminishing; in acceleration settings (e.g., Fast KM), inertia approaching one combined with decaying relaxation delivers the $o(1/k)$ residual rate (Bot et al., 2022, Boţ et al., 2024).
- Anchoring/Halpern parameter $\beta_k$: $\beta_k \to 0$ with $\sum_k \beta_k = \infty$ ensures the entire sequence is forced toward the solution in strong-convergence variants (Tan et al., 2020, Boţ et al., 2024).
- Perturbations: summability of errors ($\sum_k \|e_k\| < \infty$) or comparable control is sufficient for weak convergence; precise worst-case rates are available under further parameter decays (Cui et al., 2019).
- Preconditioning: strong or degenerate preconditioners are both admissible, with differences measured in the corresponding seminorms; this is critical in splitting and metric-geometry problems (Boţ et al., 2024).
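These guidelines can be probed empirically. The sweep below (illustrative operator and parameter grid, not drawn from the cited papers) reports the final fixed-point residual for several inertial weights at a fixed relaxation:

```python
import numpy as np

def ikm_residual(T, x0, alpha, lam=0.7, iters=2000):
    """Run IKM with constant parameters; return final residual ||x - T(x)||."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)
        x_prev, x = x, (1 - lam) * y + lam * T(y)
    return float(np.linalg.norm(x - T(x)))

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2
T = lambda x: x - gamma * A.T @ (A @ x - b)   # averaged least-squares map

for alpha in (0.0, 0.1, 0.3, 0.5):
    print(alpha, ikm_residual(T, np.zeros(10), alpha))
```

On well-conditioned problems all choices converge; how the residual curves compare across $\alpha$ is exactly the problem-dependent tradeoff the guidelines above describe.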
6. Broader Impact, Limitations, and Generalizations
IKM generalizes standard projection, relaxation, and momentum methods in a unifying two-step framework. Its design unifies Polyak/Heavy-ball, Nesterov acceleration, and Halpern/anchoring into a parameter-rich algorithmic family. While inertia can increase the practical speed of convergence, it may cause oscillatory behavior or slow the worst-case theoretical rates; achieving optimal acceleration remains context-dependent, and parameter selection often requires problem-specific tuning or a posteriori adaptation. Open questions include deriving optimal global step schedules and developing theories for nonmonotone/inexact and nonconvex regimes (Maulén et al., 2022, He et al., 28 Oct 2025, Boţ et al., 2024).
The IKM paradigm is foundational in modern operator theory and optimization, with variations underpinning the fastest first-order methods for large-scale and structured problems across signal processing, computational imaging, and data science.
References
- (Maulén et al., 2022) Inertial Krasnoselskii-Mann Iterations
- (Bot et al., 2014) Inertial Douglas-Rachford splitting for monotone inclusion problems
- (1904.11684) An inertial three-operator splitting algorithm with applications to image inpainting
- (Bot et al., 2022) Fast Krasnosel'skii-Mann algorithm with a convergence rate of the fixed point iteration of $o(1/k)$
- (Wadayama et al., 2020) Chebyshev Inertial Iteration for Accelerating Fixed-Point Iterations
- (Boţ et al., 2024) Generalized Fast Krasnoselskii-Mann Method with Preconditioners
- (Cui et al., 2019) Convergence analysis of an inexact inertial Krasnoselskii-Mann algorithm with applications
- (Tan et al., 2020) Strong convergence of modified inertial Mann algorithms for nonexpansive mappings
- (Combettes et al., 2017) Quasinonexpansive Iterations on the Affine Hull of Orbits: From Mann's Mean Value Algorithm to Inertial Methods
- (Wen et al., 2016) A stochastic coordinate descent inertial primal-dual algorithm for large-scale composite optimization
- (He et al., 28 Oct 2025) A Two-step Krasnosel'skii-Mann Algorithm with Adaptive Momentum and Its Applications to Image Denoising and Matrix Completion