Geometrically Convergent Iterative Methods
- Geometrically convergent iterative methods are algorithms whose error shrinks at each step by a fixed contraction factor strictly less than one.
- They are widely applied in numerical analysis, optimization, and control theory, with convergence rates influenced by spectral properties and filter designs.
- Practical implementations, including fixed-point iterations, cyclic projection schemes, and Aitken extrapolation, demonstrate the efficiency and robustness of these methods.
A geometrically convergent iterative method is an algorithm where the sequence of iterates converges to the solution at a rate proportional to a power of a constant less than one; i.e., the error at iteration $k$ satisfies $\|x_k - x^*\| \le C q^k$ for some $C > 0$ and $q \in (0,1)$. Such methods are foundational in numerical analysis, optimization, control theory, and scientific computing, where rapid and predictable convergence is essential. The convergence rate $q$ (also termed the contraction factor) is determined by spectral properties of the underlying operators, the design of the iteration, and, in advanced cases, by explicit convex optimization procedures.
1. Fundamental Principles of Geometric Convergence
The notion of geometric convergence applies to linear and nonlinear iterative processes. For a mapping $T: X \to X$ (where $X$ could be a Banach space, a cone metric space, or a Euclidean space), geometric convergence is established if there exists a fixed point $x^*$ such that

$$d(T^k x_0, x^*) \le C q^k$$

with $q \in (0,1)$ and $C > 0$, for all $x_0$ in a suitable domain. In the context of fixed-point iterations, this translates to linear convergence of the Picard sequence $x_{k+1} = T(x_k)$, with explicit geometric bounds provided under contractivity conditions and with suitable gauge/control functions in more abstract spaces (Proinov, 2015). In distributed optimization, geometric (or R-linear) convergence is proved for algorithms such as ATC-DIGing under assumptions of strong convexity, smoothness, and network connectivity, with explicit dependence of the rate $q$ (or the R-linear constant) on algorithmic and spectral parameters (Nedić et al., 2016).
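As a minimal sketch of geometric convergence for a Picard iteration, consider $T(x) = \cos(x)$ (an example map chosen here for illustration, not taken from the cited works): the successive error ratios settle near the asymptotic contraction factor $|T'(x^*)|$.

```python
import math

# Picard iteration x_{k+1} = T(x_k) for T(x) = cos(x), a contraction on
# [0, 1] (|T'(x)| = sin(x) <= sin(1) < 1), so the Banach fixed-point
# theorem guarantees geometric convergence. The map is an illustrative
# choice, not taken from the cited works.
def picard(T, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(T(xs[-1]))
    return xs

xs = picard(math.cos, 0.5, 60)
x_star = xs[-1]                       # converged reference value

errors = [abs(x - x_star) for x in xs[:20]]
ratios = [errors[k + 1] / errors[k] for k in range(19)]
print(ratios[-1])  # ≈ sin(x*) ≈ 0.67, the asymptotic contraction factor
```

The observed ratio approaches $\sin(x^*) \approx 0.67$, the local contraction factor at the fixed point, illustrating the linear (geometric) convergence of the Picard sequence.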
2. Geometrically Convergent Iterative Learning Control
In robust monotonically (geometrically) convergent iterative learning control (ILC) for uncertain linear systems, the iterative update law is posed in either the time or the z-domain as

$$u_{k+1}(z) = Q(z)\big(u_k(z) + L(z)\,e_k(z)\big),$$

where $Q(z)$ is a robustness filter and $L(z)$ is the learning filter, and the plant input/output follows $y_k(z) = P(z)\,u_k(z)$ (Su, 2020). The convergence is analyzed via the error propagation map

$$e_{k+1}(z) = Q(z)\big(1 - P(z)L(z)\big)\,e_k(z).$$

The robust geometric convergence is defined by the existence of $\kappa \in (0,1)$ such that

$$\|e_{k+1}\| \le \kappa\,\|e_k\|$$

for all $k$ and all admissible plant uncertainties. This is equivalently expressed as a matrix inequality for all $z$ on the unit circle. The optimal design reduces to solving a convex LMI/SOS program for the minimal $\kappa$, which guarantees the geometric error decay. The order of the learning filter directly affects the achievable $\kappa$, with higher order granting more rapid geometric convergence but increased computational overhead.
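A minimal scalar sketch of the ILC error propagation: treating the z-domain filters $Q$, $L$ and the plant $P$ as single gains (hypothetical values chosen here for illustration, not from Su, 2020), the error contracts by $\kappa = |Q(1 - PL)|$ per trial.

```python
# Scalar sketch of the ILC error recursion e_{k+1} = Q*(1 - P*L)*e_k,
# with the filters Q(z), L(z) and plant P(z) replaced by single gains
# (hypothetical values chosen for illustration, not from the paper).
Q, P, L = 0.95, 2.0, 0.4
kappa = abs(Q * (1 - P * L))   # contraction factor, here 0.95*0.2 = 0.19

e, errors = 1.0, [1.0]
for _ in range(10):
    e = Q * (1 - P * L) * e
    errors.append(abs(e))

print(errors[1] / errors[0])   # ≈ kappa: monotonic geometric decay
```

In the full design problem, the gains become frequency-dependent filters and the scalar bound $\kappa < 1$ becomes a matrix inequality, which is what the convex LMI/SOS program minimizes.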
3. Geometric Iterations for Linear Systems
A classical geometrically convergent method for solving $Ax = b$ is the cyclic Kaczmarz-type projection scheme

$$x^{(k+1)} = x^{(k)} + \frac{b_i - \langle a_i, x^{(k)}\rangle}{\|a_i\|^2}\, a_i$$

(Khugaev et al., 2010). Each iterate is the orthogonal projection onto the hyperplane $\langle a_i, x\rangle = b_i$, cycling through all rows $a_i$ of $A$. The geometric convergence is captured as

$$\|x^{(k+1)} - x^*\| = \|x^{(k)} - x^*\|\,\sin\theta_k,$$

where $\theta_k$ are the angles between the error and the row normals. Convergence is geometric provided the system is consistent and the row vectors are not mutually parallel. Compared to Jacobi and Gauss–Seidel, this method can be more robust to ill-scaling and is particularly useful for sparse or tomography-like problems.
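A minimal implementation of the cyclic projection scheme on a small consistent system (the 2x2 example is chosen here for illustration):

```python
# Cyclic Kaczmarz iteration for a consistent system A x = b: each step
# orthogonally projects the iterate onto the hyperplane <a_i, x> = b_i.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def kaczmarz(A, b, x, sweeps):
    x = list(x)
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            c = (b_i - dot(a_i, x)) / dot(a_i, a_i)
            x = [xj + c * aj for xj, aj in zip(x, a_i)]
    return x

A = [[3.0, 1.0], [1.0, 2.0]]           # rows not mutually parallel
x_true = [1.0, -1.0]
b = [dot(row, x_true) for row in A]    # consistent by construction
x = kaczmarz(A, b, [0.0, 0.0], 50)
err = max(abs(xi - ti) for xi, ti in zip(x, x_true))
print(err)  # near machine precision after 50 sweeps
```

Here the two hyperplanes meet at 45 degrees, so the error contracts by roughly $\cos 45^\circ \approx 0.71$ per projection; sharper angles between rows give slower geometric rates.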
4. Algorithmic Realizations and Spectral Rate Optimization
Many iterative schemes, including distributed optimization (e.g., ATC-DIGing), preconditioned eigensolvers, and nonlinear solvers, exhibit geometric convergence dictated by algorithmic parameters and spectral constraints. In ATC-DIGing, the geometric rate $\lambda$ depends on the agent step-sizes, the smoothness and strong-convexity constants, and the graph connectivity; explicit upper bounds for $\lambda$ are derived via small-gain theorem analysis combining system, mixing, and algorithmic parameters (Nedić et al., 2016). For preconditioned steepest descent (PSD) on generalized eigenproblems, geometric convergence is enforced via the Rayleigh–Ritz acceleration:

$$\frac{\lambda^{(k+1)} - \lambda_1}{\lambda_2 - \lambda^{(k+1)}} \;\le\; \sigma^2\, \frac{\lambda^{(k)} - \lambda_1}{\lambda_2 - \lambda^{(k)}},$$

with $\sigma < 1$ a function of preconditioner quality and eigenvalue gaps, and strict improvement over fixed-step methods (Neymeyr, 2011).
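The link between spectral properties and the geometric rate can be seen on a simpler stand-in, Richardson iteration (an illustrative choice, not one of the methods above), whose rate is exactly the spectral radius of the iteration matrix:

```python
# Richardson iteration x_{k+1} = x_k + w*(b - A x_k) on a 2x2 SPD system:
# the error obeys e_{k+1} = (I - w*A) e_k, so the geometric rate equals
# the spectral radius rho(I - w*A). Here A has eigenvalues 1 and 3, and
# w = 0.5 gives rho = max(|1 - 0.5*1|, |1 - 0.5*3|) = 0.5.
A = [[2.0, 1.0], [1.0, 2.0]]
w = 0.5
x_true = [1.0, 2.0]
b = [A[0][0] * x_true[0] + A[0][1] * x_true[1],
     A[1][0] * x_true[0] + A[1][1] * x_true[1]]

x = [0.0, 0.0]
errs = []
for _ in range(40):
    r = [b[i] - (A[i][0] * x[0] + A[i][1] * x[1]) for i in range(2)]
    x = [x[i] + w * r[i] for i in range(2)]
    errs.append(max(abs(x[i] - x_true[i]) for i in range(2)))

print(errs[10] / errs[9])  # observed rate matches the predicted 0.5
```

The same principle drives the more sophisticated schemes: step-sizes, mixing matrices, or preconditioners are tuned precisely to shrink the spectral quantity that governs the contraction factor.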
5. Higher-Order Aitken Extrapolation and Divergence Acceleration
Aitken’s $\Delta^2$ process is a classical extrapolation approach that accelerates geometric (or even divergent) convergence by removing the dominant mode of a geometric error series:

$$\hat{x}_k = x_k - \frac{(x_{k+1} - x_k)^2}{x_{k+2} - 2x_{k+1} + x_k}.$$

If the iteration matrix $G$ of a fixed-point iteration has eigenvalues $|\lambda_1| > |\lambda_2| \ge \cdots$ with $|\lambda_1| < 1$, then the error decays geometrically at rate $|\lambda_1|$; Aitken’s formula accelerates this to locally $|\lambda_2|$. Higher-order extensions successively deflate the dominant eigenvalues, so that after $m$-th order extrapolation the convergence rate becomes $|\lambda_{m+1}|$ (Tiruneh, 2013). Remarkably, Aitken can extract solutions even when the original iteration diverges (i.e., $|\lambda_1| > 1$), provided the extrapolation order removes all modes with modulus exceeding one.
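The divergence-acceleration property can be demonstrated directly: for a purely geometric sequence with a single mode, Aitken's formula cancels that mode exactly, even when the sequence diverges (the sequence below is a synthetic example, not from the paper).

```python
# Aitken's Delta^2 extrapolation removes the dominant geometric error mode:
#   x_hat = x0 - (x1 - x0)^2 / (x2 - 2*x1 + x0)
def aitken(x0, x1, x2):
    return x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)

# A purely geometric, DIVERGENT sequence x_k = x* + c*r^k with |r| > 1:
# Aitken cancels the single mode exactly and recovers x* = 2.
x_star, c, r = 2.0, 3.0, 1.5
xs = [x_star + c * r**k for k in range(3)]
print(aitken(*xs))  # 2.0, despite the iterates 5.0, 6.5, 8.75 diverging
```

When several modes are present, a single application only suppresses the dominant one; the higher-order extensions apply the same cancellation repeatedly to deflate successive eigenvalues.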
6. Domain Decomposition and PDE Solvers: Geometric Convergence with Cross-Points
The Dirichlet–Neumann (DN) method is a domain decomposition technique for elliptic PDEs that achieves geometric convergence under suitably structured decompositions. For strip-like decompositions, DN converges at a geometric rate governed by the relaxation parameter $\theta$ (e.g., $|1 - 2\theta|$ per iteration for a symmetric two-subdomain split). In the presence of interior cross-points where multiple subdomains meet, a variant using an even–odd decomposition of the solution and a rotated DN transmission condition restores well-posedness and preserves geometric convergence for both symmetric components (Chaudet-Dumas et al., 2023). The geometric rate is independent of mesh size and dimensionality; numerical experiments confirm convergence factors matching the predicted rates in both 2D and 3D.
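A minimal 1D sketch of the DN iteration (an analytic two-subdomain model problem chosen here for illustration; it does not involve cross-points) shows the geometric contraction of the interface error:

```python
# Illustrative 1D Dirichlet-Neumann iteration for -u'' = 0 on [0, 1]
# with u(0) = 0, u(1) = 1 (exact solution u(x) = x), interface at x = a.
# Subdomain solves are analytic, so the iteration reduces to a scalar map:
#   Omega1 = [0, a]: Dirichlet solve with u1(a) = g  ->  u1(x) = g*x/a
#   Omega2 = [a, 1]: Neumann solve with u2'(a) = g/a, u2(1) = 1
#                    ->  u2(x) = 1 + (g/a)*(x - 1)
# Relaxed interface update: g <- (1 - theta)*g + theta*u2(a), which
# contracts the interface error by |1 - theta/a| per iteration.
a, theta = 0.5, 0.25
g, g_star = 0.0, a                   # exact interface value is u(a) = a
errs = []
for _ in range(30):
    u2_at_a = 1.0 + (g / a) * (a - 1.0)
    g = (1 - theta) * g + theta * u2_at_a
    errs.append(abs(g - g_star))
rate = errs[5] / errs[4]             # observed contraction factor
print(rate)  # 0.5 = |1 - theta/a| for theta = 0.25, a = 0.5
```

With the symmetric split $a = 1/2$ the factor is $|1 - 2\theta|$, so $\theta = 1/2$ converges in one step; the rate depends only on $\theta$ and the geometry, not on any mesh.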
7. Abstract Generalizations and Applications
General convergence theorems in cone metric spaces ensure geometric convergence of iterative processes under contractivity conditions for Picard iteration or more general mappings (Proinov, 2015). Applications include simultaneous root-finding (the Weierstrass method), nonlinear system solvers, and fixed-point iterations where explicit geometric error bounds and residual-based estimates provide rigorous guarantees. The contraction factors $q$ (in normed or ordered spaces) directly quantify the rate of geometric decay, and functional frameworks for initial conditions, control functions, and completeness yield semilocal convergence results with precise a priori and a posteriori error bounds.
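The a posteriori estimate for a contraction with factor $q$ reads $\|x^* - x_k\| \le \frac{q}{1-q}\,\|x_k - x_{k-1}\|$, a computable bound using only successive iterates. A quick numerical check (the map $T(x) = \tfrac{1}{2}\cos x$ is an illustrative choice, not taken from the cited works):

```python
import math

# A posteriori error bound for a contraction with factor q:
#   |x* - x_k| <= q / (1 - q) * |x_k - x_{k-1}|
# Checked on T(x) = 0.5*cos(x), a contraction with q = 0.5 (|T'| <= 0.5).
T = lambda x: 0.5 * math.cos(x)
q = 0.5

x_star = 0.3                       # reference fixed point: iterate far
for _ in range(200):               # past convergence
    x_star = T(x_star)

x_prev, x = 0.3, T(0.3)
for _ in range(20):
    bound = q / (1 - q) * abs(x - x_prev)
    assert abs(x_star - x) <= bound    # the bound holds at every step
    x_prev, x = x, T(x)
```

Such residual-based bounds are what make geometric convergence practically useful: they give a certified stopping criterion without knowing the exact solution.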