
Geometrically Convergent Iterative Methods

Updated 30 January 2026
  • Geometrically convergent iterative methods are algorithms that reduce the error by a fixed contraction factor less than one at every iteration.
  • They are widely applied in numerical analysis, optimization, and control theory, with convergence rates influenced by spectral properties and filter designs.
  • Practical implementations, including fixed-point iterations, cyclic projection schemes, and Aitken extrapolation, demonstrate the efficiency and robustness of these methods.

A geometrically convergent iterative method is an algorithm whose sequence of iterates converges to the solution at a rate proportional to a power of a constant less than one; i.e., the error at iteration $k$ satisfies $\|x^{(k)} - x^*\| \leq \gamma^k \|x^{(0)} - x^*\|$ for some $0 < \gamma < 1$. Such methods are foundational in numerical analysis, optimization, control theory, and scientific computing, where rapid and predictable convergence is essential. The convergence rate $\gamma$ (also termed the contraction factor) is determined by the spectral properties of the underlying operators, the design of the iteration, and, in advanced cases, by explicit convex optimization procedures.
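As a concrete illustration of the definition, the sketch below (an illustrative example, not taken from any cited paper) runs the Picard iteration for the contraction $T(x) = \cos x$ on $[0,1]$, where $|T'(x)| \leq \sin(1) < 1$, and checks the geometric error bound numerically:

```python
import math

def picard(T, x0, n):
    """Run n steps of the fixed-point (Picard) iteration x_{k+1} = T(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(T(xs[-1]))
    return xs

T = math.cos              # contraction on [0, 1]: |T'(x)| = |sin x| <= sin(1) < 1
gamma = math.sin(1.0)     # a valid contraction factor on [0, 1]
xs = picard(T, 1.0, 60)
x_star = xs[-1]           # essentially converged after 60 steps

# Verify the geometric bound |x_k - x*| <= gamma^k * |x_0 - x*| on early iterates.
e0 = abs(xs[0] - x_star)
for k in range(1, 20):
    assert abs(xs[k] - x_star) <= gamma ** k * e0 + 1e-12
```

The bound is conservative here: the observed per-step contraction is closer to $|\sin x^*| \approx 0.67$ than to the worst-case $\sin(1) \approx 0.84$.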

1. Fundamental Principles of Geometric Convergence

The notion of geometric convergence applies to linear and nonlinear iterative processes. For a mapping $T : X \to X$ (where $X$ may be a Banach space, a cone metric space, or a Euclidean space), geometric convergence is established if there exists a fixed point $x^*$ such that

$$d(x_{n+1}, x^*) \leq \gamma\, d(x_n, x^*)$$

with $0 < \gamma < 1$ for all $n$ in a suitable domain. In the context of fixed-point iterations, this translates to linear convergence of the Picard sequence, with explicit geometric bounds provided under contractivity conditions and with suitable gauge/control functions in more abstract spaces (Proinov, 2015). In distributed optimization, geometric (or R-linear) convergence is proved for algorithms such as ATC-DIGing under assumptions of strong convexity, smoothness, and network connectivity, with explicit dependence of $\gamma$ (or $\lambda$) on algorithmic and spectral parameters (Nedić et al., 2016).
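To make the distributed case concrete, here is a minimal illustrative ATC-DIGing sketch: a scalar decision variable, three agents with quadratic local costs, and a hypothetical doubly stochastic mixing matrix $W$; the step-size $\alpha = 0.05$ is simply chosen small, not optimized. All agents converge geometrically to the global minimizer:

```python
import numpy as np

# Hypothetical setup: f_i(x) = 0.5 * a[i] * (x - b[i])^2, so the global
# optimum of sum_i f_i is x* = sum(a*b) / sum(a).
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0])
x_star = (a @ b) / a.sum()

# Doubly stochastic mixing matrix for a 3-agent network with self-loops.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def grad(x):
    return a * (x - b)                        # stacked local gradients

alpha = 0.05
x = np.zeros(3)
y = grad(x)                                   # gradient tracker, y_0 = grad f(x_0)
for _ in range(600):
    x_new = W @ (x - alpha * y)               # adapt-then-combine consensus step
    y = W @ (y + grad(x_new) - grad(x))       # gradient-tracking update
    x = x_new

# All agents agree on the global minimizer, with geometric (R-linear) decay.
assert np.max(np.abs(x - x_star)) < 1e-8
```

The mixing spectral gap and the step-size jointly set the observed rate, in line with the small-gain analysis cited above.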

2. Geometrically Convergent Iterative Learning Control

In robust monotonic (geometric) convergent iterative learning control (ILC) for uncertain linear systems, the iterative update law is posed in either the time or the z-domain as
$$u_{k+1}(t) = Q(q)\left[u_k(t) + L(q)\, e_k(t+1)\right],$$
where $Q(q)$ is a robustness filter, $L(q)$ is the learning filter, and the plant input/output follows $y_k = P(q,\lambda)\, u_k + d$ (Su, 2020). The convergence is analyzed via the error propagation map

$$e_{k+1} - e_\infty = A(\lambda)(e_k - e_\infty), \qquad A(\lambda) = P(\lambda)\, Q\, (I - L P(\lambda))\, P(\lambda)^{-1}.$$

Robust geometric convergence is defined by the existence of $\gamma < 1$ such that

$$\|e_{k+1} - e_\infty\|_2 \leq \gamma\, \|e_k - e_\infty\|_2 \quad \text{for all } \lambda \in \Lambda,\ k.$$

This is equivalently expressed as the matrix inequality $\gamma^2 I - A(\lambda)^\top A(\lambda) \succ 0$ for all $\lambda$. The optimal design reduces to solving a convex LMI/SOS program for the minimal $\gamma$, which guarantees geometric error decay. The order of the learning filter $L(q)$ directly affects the achievable $\gamma^*$: a higher-order filter grants more rapid geometric convergence at the cost of increased computational overhead.
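The error recursion can be checked numerically in the lifted (supervector) setting. The sketch below is an illustrative toy example with a nominal plant only (no uncertainty set $\Lambda$): $P$ is the lower-triangular Toeplitz matrix of an assumed impulse response, $Q = I$, and $L = 0.5\,I$, so that $e_{k+1} = (I - PL)\, e_k$ and $\gamma = \|I - PL\|_2 < 1$:

```python
import numpy as np

N = 8
h = 0.5 ** np.arange(N)              # assumed plant impulse response (toy example)
P = np.zeros((N, N))
for i in range(N):
    P[i, : i + 1] = h[i::-1]         # lifted plant: lower-triangular Toeplitz

L = 0.5 * np.eye(N)                  # learning filter; Q = I for simplicity
A = np.eye(N) - P @ L                # error propagation: e_{k+1} = A e_k
gamma = np.linalg.norm(A, 2)         # contraction factor = largest singular value
assert gamma < 1                     # monotonic geometric convergence holds

y_d = np.ones(N)                     # reference trajectory
u = np.zeros(N)
errs = []
for _ in range(30):
    e = y_d - P @ u                  # tracking error on the current trial
    errs.append(np.linalg.norm(e))
    u = u + L @ e                    # ILC update u_{k+1} = Q (u_k + L e_k)

# Each trial contracts the error norm by at least gamma.
for k in range(len(errs) - 1):
    assert errs[k + 1] <= gamma * errs[k] + 1e-12
```

In the robust setting the same check must hold for every plant in the uncertainty set, which is exactly what the LMI/SOS design enforces.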

3. Geometric Iterations for Linear Systems

A classical geometrically convergent method for solving $Ax = b$ is the cyclic Kaczmarz-type projection scheme, where (for normalized rows, $\|a_{i(k)}\|_2 = 1$)
$$x^{(k+1)} = x^{(k)} + \left(b_{i(k)} - a_{i(k)}^T x^{(k)}\right) a_{i(k)}$$
(Khugaev et al., 2010). Each iterate is the orthogonal projection onto the hyperplane $H_i = \{x : a_i^T x = b_i\}$, cycling through all rows of $A$. The geometric convergence is captured as
$$\|x^{(k+n)} - x^*\|_2 \leq \rho\, \|x^{(k)} - x^*\|_2, \qquad \rho = \prod_{i=1}^n |\cos\theta_i| < 1,$$
where $\theta_i$ are the angles between the error and the row normals. Convergence is geometric provided the system is consistent and the row vectors are not mutually parallel. Compared to Jacobi and Gauss–Seidel, this method can be more robust to ill-scaling and is particularly useful for sparse or tomography-like problems.
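A minimal sketch of the cyclic scheme (illustrative; this general form divides by $\|a_i\|_2^2$, so rows need not be pre-normalized):

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps):
    """Cyclic Kaczmarz: orthogonally project the iterate onto each
    hyperplane a_i^T x = b_i in turn (one sweep = one pass over all rows)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true                        # consistent system by construction
x = kaczmarz(A, b, np.zeros(2), sweeps=100)
assert np.allclose(x, x_true, atol=1e-10)
```

For this 2x2 example the row normals meet at 45 degrees, so the error contracts by roughly $\cos^2 45^\circ = 0.5$ per sweep.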

4. Algorithmic Realizations and Spectral Rate Optimization

Many iterative schemes, including distributed optimization (e.g., ATC-DIGing), preconditioned eigensolvers, and nonlinear solvers, exhibit geometric convergence dictated by algorithmic parameters and spectral constraints. In ATC-DIGing, the geometric rate $\lambda$ depends on the agent step-sizes, smoothness, strong convexity, and graph connectivity:
$$\|\mathbf{x}_k - \mathbf{1}(x^*)'\|_F \leq C \lambda^k.$$
Explicit upper bounds for $\lambda$ are derived via a small-gain theorem analysis combining system, mixing, and algorithmic parameters (Nedić et al., 2016). For preconditioned steepest descent (PSD) on generalized eigenproblems, geometric convergence is enforced via Rayleigh–Ritz acceleration:
$$\frac{\rho(x') - \lambda_i}{\lambda_{i+1} - \rho(x')} \leq \sigma^2\, \frac{\rho(x) - \lambda_i}{\lambda_{i+1} - \rho(x)},$$
with $\sigma$ a function of preconditioner quality and eigenvalue gaps, and strict improvement over fixed-step methods (Neymeyr, 2011).
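A compact sketch of PSD with Rayleigh–Ritz acceleration for the smallest eigenpair of $Ax = \lambda Bx$ (illustrative: a diagonal test matrix and an identity preconditioner $T$ are assumptions for the demo; in practice $T \approx A^{-1}$):

```python
import numpy as np

def psd_smallest(A, B, T, x, iters=200):
    """Preconditioned steepest descent with Rayleigh-Ritz on span{x, T r}
    for the smallest eigenpair of the generalized problem A x = lam B x."""
    for _ in range(iters):
        rho = (x @ A @ x) / (x @ B @ x)           # Rayleigh quotient
        r = A @ x - rho * (B @ x)                 # eigen-residual
        if np.linalg.norm(r) < 1e-12:
            break
        S = np.column_stack([x, T @ r])           # two-dimensional trial space
        Ah, Bh = S.T @ A @ S, S.T @ B @ S         # projected 2x2 pencil
        Lc = np.linalg.cholesky(Bh)
        Li = np.linalg.inv(Lc)
        mu, Wv = np.linalg.eigh(Li @ Ah @ Li.T)   # Ritz values of the pencil
        x = S @ (Li.T @ Wv[:, 0])                 # Ritz vector for smallest mu
        x = x / np.linalg.norm(x)
    return (x @ A @ x) / (x @ B @ x), x

A = np.diag([1.0, 2.0, 3.0, 10.0])                # assumed SPD test matrix
B = np.eye(4)
rho, x = psd_smallest(A, B, np.eye(4), np.ones(4))
assert abs(rho - 1.0) < 1e-10                     # smallest eigenvalue of (A, B)
```

The Rayleigh–Ritz step on the two-dimensional space guarantees the new Rayleigh quotient is no worse than any fixed-step update along the preconditioned residual, which is the source of the strict improvement cited above.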

5. Higher-Order Aitken Extrapolation and Divergence Acceleration

Aitken’s $\Delta^2$ process is a classical extrapolation approach that accelerates geometrically convergent (or even divergent) sequences by removing the dominant mode of a geometric error series:
$$x^* \approx x_k - \frac{(\Delta x_k)^2}{\Delta^2 x_k}.$$
If the iteration matrix $M$ of a fixed-point iteration $x^{(k+1)} = M x^{(k)} + c$ satisfies $\rho(M) < 1$, the error decays geometrically at rate $\rho(M)$; Aitken’s formula accelerates this to a local rate of $\rho(M)^2$. Higher-order extensions successively deflate the dominant eigenvalues, so that after $k$-th order extrapolation the convergence rate becomes $|\lambda_{k+1}|$ (Tiruneh, 2013). Remarkably, Aitken extrapolation can extract solutions even when the original iteration diverges (i.e., $\rho(M) > 1$), provided the extrapolation order removes all modes with modulus exceeding one.
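The formula is easy to verify on a scalar linear iteration, whose error is purely geometric, so Aitken’s $\Delta^2$ is exact; it even recovers the fixed point of a divergent iteration with $\rho = 2$. Illustrative sketch:

```python
def aitken(xs):
    """Aitken Delta^2: x_k - (Delta x_k)^2 / Delta^2 x_k for each valid k."""
    return [x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)
            for x0, x1, x2 in zip(xs, xs[1:], xs[2:])]

# Convergent linear iteration x_{k+1} = 0.9 x_k + 0.5, fixed point x* = 5.
xs = [0.0]
for _ in range(4):
    xs.append(0.9 * xs[-1] + 0.5)
assert all(abs(a - 5.0) < 1e-12 for a in aitken(xs))   # exact on geometric errors

# Divergent iteration x_{k+1} = 2 x_k - 5 (rho = 2), same fixed point x* = 5.
ys = [4.0]
for _ in range(4):
    ys.append(2.0 * ys[-1] - 5.0)
assert all(abs(a - 5.0) < 1e-12 for a in aitken(ys))   # still recovers x*
```

With a single geometric mode the extrapolant is exact from three terms; for matrix iterations with several modes, the higher-order extensions deflate them one at a time.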

6. Domain Decomposition and PDE Solvers: Geometric Convergence with Cross-Points

The Dirichlet–Neumann (DN) method is a domain decomposition technique for elliptic PDEs that achieves geometric convergence under suitably structured decompositions. For strip-like decompositions, DN converges at rate $\rho = |1 - 2\theta|$ per iteration for relaxation parameter $\theta \in (0, 1)$. In the presence of interior cross-points where multiple subdomains meet, a variant using an even–odd decomposition of the solution and a rotated DN transmission condition restores well-posedness and preserves geometric convergence for both symmetric components (Chaudet-Dumas et al., 2023). The geometric rate is independent of mesh size and dimensionality; numerical experiments confirm convergence factors matching the predicted $\rho$ in both 2D and 3D.
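The $\rho = |1 - 2\theta|$ rate can be seen in a minimal 1D sketch (an illustrative example: $-u'' = 0$ on $(0,1)$ with homogeneous boundary data, so the true solution is zero and the interface value $g$ is exactly the error; both subdomain solves are done analytically):

```python
def dn_step(g, theta, a=0.5):
    """One Dirichlet-Neumann iteration for -u'' = 0 on (0, 1) with
    u(0) = u(1) = 0, split at x = a. The true solution is u = 0, so the
    interface value g IS the error; subdomain solves are analytic."""
    # Dirichlet solve on (0, a): u1 linear, u1(0) = 0, u1(a) = g -> u1'(a) = g / a
    flux = g / a
    # Neumann solve on (a, 1): u2 linear, u2'(a) = flux, u2(1) = 0 -> u2(a) = flux * (a - 1)
    u2a = flux * (a - 1.0)
    # Relaxed update of the Dirichlet interface trace.
    return theta * u2a + (1.0 - theta) * g

theta, g = 0.6, 1.0
for _ in range(5):
    g_next = dn_step(g, theta)
    # Symmetric split (a = 1/2): the contraction factor is exactly 1 - 2*theta.
    assert abs(g_next - (1.0 - 2.0 * theta) * g) < 1e-14
    g = g_next
assert abs(g) < 1e-3       # |1 - 2*0.6|^5 = 0.2^5 = 3.2e-4
```

Note that $\theta = 1/2$ converges in a single step for this symmetric split, matching $\rho = |1 - 2\theta| = 0$.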

7. Abstract Generalizations and Applications

General convergence theorems in cone metric spaces ensure geometric convergence of iterative processes under contractivity conditions for Picard iteration or for general mappings $T$ (Proinov, 2015). Applications include simultaneous root-finding (the Weierstrass method), nonlinear system solvers, and fixed-point iterations, where explicit geometric error bounds and residual-based estimates provide rigorous guarantees. The contraction factors $K$ (in normed or ordered spaces) directly quantify the rate of geometric decay, and functional frameworks for initial conditions, control functions, and completeness yield semilocal convergence results with precise a priori and a posteriori error bounds.
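For instance, the Weierstrass (Durand–Kerner) method refines all root approximations of a monic polynomial simultaneously. The sketch below is illustrative (a Jacobi-style update and standard complex starting points are assumptions of the demo) and finds all roots of $(x-1)(x-2)(x-3)$:

```python
def weierstrass(coeffs, z0, iters=100):
    """Weierstrass / Durand-Kerner: simultaneous root refinement for a
    monic polynomial given by coeffs (leading coefficient first)."""
    def p(x):
        out = 0j
        for c in coeffs:                      # Horner evaluation
            out = out * x + c
        return out

    z = list(z0)
    for _ in range(iters):
        z_new = []
        for i, zi in enumerate(z):
            denom = 1 + 0j
            for j, zj in enumerate(z):
                if j != i:
                    denom *= zi - zj          # prod over j != i of (z_i - z_j)
            z_new.append(zi - p(zi) / denom)  # Weierstrass correction
        z = z_new
    return z

# p(x) = (x - 1)(x - 2)(x - 3); standard distinct complex starting points.
roots = weierstrass([1, -6, 11, -6], [(0.4 + 0.9j) ** k for k in range(3)])
assert all(min(abs(r - t) for t in (1, 2, 3)) < 1e-8 for r in roots)
assert abs(sum(roots) - 6) < 1e-8             # Vieta: roots sum to 6
```

The pairwise-difference denominator repels approximations from one another, which is why all roots are found at once rather than one root attracting every iterate.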
