Contraction Analysis on Primal-Dual Gradient Optimization
Abstract: This paper analyzes the contraction properties of primal-dual gradient optimization via contraction theory, in the setting of discrete-time update dynamics. A contraction theory based on Riemannian manifolds is first established for the convergence analysis of convex optimization algorithms. The equality-constrained and inequality-constrained cases are studied separately. Under reasonable assumptions, we construct a Riemannian metric that characterizes a contraction region. It is shown that if the step sizes of the update dynamics are properly designed, convergence rates for both cases can be derived from the contraction region, within which convergence is guaranteed. Moreover, a projection-free augmented Lagrangian function is adopted to handle the inequality constraints. Numerical experiments demonstrate the effectiveness of the presented contraction analysis of the primal-dual gradient optimization algorithm.
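To make the object of the analysis concrete, the following is a minimal sketch of a discrete-time primal-dual gradient update for an equality-constrained convex problem: gradient descent on the primal variable and gradient ascent on the dual variable of the Lagrangian. The specific objective, constraint, and step sizes are illustrative choices, not values taken from the paper.

```python
# Toy problem (illustrative, not from the paper):
#   minimize 1/2*(x1^2 + x2^2)  subject to  x1 + x2 = 1,
# whose solution is x* = (0.5, 0.5) with multiplier lambda* = -0.5.

def grad_f(x):
    """Gradient of f(x) = 1/2 * (x1^2 + x2^2)."""
    return [x[0], x[1]]

def primal_dual_gd(alpha=0.1, beta=0.1, iters=2000):
    x = [0.0, 0.0]           # primal iterate
    lam = 0.0                # dual iterate for the constraint a^T x = b
    a, b = [1.0, 1.0], 1.0   # constraint data: x1 + x2 = 1
    for _ in range(iters):
        g = grad_f(x)
        # primal descent step on L(x, lam) = f(x) + lam * (a^T x - b)
        x = [x[i] - alpha * (g[i] + lam * a[i]) for i in range(2)]
        # dual ascent step on the same Lagrangian
        lam = lam + beta * (a[0] * x[0] + a[1] * x[1] - b)
    return x, lam

x, lam = primal_dual_gd()
print(x, lam)  # approaches ([0.5, 0.5], -0.5)
```

With these step sizes the linearized update map has spectral radius below one, so the iterates contract toward the saddle point; the paper's contribution is to characterize such contraction regions and admissible step sizes via a Riemannian metric rather than case-by-case linearization.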