The Golden Ratio Primal-Dual Algorithm with Two New Stepsize Rules for Convex-Concave Saddle Point Problems
Abstract: In this paper, we present two stepsize rules for the extended Golden Ratio primal-dual algorithm (E-GRPDA) designed to address structured convex optimization problems in finite-dimensional real Hilbert spaces. The first rule features a nonincreasing primal stepsize that remains bounded below by a positive constant and is updated adaptively at each iteration, eliminating the need to know the Lipschitz constant of the gradient of the smooth component or the norm of the linear operator involved. The second rule is fully adaptive, adjusting the stepsizes based on the local smoothness of the smooth component and a local estimate of the operator norm; in other words, it yields an adaptive version of the E-GRPDA algorithm. Importantly, both rules avoid backtracking to estimate the operator norm. We prove that E-GRPDA achieves an ergodic sublinear convergence rate with both stepsize rules, measured by the primal-dual gap function. Additionally, we establish an R-linear convergence rate for E-GRPDA with the first stepsize rule, under standard assumptions and with appropriately chosen parameters. Through numerical experiments on various convex optimization problems, we demonstrate the effectiveness of our approaches and compare their performance with that of existing methods.
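To make the setting concrete, the sketch below shows the underlying Golden Ratio primal-dual iteration (GRPDA of Chang and Yang) applied to an illustrative LASSO-type instance, min_x lam*||x||_1 + (1/2)||Kx - b||^2, written in saddle-point form. This is an assumption-laden illustration: it uses simple fixed stepsizes computed from the operator norm, not the paper's two adaptive E-GRPDA stepsize rules, and the problem instance and all variable names are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grpda_lasso(K, b, lam, iters=5000):
    """Illustrative GRPDA for min_x lam*||x||_1 + 0.5*||Kx - b||^2.

    Saddle form: min_x max_y lam*||x||_1 + <Kx, y> - g*(y),
    where g(u) = 0.5*||u - b||^2, so prox_{sigma g*}(v) = (v - sigma*b)/(1 + sigma).
    Fixed stepsizes only; the paper's adaptive rules are not reproduced here.
    """
    m, n = K.shape
    phi = (1.0 + np.sqrt(5.0)) / 2.0        # golden ratio used in the averaging step
    L = np.linalg.norm(K, 2)                # operator norm ||K||
    # GRPDA allows tau*sigma*||K||^2 <= phi; we use the conservative choice
    # tau*sigma*||K||^2 = 1 to stay safely inside the convergence region.
    tau = sigma = 1.0 / L
    x = np.zeros(n)
    z = x.copy()                            # golden-ratio averaging sequence
    y = np.zeros(m)
    for _ in range(iters):
        z = ((phi - 1.0) * x + z) / phi                       # convex combination step
        x = soft_threshold(z - tau * (K.T @ y), tau * lam)    # primal prox step
        v = y + sigma * (K @ x)
        y = (v - sigma * b) / (1.0 + sigma)                   # dual prox step
    return x
```

On a small random instance, the iterate should drive the composite objective well below its value at the origin, matching what a long run of proximal gradient descent would return.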