A generic adaptive restart scheme with applications to saddle point algorithms
Abstract: We provide a simple and generic adaptive restart scheme for convex optimization that achieves worst-case bounds matching (up to constant multiplicative factors) those of optimal restart schemes requiring knowledge of problem-specific constants. The scheme triggers a restart whenever there is sufficient reduction of a distance-based potential function, which is always computable. We apply the scheme to obtain the first adaptive restart scheme for saddle-point methods, including primal-dual hybrid gradient (PDHG) and extragradient. The method improves the worst-case bounds of PDHG on bilinear games, and numerical experiments on quadratic assignment problems and matrix games demonstrate dramatic improvements when high-accuracy solutions are sought. Additionally, for accelerated gradient descent (AGD), the scheme obtains a worst-case bound within 60% of the bound achieved by the (unknown) optimal restart period when high accuracy is desired. In practice, the scheme is competitive with the heuristic of O'Donoghue and Candès (2015).
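The restart mechanism described in the abstract can be illustrated with a minimal sketch. The wrapper below runs an arbitrary inner iteration and restarts whenever a computable, distance-based potential has shrunk to a fraction `beta` of its value at the previous restart. The function names, the `beta` parameter, and the toy potential in the usage example are illustrative assumptions, not the paper's exact definitions (in particular, the paper's potential is computable without knowing the optimum; the toy below uses the known minimizer of a quadratic purely for demonstration).

```python
import numpy as np

def adaptive_restart(base_step, z0, potential, beta=0.5, budget=10_000):
    """Hypothetical sketch of a generic adaptive restart wrapper.

    Repeatedly applies `base_step` and triggers a restart whenever
    `potential(z)` drops to at most `beta` times its value at the
    previous restart.  In an averaged method (e.g. PDHG with iterate
    averaging), a restart would also reset the running average to the
    current candidate; that bookkeeping is omitted here for brevity.
    """
    z = z0
    p_last = potential(z0)
    restarts = 0
    for _ in range(budget):
        z = base_step(z)
        p = potential(z)
        if p <= beta * p_last:  # sufficient decrease -> restart
            p_last = p
            restarts += 1
    return z, restarts

# Toy usage: gradient descent on a strongly convex quadratic with
# minimizer at the origin, so distance-to-optimum is computable here.
A = np.diag([1.0, 10.0])
eta = 0.1
step = lambda z: z - eta * (A @ z)
dist = lambda z: float(np.linalg.norm(z))

z_final, n_restarts = adaptive_restart(step, np.array([1.0, 1.0]), dist)
```

This is only a mechanical illustration of the restart trigger; the paper's contribution lies in the specific potential function and the resulting worst-case guarantees, which the sketch does not reproduce.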