Anderson Acceleration for Distributed Constrained Optimization over Time-varying Networks
Abstract: This paper applies the Anderson Acceleration (AA) technique to the Fenchel dual gradient method (FDGM) for solving constrained optimization problems over time-varying networks. AA was originally designed to accelerate fixed-point iterations, and its direct application to FDGM faces two challenges: 1) over time-varying networks, FDGM cannot be formulated as a standard fixed-point update; 2) even if the network is fixed so that FDGM can be expressed as a fixed-point iteration, AA cannot be directly applied in a distributed manner. To overcome these challenges, we first rewrite each update of FDGM as inexactly solving several \emph{local} problems, each involving only two neighboring nodes, and then incorporate AA to solve each local problem to higher accuracy, resulting in the Fenchel Dual Gradient Method with Anderson Acceleration (FDGM-AA). To guarantee global convergence of FDGM-AA, we equip it with a newly designed safeguard scheme. Under mild conditions, our algorithm converges at a rate of $O(1/\sqrt{k})$ for the primal sequence and $O(1/k)$ for the dual sequence. The competitive performance of our algorithm is validated through numerical experiments.
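Since the abstract builds on AA for fixed-point iterations, the following is a minimal sketch of classic (Type-II) Anderson acceleration for a generic fixed-point map, not the paper's distributed FDGM-AA or its safeguard scheme; the function name and parameters (g, m, tol, max_iter) are illustrative choices.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=100):
    """Type-II Anderson acceleration for the fixed-point problem x = g(x).

    Keeps the last m residuals f_j = g(x_j) - x_j and mixes previous
    g-values so that the combined residual is minimized in least squares.
    """
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []            # histories of g-values and residuals
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x                     # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:        # keep a window of m differences
            G_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                     # plain Picard step at the start
            continue
        # Columns are successive differences of residuals and g-values.
        dF = np.column_stack([F_hist[i + 1] - F_hist[i]
                              for i in range(len(F_hist) - 1)])
        dG = np.column_stack([G_hist[i + 1] - G_hist[i]
                              for i in range(len(G_hist) - 1)])
        # Solve min_gamma ||f - dF @ gamma|| and take the mixed update.
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = gx - dG @ gamma            # accelerated update (Walker-Ni form)
    return x
```

A plain Picard iteration corresponds to m = 0; the paper's contribution is to apply such an acceleration to the local two-node subproblems inside FDGM, with a safeguard step to retain global convergence.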