
Global Convergence Analysis of the Power Proximal Point and Augmented Lagrangian Method

Published 19 Dec 2023 in math.OC (arXiv:2312.12205v2)

Abstract: In this paper we study an unconventional inexact Augmented Lagrangian Method (ALM) for convex optimization problems, first proposed by Bertsekas, wherein the penalty term is a potentially non-Euclidean norm raised to a power between one and two. We analyze the algorithm through the lens of a nonlinear Proximal Point Method (PPM), as originally introduced by Luque, applied to the dual problem. While Luque analyzes the local convergence order of the scheme with Euclidean norms, our focus is on the non-Euclidean case, which prevents us from using standard analysis tools such as the nonexpansiveness of the proximal mapping. To allow for errors in the primal update, we derive two implementable stopping criteria under which we analyze both the global and the local convergence rates of the algorithm. More specifically, we show that the method enjoys a fast sublinear global rate in general and a local superlinear rate under suitable growth assumptions. We also highlight that the power ALM can be interpreted as a classical ALM with an implicitly defined penalty-parameter schedule, reducing its parameter dependence. Our experiments on a number of relevant problems suggest that for certain powers the method performs similarly to a classical ALM with a fine-tuned adaptive penalty rule, despite involving fewer parameters.
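The power proximal point idea underlying the method can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's algorithm: the 1-D setting, the Euclidean absolute-value penalty, the toy objective `f`, and the ternary-search subproblem solver (the paper treats non-Euclidean norms and inexact primal updates via the dual/ALM correspondence):

```python
# Toy 1-D power Proximal Point Method (PPM) step:
#   x_{k+1} = argmin_y  f(y) + |y - x_k|**p / (p * lam),  with 1 < p < 2.
# The subproblem objective g is convex and unimodal in y, so an exact
# solve via ternary search suffices for this sketch.

def power_prox_step(f, x, lam=5.0, p=1.5, lo=-10.0, hi=10.0, iters=200):
    """One power-PPM step with a norm-to-the-power-p penalty."""
    g = lambda y: f(y) + abs(y - x) ** p / (p * lam)
    for _ in range(iters):  # shrink the bracket around the minimizer of g
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

f = lambda y: (y - 3.0) ** 2  # toy strongly convex objective, minimizer at 3
x = 0.0
for _ in range(30):
    x = power_prox_step(f, x)
# the iterates approach the minimizer of f
```

Setting `p = 2.0` recovers the classical quadratic proximal step; powers strictly between one and two change how strongly small moves are penalized, which is what induces the implicit penalty-parameter schedule mentioned in the abstract.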
