Complexity of linearized quadratic penalty for optimization with nonlinear equality constraints

Published 23 Feb 2024 in math.OC (arXiv:2402.15639v2)

Abstract: In this paper we consider a nonconvex optimization problem with nonlinear equality constraints. We assume that both the objective function and the functional constraints are locally smooth. To solve this problem, we propose a linearized quadratic penalty method: we linearize the objective function and the functional constraints in the penalty formulation at the current iterate and add a quadratic regularization, yielding a subproblem that is easy to solve and whose solution is the next iterate. Under a new adaptive choice of the regularization parameter, we prove that the iterates of this method reach an $\epsilon$ first-order optimal solution within $\mathcal{O}(\epsilon^{-2.5})$ iterations. Finally, we show that when the problem data satisfy the Kurdyka-Łojasiewicz (KL) property, e.g., are semialgebraic, the whole sequence generated by the proposed algorithm converges, and we derive improved local convergence rates depending on the KL parameter. We validate the theory and the performance of the proposed algorithm by numerical comparisons with existing methods from the literature.
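To make the subproblem structure concrete, here is a minimal sketch of one linearized quadratic penalty step; it is an illustration under assumptions, not the paper's exact algorithm. At the current iterate $x$, the objective $f$ and the constraint map $h$ are linearized inside the quadratic penalty and a proximal term is added, giving the strongly convex quadratic subproblem $\min_d \nabla f(x)^T d + \frac{\beta}{2}\|h(x) + J(x)d\|^2 + \frac{\rho}{2}\|d\|^2$, solved here via its normal equations. The fixed penalty parameter `beta`, fixed regularization `rho`, and iteration count below are illustrative choices, not the adaptive regularization rule analyzed in the paper.

```python
import numpy as np

def lqp_step(x, grad_f, h, jac_h, beta, rho):
    """One linearized quadratic penalty step (illustrative, fixed parameters).

    Solves  min_d  grad_f(x)^T d + (beta/2)||h(x) + J d||^2 + (rho/2)||d||^2,
    whose optimality condition is the linear system
        (beta * J^T J + rho * I) d = -(grad_f(x) + beta * J^T h(x)).
    """
    g = grad_f(x)
    hx = h(x)
    J = jac_h(x)
    A = beta * (J.T @ J) + rho * np.eye(x.size)
    b = -(g + beta * (J.T @ hx))
    d = np.linalg.solve(A, b)
    return x + d

# Toy instance: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0, optimum at (0.5, 0.5).
grad_f = lambda x: 2.0 * x
h = lambda x: np.array([x[0] + x[1] - 1.0])
jac_h = lambda x: np.array([[1.0, 1.0]])

x = np.zeros(2)
for _ in range(100):
    x = lqp_step(x, grad_f, h, jac_h, beta=100.0, rho=1.0)

print(np.round(x, 3))  # ≈ [0.495 0.495]: the beta=100 penalty minimizer, near (0.5, 0.5)
```

With a fixed `beta`, the iterates converge to the minimizer of the quadratic penalty function, which only approximates the constrained solution; driving `beta` up (or choosing it adaptively, as in the paper) tightens constraint feasibility.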
