Augmented Lagrangian methods for infeasible convex optimization problems and diverging proximal-point algorithms

Published 27 Jun 2025 in math.OC and math.NA | (2506.22428v1)

Abstract: This work investigates the convergence behavior of augmented Lagrangian methods (ALMs) when applied to convex optimization problems that may be infeasible. ALMs are a popular class of algorithms for solving constrained optimization problems. We establish progressively stronger convergence results, ranging from basic sequence convergence to precise convergence rates, under a hierarchy of assumptions. In particular, we demonstrate that, under mild assumptions, the sequences of iterates generated by ALMs converge to solutions of the ``closest feasible problem''. This study leverages the classical relationship between ALMs and the proximal-point algorithm applied to the dual problem. A key technical contribution is a set of concise results on the behavior of the proximal-point algorithm when applied to functions that may not have minimizers. These results pertain to its convergence in terms of its subgradients and of the values of the convex conjugate.

Summary

  • The paper establishes that augmented Lagrangian methods converge to solutions of the closest feasible problem even when the convex optimization problem is infeasible.
  • It leverages the connection between ALMs and proximal-point algorithms to analyze the dual problem and derive precise convergence rates.
  • The study extends robustness claims for ALM in convex settings, providing practical insights for implementation in areas like machine learning and control systems.

Augmented Lagrangian Methods for Infeasible Convex Optimization Problems

Introduction

The paper "Augmented Lagrangian methods for infeasible convex optimization problems and diverging proximal-point algorithms" studies augmented Lagrangian methods (ALMs) for convex optimization problems that may have no feasible points. ALMs are widely used in constrained optimization; the paper investigates their behavior when the feasible set is empty and provides convergence results under a hierarchy of assumptions.

Key Contributions and Results

  1. Convergence Properties: The paper establishes convergence results from basic sequence convergence to precise rates under different assumptions, demonstrating that ALMs converge to solutions of the "closest feasible problem."
  2. Relationship with Proximal-Point Algorithm: It leverages the connection between ALMs and the proximal-point algorithm applied to the dual problem, offering new insights into their behavior when minimizers are absent.
  3. Convergence to Closest Feasible Problem: It provides conditions under which ALMs converge to this problem, defined by minimizing the objective value over the points that achieve the minimal constraint violation.
  4. Augmented Lagrangian Method (ALM) Formulation: ALM for general convex optimization problems is formalized, showing equivalence to proximal point steps applied to their duals.
  5. Behavior in Infeasible Scenarios: The paper explores the infeasibility scenarios and extends the robustness of ALM against such conditions.
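
The "closest feasible problem" in items 1 and 3 can be made concrete with a slack variable $s$ measuring the constraint violation, so that feasibility corresponds to $s = 0$. The following formulation is a sketch paraphrasing the summary; the paper's exact notation may differ:

$$s^\star \in \operatorname*{argmin}_{s}\ \big\{\, \|s\| \ :\ (x, s) \in \mathcal{C} \ \text{for some } x \,\big\}, \qquad \min_{x}\ \big\{\, f(x) \ :\ (x, s^\star) \in \mathcal{C} \,\big\}.$$

If the original problem is feasible, $s^\star = 0$ and the closest feasible problem coincides with the original one.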

Practical Implementation

The implementation of the concepts from this paper involves:

  • Lagrangian Construction: For a convex optimization problem that may be infeasible, the augmented Lagrangian takes the form

$L_\gamma(x, s, \lambda) = f(x) + \delta_{\mathcal{C}}(x,s) - \langle \lambda, s \rangle + \frac{\gamma}{2} \|s\|^2.$

Here, $\mathcal{C}$ denotes the constraint set for the pair $(x, s)$, and $f$ is a closed convex objective function.

  • Iterative Minimization Procedure: At each ALM iteration, solve the subproblem for $(x^{k+1}, s^{k+1})$ minimizing $L_\gamma$, then update the dual variables via $\lambda^{k+1} = \lambda^k - \gamma_k s^{k+1}$.
  • Proximal Point Reformulation: The ALM iterations can be recast as proximal-point iterations on the dual function, which underpins the dual analysis.
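
The steps above can be illustrated on a deliberately infeasible equality-constrained quadratic program, $\min \tfrac{1}{2}\|x\|^2$ subject to $Ax = b$ with $b$ outside the range of $A$. Taking the slack as $s = b - Ax$, the dual update matches $\lambda^{k+1} = \lambda^k - \gamma s^{k+1}$. This is a minimal sketch with a fixed penalty $\gamma$ and a closed-form subproblem, not the paper's general algorithm:

```python
import numpy as np

def alm(A, b, gamma=1.0, iters=60):
    """ALM for min 0.5*||x||^2  s.t.  A x = b  (possibly infeasible).

    Subproblem: x^{k+1} = argmin_x 0.5||x||^2 + <lam, A x - b>
                                   + (gamma/2)||A x - b||^2,
    solved in closed form; the dual update uses the slack s = b - A x.
    """
    m, n = A.shape
    lam = np.zeros(m)
    H = np.eye(n) + gamma * A.T @ A           # Hessian of the subproblem
    for _ in range(iters):
        x = np.linalg.solve(H, A.T @ (gamma * b - lam))
        s = b - A @ x                          # constraint violation (slack)
        lam = lam - gamma * s                  # dual update
    return x, lam

# Infeasible instance: A x = (x1, x1) can never equal b = (0, 2).
A = np.array([[1.0, 0.0],
              [1.0, 0.0]])
b = np.array([0.0, 2.0])

x, lam = alm(A, b)
# The closest feasible problem replaces b by its projection (1, 1) onto
# range(A); its solution is x = (1, 0), which the primal iterates approach
# even though the dual iterates diverge componentwise.
print(x)
```

Printing the individual dual components across iterations shows them drifting to $\pm\infty$ while the primal iterates settle, which is exactly the "diverging proximal-point" behavior the title refers to.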

Assumptions and Conditions

The convergence results and robustness claims rely on several critical assumptions:

  • A hierarchy of assumptions of increasing strength, the strongest involving subdifferentiability conditions.
  • Error sequences $(\varepsilon_k)_{k\in\mathbb{N}}$ that vanish sufficiently fast.
  • Step-size requirements and error controls ensuring the conditions for the inexact ALM (IALM) are met.
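
One standard way to make such an error control precise, familiar from the inexact proximal-point literature, is to require each subproblem to be solved to accuracy $\varepsilon_k$ with summable errors; this is a sketch of the kind of condition meant here, not the paper's exact criterion:

$$\operatorname{dist}\!\big(0,\ \partial_{(x,s)} L_{\gamma_k}(x^{k+1}, s^{k+1}, \lambda^k)\big) \le \varepsilon_k, \qquad \sum_{k\in\mathbb{N}} \varepsilon_k < \infty.$$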

Implications and Future Work

This research offers significant insights into employing ALM for practical constrained optimization problems, particularly in scenarios with potential infeasibility. Theoretical guarantees for convergence, even without feasible solutions, increase the applicability of ALM in fields like machine learning and control systems. Future work could involve empirical validation across diverse infeasible optimization scenarios or extending theoretical foundations to broader classes of non-convex problems.

Conclusion

The study advances the understanding of augmented Lagrangian methods by extending their application to infeasible cases and establishing robust convergence properties connected to the proximal-point methodologies. Such work sets a foundation for addressing optimization problems in emerging fields that require robustness against infeasibility and constraint violations.
