
Constrained Nonlinear LOVO Problem

Updated 28 November 2025
  • The LOVO problem is defined as minimizing the minimum among several continuously differentiable black-box functions over a closed convex set, ensuring robust solutions under uncertain conditions.
  • It is reformulated using bilevel programming, MPCC techniques, and smoothing approaches, yielding precise optimality conditions such as L-stationarity derived via conic duality.
  • Derivative-free trust-region algorithms achieve global convergence with an O(ε⁻²) complexity, making them effective for robust estimation and applications such as protein alignment and portfolio optimization.

A constrained nonlinear Low Order-Value Optimization (LOVO) problem involves minimizing the minimum among a finite number of continuously differentiable function values, each typically accessible only as a black-box (derivative-free setting), within a nonempty closed convex constraint set. LOVO problems are fundamental in robust parameter estimation, protein alignment, portfolio optimization, and other areas where robustness to outliers and min-structure are essential. The constrained LOVO formulation, its associated optimality conditions, reformulations, algorithms, and complexity theory are presented with particular attention to rigorous, modern developments.

1. Mathematical Formulation of the Constrained LOVO Problem

The classical constrained nonlinear LOVO problem can be stated as
$$\min_{x \in \Omega \subset \mathbb{R}^n} F(x) := \min_{1 \leq i \leq r} f_i(x),$$
where

  • $\Omega$ is a nonempty, closed, convex subset of $\mathbb{R}^n$,
  • Each $f_i: \mathbb{R}^n \rightarrow \mathbb{R}$ is continuously differentiable with Lipschitz gradient on an open set containing $\Omega$,
  • The functions $f_i$ are accessed as black-box routines (i.e., only function values are available).
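
As a concrete illustration of this black-box access pattern, the following minimal Python sketch (not from the cited papers; the function name `lovo_value` and the tie tolerance `tol` are illustrative) evaluates $F(x)$ and the active index set $I(x) = \{i : f_i(x) = F(x)\}$ from function values alone:

```python
def lovo_value(fs, x, tol=1e-12):
    """Evaluate F(x) = min_i f_i(x) and the active index set I(x).

    fs  : list of black-box callables f_i (values only, no derivatives)
    tol : tolerance for deciding ties in the active set
    """
    values = [f(x) for f in fs]      # one evaluation per component
    fmin = min(values)
    active = [i for i, v in enumerate(values) if v - fmin <= tol]
    return fmin, active

# Toy instance with r = 3 one-dimensional components.
fs = [lambda x: (x - 1.0) ** 2,
      lambda x: (x + 1.0) ** 2,
      lambda x: x ** 2 + 0.5]
F, I = lovo_value(fs, 0.0)           # F = 0.5, active set I = [2]
```

Nothing beyond function evaluations is required, which is exactly the derivative-free setting assumed above.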

A generalized LOVO model, particularly when focusing on feasibility in possibly inconsistent problems, is given via a constraint violation measure
$$v(x) := \sum_{i=1}^m \max\{0,\, g_i(x)\} + \sum_{j=1}^p |h_j(x)|,$$
where $g_i(x) \le 0$ and $h_j(x) = 0$ represent the constraints. The least-violation set is

$$X^* := \mathop{\mathrm{Argmin}}_{x \in \mathbb{R}^n} v(x).$$

The (generalized) LOVO problem optimizes $f(x)$ over $X^*$: $\min_{x \in X^*} f(x)$. If the original constraints admit feasible points, this reduces to a standard constrained nonlinear program; otherwise, LOVO provides minimum-violation solutions (Dai et al., 2020).
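
A minimal sketch of the violation measure (names illustrative), applied to a deliberately inconsistent pair of inequality constraints:

```python
def violation(x, gs, hs):
    """v(x) = sum_i max(0, g_i(x)) + sum_j |h_j(x)| for g_i(x) <= 0, h_j(x) = 0."""
    return (sum(max(0.0, g(x)) for g in gs)
            + sum(abs(h(x)) for h in hs))

# Inconsistent constraints x <= -1 and x >= 1, written as g_i(x) <= 0.
gs = [lambda x: x + 1.0,     # x <= -1
      lambda x: 1.0 - x]     # x >= 1
v0 = violation(0.0, gs, [])  # = 2.0; in fact v(x) = 2 on all of [-1, 1]
```

Here the least-violation set is $X^* = [-1, 1]$ with $v^* = 2$, and the generalized LOVO problem then minimizes $f$ over that interval.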

2. Reformulations and Optimality Conditions

The constrained LOVO problem admits reformulation as a bilevel or equilibrium-constrained program. For conic and affine constraints, define the squared-distance infeasibility measure
$$\theta(x) := \min_{z \in K} \frac{1}{2}\|g(x) + z\|^2, \quad \text{where } K \text{ is a closed convex cone.}$$
Then $X^* = \arg\min_{x \in \mathbb{R}^n} \theta(x)$. Standard conic duality leads to a system characterizing $X^*$:
$$\nabla g(x)^T y = 0, \qquad g(x) + z = 0, \qquad y \in K^*,\ z \in K,\ \langle y, z \rangle = 0,$$
where $K^*$ is the dual cone. This yields an MPCC (Mathematical Program with Complementarity Constraints) formulation:
$$\min_{x, y, z} f(x) \quad \text{subject to} \quad F(x, y, z) = 0, \quad (y, z) \in K^* \times K, \quad \langle y, z \rangle = 0,$$
where $F(x, y, z)$ stacks the equations of the duality system above. The relevant optimality notion is L-stationarity (from Clarke's theory), which generalizes KKT conditions to nonsmooth, Lipschitz-constrained settings. At a solution $(x^*, y^*, z^*)$, existence of Lagrange multipliers and dual variables, even in the presence of complementarity, ensures generalized stationarity (Dai et al., 2020).
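
For the common special case $K = \mathbb{R}^m_+$ (componentwise inequality constraints $g(x) \le 0$), the inner minimization defining $\theta$ has a closed form, since the optimal slack is $z_i = \max(0, -g_i(x))$. A small sketch under that assumption:

```python
def theta_orthant(g_vals):
    """theta(x) = min_{z >= 0} 0.5 * ||g(x) + z||^2 for K the nonnegative orthant.

    The componentwise minimizer is z_i = max(0, -g_i), so only violated
    inequalities (g_i > 0) contribute: theta = 0.5 * sum(max(0, g_i)^2).
    """
    return 0.5 * sum(max(0.0, gi) ** 2 for gi in g_vals)

theta_orthant([1.0, -2.0, 3.0])  # = 0.5 * (1 + 0 + 9) = 5.0
```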

3. Derivative-Free Trust-Region Algorithm for Convex-Constrained LOVO

A derivative-free trust-region framework for the constrained LOVO problem targets settings where each $f_i$ is accessible only via function evaluations. The method maintains, at each iteration $k$:

  • The current iterate $x_k \in \Omega$,
  • A model $m_k$ (typically linear or quadratic interpolation) for some active $f_{i_k}$ corresponding to $i_k \in I(x_k) = \{i : f_i(x_k) = F(x_k)\}$,
  • Radii $\delta_k$ (sampling/model accuracy) and $\Delta_k$ (trust region).

The main iteration consists of:

  • Model construction: $m_k(d) = b_k + g_k^T d + \frac{1}{2} d^T H_k d$ over a neighborhood $B(x_k, \delta_k)$ such that

$$\|\nabla f_{i_k}(x) - \nabla m_k(x)\| \leq \kappa_g \delta_k.$$

  • Trust-region subproblem: Approximately solve

$$\min_d\, m_k(x_k + d) \quad \text{subject to} \quad x_k + d \in \Omega,\ \|d\| \leq \Delta_k,$$

and require sufficient decrease:

$$m_k(x_k) - m_k(x_k + d_k) \geq \theta\, \pi_k \min\{\pi_k/\kappa_H,\, \Delta_k,\, 1\},$$

with $\pi_k = \|P_\Omega(x_k - g_k) - x_k\|$, where $P_\Omega$ denotes the Euclidean projection onto $\Omega$.

  • Ratio test and step acceptance: Compute

$$\rho_k = \frac{F(x_k) - F(x_k + d_k)}{m_k(x_k) - m_k(x_k + d_k)},$$

and update $x_{k+1}$, $\Delta_k$, and $\delta_k$ by standard rules (Schwertner et al., 25 Nov 2025).
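
The three steps above can be sketched for the simplest case, a one-dimensional box $\Omega = [lo, hi]$ with a finite-difference linear model; the constants and radius-update factors below are illustrative, not those of the cited algorithm:

```python
import math

def proj(x, lo, hi):
    """Euclidean projection P_Omega onto the box Omega = [lo, hi]."""
    return min(max(x, lo), hi)

def tr_lovo_1d(fs, x, lo, hi, Delta=1.0, delta=1e-3,
               eta=0.1, max_iter=500, tol=1e-8):
    """Illustrative 1-D derivative-free trust-region loop for F(x) = min_i f_i(x)."""
    F = lambda y: min(f(y) for f in fs)
    for _ in range(max_iter):
        ik = min(range(len(fs)), key=lambda i: fs[i](x))   # active component i_k
        g = (fs[ik](x + delta) - fs[ik](x)) / delta        # model gradient g_k
        if abs(proj(x - g, lo, hi) - x) <= tol:            # pi_k: criticality measure
            break
        d = proj(x - math.copysign(Delta, g), lo, hi) - x  # linear-model TR step
        pred = -g * d                                      # predicted decrease
        rho = (F(x) - F(x + d)) / pred if pred > 0 else -1.0
        if rho >= eta:
            x, Delta = x + d, 2.0 * Delta                  # accept and expand
        else:
            Delta, delta = 0.5 * Delta, 0.5 * delta        # reject and refine
    return x, F(x)

fs = [lambda y: (y - 1.0) ** 2, lambda y: (y + 2.0) ** 2]
x_star, F_star = tr_lovo_1d(fs, 3.0, 0.0, 3.0)  # converges to x = 1, F = 0
```

Shrinking $\delta_k$ on rejected steps improves the model gradient together with the trust region, which is what eventually drives the projected-gradient measure $\pi_k$ below the tolerance.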

This algorithm converges globally under minimal regularity, with accumulation points being weakly critical. The stationarity measure is the norm of the projected gradient for some $i \in I(x^*)$.

4. Stationarity Concepts and Theoretical Guarantees

  • Weak criticality: $x^*$ is weakly critical if there exists $i \in I(x^*)$ with $P_\Omega(x^* - \nabla f_i(x^*)) = x^*$.
  • Strong criticality: the same condition holds for all $i \in I(x^*)$.

For iterates $\{x_k\}$ generated by the algorithm,

$$\liminf_{k \to \infty} \|P_\Omega(x_k - \nabla f_{i_k}(x_k)) - x_k\| = 0,$$

and any subsequential limit point $x^*$ is weakly critical. The $O(\epsilon^{-2})$ worst-case iteration complexity to reach $\epsilon$-criticality matches the best rates for smooth, derivative-free trust-region algorithms (Schwertner et al., 25 Nov 2025).
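
A weak-criticality test follows directly from the definition. The sketch below assumes a box constraint and that component gradients are supplied (or approximated); all names are illustrative:

```python
def is_weakly_critical(fs, grads, x, lo, hi, eps=1e-8, tol=1e-12):
    """True if P_Omega(x - grad f_i(x)) = x (within eps) for some active i."""
    vals = [f(x) for f in fs]
    fmin = min(vals)
    active = [i for i, v in enumerate(vals) if v - fmin <= tol]
    proj = lambda y: min(max(y, lo), hi)
    return any(abs(proj(x - grads[i](x)) - x) <= eps for i in active)

fs = [lambda x: (x - 1.0) ** 2, lambda x: (x + 1.0) ** 2]
grads = [lambda x: 2.0 * (x - 1.0), lambda x: 2.0 * (x + 1.0)]
is_weakly_critical(fs, grads, 1.0, -2.0, 2.0)  # True: f_1 is active with zero gradient
is_weakly_critical(fs, grads, 0.0, -2.0, 2.0)  # False: both active, neither stationary
```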

5. Reformulations via Lipschitz-Constrained and Dynamical Systems Approaches

The LOVO setup is highly adaptable to infeasible or inconsistent constraints. Reformulation as a Lipschitz equality-constrained problem (via squared infeasibility, as above) leads to practical algorithms based on smoothing and penalization. The smoothing Fischer–Burmeister (sFB) approach replaces the non-differentiable complementarity condition with a continuously differentiable proxy,
$$\Phi_\epsilon(a, b) := a + b - \sqrt{a^2 + b^2 + 2\epsilon^2}, \quad \epsilon > 0,$$
converging to the MPCC solution as $\epsilon \to 0$. Convergence to L-stationary points is established under standard conditions (Dai et al., 2020).
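
The smoothed complementarity function is a single expression. Note that $\Phi_\epsilon(a, b) = 0$ with $a + b > 0$ is equivalent to $a, b > 0$ and $ab = \epsilon^2$, so the smoothing relaxes $ab = 0$ to $ab = \epsilon^2$:

```python
import math

def phi_fb(a, b, eps):
    """Smoothed Fischer-Burmeister function Phi_eps(a, b)."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * eps * eps)

phi_fb(2.0, 0.0, 0.0)  # = 0.0: (a, b) is a complementary pair
phi_fb(1.0, 1.0, 0.0)  # = 2 - sqrt(2) > 0: not complementary
phi_fb(0.1, 0.1, 0.1)  # ~ 0.0: a * b = eps^2, a root of the smoothed system
```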

A distinct dynamical systems perspective transforms the original constrained NLP into an autonomous ODE

$$\dot{x} = -P(x)\,\nabla f(x),$$

where $P(x)$ is a (possibly pseudo-inverse based) projection onto the tangent space of the active constraints, yielding asymptotic convergence to KKT points under compactness and standard regularity assumptions (Zhang et al., 2018). This treatment provides analytic multipliers along the entire trajectory, even in the presence of dependent active constraints.
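
A minimal sketch of this flow for a single active linear constraint $a^T x = b$, where the tangent projection is $P = I - aa^T/(a^T a)$; explicit Euler is used purely for illustration (the cited work recommends implicit stiff solvers):

```python
def tangent_projection(a):
    """P = I - a a^T / (a^T a): projection onto the tangent space {d : a^T d = 0}."""
    n, s = len(a), sum(ai * ai for ai in a)
    return [[(1.0 if i == j else 0.0) - a[i] * a[j] / s for j in range(n)]
            for i in range(n)]

def euler_flow(grad_f, a, x, h=0.01, steps=5000):
    """Explicit-Euler integration of xdot = -P grad f(x) (illustrative only)."""
    P = tangent_projection(a)  # constant: the single linear constraint stays active
    for _ in range(steps):
        g = grad_f(x)
        x = [xi - h * sum(P[i][j] * g[j] for j in range(len(x)))
             for i, xi in enumerate(x)]
    return x

# min 0.5 * ||x||^2 subject to x_1 + x_2 = 1; the KKT point is (0.5, 0.5).
x = euler_flow(lambda x: x, [1.0, 1.0], [1.0, 0.0])
```

The trajectory stays on the active manifold by construction and converges to the KKT point $(0.5, 0.5)$.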

6. Numerical Methods and Software Implementations

  • LOWDER: An open-source Julia package implementing the derivative-free trust-region method for LOVO. It solves trust-region subproblems using TRSBOX/ALTMOV (from BOBYQA), maintains well-poised interpolation sets, and manages the function-evaluation budget via inexact ratio tests. Stopping criteria are based on minimal radii and lack of progress (Schwertner et al., 25 Nov 2025).
  • Smoothing FB: Algorithms implement smoothing parameter reduction with off-the-shelf NLP solvers for the relaxed problems, yielding convergence to L-stationary points as smoothing vanishes (Dai et al., 2020).
  • ODE Integration: Recommended solvers are variable-step implicit methods suited to stiff dynamics (e.g., Radau IIA, MATLAB’s ode15s), with constraint-activation logic defining the vector field (Zhang et al., 2018).

Comparison on test suites (Moré–Wild, HS, and synthetic QD problems) with MS-P and NOMAD shows that specialized LOVO approaches, particularly those exploiting the min-structure, can deliver robustness and efficiency, especially as the number of min-components grows (Schwertner et al., 25 Nov 2025).

7. Complexity, Convergence, and Practical Considerations

  • Iteration and sample complexity: The number of successful iterations to reach $\epsilon$-criticality is $O(\epsilon^{-1})$; the total iteration count is $O(\epsilon^{-2})$. The evaluation complexity with linear interpolation models is $O((n+r)\,n^3\,\epsilon^{-2})$ (Schwertner et al., 25 Nov 2025).
  • Assumptions: Continuity, Lipschitz gradients, compactness of the constraint set, well-poised interpolation sets, and, for smoothing schemes, regularity of the constraint Jacobians at all smoothing levels.
  • Applicability: The general MPCC/smoothing and trust-region frameworks are robust to constraint inconsistency and black-box function access. For feasible problems, solutions coincide with classical NLP solutions.

The LOVO paradigm thus unifies robust constrained optimization, equilibrium-constrained reformulations, and derivative-free computation within a rigorous, provably convergent theoretical and algorithmic infrastructure (Schwertner et al., 25 Nov 2025, Dai et al., 2020, Zhang et al., 2018).
