
Path-Following Line Search in Optimization

Updated 31 January 2026
  • Path-following line search searches along the continuous, piecewise-linear path obtained by projecting the trial points $x + \alpha p$ onto the feasible region; the path's slope changes as variables reach their bounds.
  • It employs quasi-Wolfe conditions to handle nondifferentiability and ensure sufficient decrease, thereby reducing the number of function and derivative evaluations.
  • The method accelerates active-set identification in both active-set and interior-point frameworks, achieving 2–5× improvements over standard Armijo-only strategies.

Path-following line search methods for bound-constrained optimization involve performing a search along continuous, piecewise-linear paths formed by projecting a search direction onto the feasible region. These techniques enable efficient and robust optimization when explicit bounds are imposed on the variables, as standard smooth line search methods such as Wolfe cannot be directly applied due to nondifferentiability of the objective function along the projected search path. Projected-search strategies, particularly those leveraging quasi-Wolfe line search conditions, demonstrate strong performance in both active-set and interior-point frameworks by accelerating active-set identification and reducing the number of required function and derivative evaluations (Ferry et al., 2021).

1. Projected-Search Path Definition

Consider the bound-constrained optimization problem

$$\min_{x \in \Re^n}\ f(x) \quad \text{subject to} \quad \ell \le x \le u$$

where $\ell, u \in \Re^n$ are vectors specifying the lower and upper bounds. For a feasible point $x$ and any search direction $p$, the projected-search path is defined by

$$x(\alpha) = \Pi(x + \alpha p)$$

where $[\Pi(y)]_i = \min\{u_i, \max\{\ell_i, y_i\}\}$, so each $x_i(\alpha)$ moves linearly with $\alpha$ until it hits the corresponding bound, at which point it remains fixed. The scalar values

$$\kappa_i = \begin{cases} (\ell_i - x_i)/p_i, & p_i < 0 \\ (u_i - x_i)/p_i, & p_i > 0 \\ +\infty, & p_i = 0 \end{cases}$$

indicate the "kink points" where the path changes slope. Between consecutive kink points, the path $x(\alpha)$ is linear in the remaining free variables. At each breakpoint $\alpha = \kappa_{(j)}$ (the $j$-th smallest kink value), one more variable reaches its bound, producing a continuous, piecewise-linear curve.
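The path and its kink points are simple to compute; the following sketch uses plain Python (the function names `project`, `path_point`, and `kink_points` are illustrative, not from the source):

```python
import math

def project(y, lo, hi):
    """Componentwise projection of y onto the box [lo, hi]."""
    return [min(h, max(l, yi)) for yi, l, h in zip(y, lo, hi)]

def path_point(x, p, alpha, lo, hi):
    """Point x(alpha) = Pi(x + alpha * p) on the projected-search path."""
    return project([xi + alpha * pi for xi, pi in zip(x, p)], lo, hi)

def kink_points(x, p, lo, hi):
    """Step lengths kappa_i at which each component reaches its bound."""
    kappas = []
    for xi, pi, l, h in zip(x, p, lo, hi):
        if pi < 0:
            kappas.append((l - xi) / pi)   # heading toward lower bound
        elif pi > 0:
            kappas.append((h - xi) / pi)   # heading toward upper bound
        else:
            kappas.append(math.inf)        # component never hits a bound
    return kappas
```

For example, with $x = (0.5, 0.5)$ and $p = (1, -2)$ in the unit box, the kink points are $(0.5, 0.25)$; beyond $\alpha = 0.25$ the second component stays fixed at its lower bound.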

2. Quasi-Wolfe Line Search Conditions

For projected-search trajectories, the objective function restricted to the path is

$$\phi(\alpha) = f(x(\alpha))$$

which is piecewise differentiable, with one-sided derivatives

$$\phi'_+(\alpha) = \lim_{h \downarrow 0} \frac{\phi(\alpha + h) - \phi(\alpha)}{h}, \qquad \phi'_-(\alpha) = \lim_{h \uparrow 0} \frac{\phi(\alpha + h) - \phi(\alpha)}{h}$$

The quasi-Wolfe step $\alpha$ is accepted if it meets the following criteria:

  • (C1) Quasi-Armijo: $\phi(\alpha) \leq \phi(0) + \beta\,\alpha\,\phi'(0)$
  • At least one of:
    • (C2): $|\phi'_+(\alpha)| \leq \sigma\,|\phi'(0)|$
    • (C3): $|\phi'_-(\alpha)| \leq \sigma\,|\phi'(0)|$
    • (C4): $\phi$ is nondifferentiable at $\alpha$ and $\phi'_-(\alpha) \leq 0 \leq \phi'_+(\alpha)$, i.e., $\alpha$ is a local minimizer located at a kink

where $0 < \beta < \sigma < 1$ are the sufficient-decrease and curvature parameters.
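As a rough illustration, the acceptance test can be checked numerically; the sketch below estimates the one-sided derivatives by finite differences and treats a kink as acceptable when it brackets a local minimizer ($\phi'_-(\alpha) \le 0 \le \phi'_+(\alpha)$). The step `h`, the jump-based kink test, and the default parameter values are illustrative choices, not taken from the source:

```python
def one_sided_derivs(phi, alpha, h=1e-6):
    """Forward/backward difference estimates of phi'_+(alpha), phi'_-(alpha)."""
    d_plus = (phi(alpha + h) - phi(alpha)) / h
    d_minus = (phi(alpha) - phi(alpha - h)) / h
    return d_plus, d_minus

def quasi_wolfe(phi, alpha, beta=1e-4, sigma=0.9, tol=1e-8):
    """Accept alpha if quasi-Armijo (C1) holds together with one of (C2)-(C4)."""
    d0, _ = one_sided_derivs(phi, 0.0)            # initial slope phi'(0)
    c1 = phi(alpha) <= phi(0.0) + beta * alpha * d0
    d_plus, d_minus = one_sided_derivs(phi, alpha)
    c2 = abs(d_plus) <= sigma * abs(d0)
    c3 = abs(d_minus) <= sigma * abs(d0)
    at_kink = abs(d_plus - d_minus) > tol         # slope jump: nondifferentiable
    c4 = at_kink and d_minus <= 0.0 <= d_plus
    return c1 and (c2 or c3 or c4)
```

On $\phi(\alpha) = |\alpha - 1|$, the step $\alpha = 1$ is accepted via (C4) (the kink is a local minimizer), while $\alpha = 0.1$ fails every curvature test.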

When $\phi$ is smooth, these conditions reduce to the classical Wolfe criteria. A shifted residual

$$\omega(\alpha) = \phi(\alpha) - [\phi(0) + \alpha\,\phi'(0)]$$

and its one-sided derivatives $\omega'_+,\ \omega'_-$ are also employed in identifying suitable steps.

3. Algorithmic Workflow and Search Acceleration

The quasi-Wolfe line search is implemented via a two-stage algorithmic workflow:

  • Stage One: Backtracking search with step sizes $\alpha_j = \min\{\beta^{-j}\,\alpha_{\text{hi}},\ \alpha_{\max}\}$, evaluating $\phi(\alpha_j)$ and $\phi'_+(\alpha_j)$. If a step satisfies (C1) together with one of (C2)–(C4), it is accepted; otherwise, when function values plateau or fail to decrease, the algorithm transitions to stage two.
  • Stage Two: Within the bracketing interval $[\alpha_{\text{lo}}, \alpha_{\text{hi}}]$, a more refined search is performed, exploiting kink-based selection (testing the nearest kink point) or safeguarded interpolation (quadratic or cubic) using the available function and derivative values.

Practical implementations maintain a sorted list of kink points (built with an $O(n\log n)$ heapsort), testing each for satisfaction of the quasi-Wolfe conditions before resorting to interpolation. This approach exploits the structure of the piecewise-linear path, accelerating convergence and reducing computational overhead.
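A minimal sketch of the kink-scanning step, assuming an external acceptance predicate `accept` (e.g. a quasi-Wolfe check); the function `scan_kinks` and its interface are illustrative, not from the source:

```python
import heapq
import math

def scan_kinks(phi, kinks, accept):
    """Heap-sort the finite positive kink points and test them nearest-first
    against an acceptance predicate; return the first accepted step, or None
    to signal that the caller should bracket and interpolate instead."""
    heap = [k for k in kinks if 0.0 < k < math.inf]
    heapq.heapify(heap)                 # O(n) build; pops give O(n log n) total
    while heap:
        k = heapq.heappop(heap)         # nearest remaining kink
        if accept(phi, k):
            return k
    return None                         # no kink qualifies: interpolate
```

A `None` return corresponds to falling back to the safeguarded-interpolation branch of stage two.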

4. Comparative Analysis with Armijo and Wolfe Searches

Armijo-only backtracking applies only condition (C1), resulting in frequent excessive step size reductions since it fails to account for curvature changes when the path direction is altered by hitting bounds. Classical Wolfe search requires differentiability and reliably enforces positive curvature, ensuring that quasi-Newton updating is well-posed. In contrast, the quasi-Wolfe conditions inherit the interpolation advantages from standard Wolfe searches, enabling tight curvature control even on nondifferentiable, piecewise-linear paths (Ferry et al., 2021).

Observed empirical performance shows that the quasi-Wolfe search dramatically reduces the number of function and derivative evaluations necessary compared to Armijo-only methods, with speed-ups of 2–5× on large test sets. This efficiency and robustness are especially pronounced in large-scale, bound-constrained optimization problems.

5. Key Convergence Properties

Major theoretical results establish the robustness of path-following line searches:

  • Theorem A (Active set + quasi-Armijo): For smooth $f$ and bounded level sets, projected-search iterates with quasi-Armijo steps satisfy $\lim_{k \to \infty} \nabla f(x_k)^T p_k = 0$.
  • Theorem B (Active set + quasi-Wolfe): Under the same assumptions with quasi-Wolfe steps, $\lim_{k \to \infty} |\nabla f(x_k)^T p_k| = 0$, ensuring the projected gradient vanishes.
  • Theorem C (Finite active-set identification): If $x^*$ is a nondegenerate stationary point, the true active set is eventually identified, with $A(x_k) = A(x^*)$ for all sufficiently large $k$, after which the method locally reduces to an unconstrained quasi-Newton method.
  • Theorem D (Interior-point projected search): For log-barrier or primal-dual penalty subproblems $M(v;\mu)$, use of the quasi-Armijo or quasi-Wolfe search yields $\lim_{k \to \infty} |\nabla M(v_k)^T d_k| = 0$. With suitable reduction of $\mu$, optimal solutions of the bound-constrained problem are obtained.

A plausible implication is that these results ensure both global convergence and robust internal identification of critical variables, underpinning the superlinear efficiency observed in numerical experiments.

6. Classes of Projected-Search Methods

Two principal projected-search frameworks are established:

(a) Projected-search active-set methods

  • Maintain a dynamic working set $W_k$ of nearly active constraints.
  • Solve a reduced quasi-Newton subproblem, enforcing $d_i = 0$ for $i \in W_k$.
  • The direction $p_k$ is trimmed so that it points into the interior of the feasible region.
  • The quasi-Wolfe search is performed along $x(\alpha) = \Pi(x_k + \alpha p_k)$, followed by updates of $x_{k+1}$ and the working-set tolerance $\epsilon_{k+1}$.
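The working-set bookkeeping in the first two steps can be sketched as follows; the threshold `eps` and the helper names are illustrative, not from the paper:

```python
def working_set(x, lo, hi, eps):
    """Indices of nearly active bounds: x_i within eps of l_i or u_i."""
    return {i for i, (xi, l, h) in enumerate(zip(x, lo, hi))
            if xi <= l + eps or xi >= h - eps}

def trim_direction(p, x, lo, hi, eps):
    """Zero the working-set components of p so the search moves only in
    the free variables (the reduced subproblem enforces d_i = 0 there)."""
    W = working_set(x, lo, hi, eps)
    return [0.0 if i in W else pi for i, pi in enumerate(p)]
```

With $x = (0, 0.5, 1)$ in the unit box, the working set is $\{0, 2\}$ and only the middle component of the direction survives trimming.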

(b) Projected-search primal-dual interior-point methods

  • Introduce augmented variables $v = (x, z_1, z_2)$.
  • Objective $M(v; \mu) = f(x) - \mu \sum_j \ln(x_j - \ell_j) - \mu \sum_j \ln(u_j - x_j) - \dots$
  • The Newton direction $d_k$ is projected onto the fraction-to-boundary box $v_k - \tau(v_k - \ell) \le v \le v_k + \tau(u - v_k)$ with $\tau < 1$.
  • The quasi-Wolfe search is executed along $v(\alpha) = \Pi(v_k + \alpha d_k)$, with subsequent updates of the variables and the barrier parameter.
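The fraction-to-boundary projection can be sketched as follows; the helper names and the default $\tau = 0.995$ (a common interior-point choice) are assumptions, not taken from the source:

```python
def ftb_box(v, lo, hi, tau=0.995):
    """Fraction-to-boundary box around v: each component may move at most a
    fraction tau of its distance to the bound (tau < 1 keeps strict interiority)."""
    return ([vi - tau * (vi - l) for vi, l in zip(v, lo)],
            [vi + tau * (h - vi) for vi, h in zip(v, hi)])

def ip_path_point(v, d, alpha, lo, hi, tau=0.995):
    """v(alpha): projection of v + alpha*d onto the fraction-to-boundary box."""
    bl, bh = ftb_box(v, lo, hi, tau)
    return [min(h, max(l, vi + alpha * di))
            for vi, di, l, h in zip(v, d, bl, bh)]
```

Because the box is strictly inside $[\ell, u]$, the projected path keeps the barrier terms in $M(v;\mu)$ finite at every trial point.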

This formalism yields an efficient and robust search in both active-set and interior-point contexts.

7. Computational Benchmarks

Empirical results demonstrate the superiority of the quasi-Wolfe path-following line search:

  • On CUTEst problems (154 in total), UBOPT with quasi-Wolfe solved 148, UBOPT with quasi-Armijo solved 145, and classical L-BFGS-B solved 138.
  • Function-evaluation performance profiles using log2 scale: UBOPT-qWolfe is best on ~60% of problems (ratio ≤ 1), UBOPT-qArmijo ~20%, L-BFGS-B ~15%.
  • Function-evaluation reduction for quasi-Wolfe search is 2–3× on average over Armijo-only.
  • Primal-dual interior-point code PDproj-qWolfe solved 128/137 box problems (≤1000 variables) within 500 iterations versus 112/137 for PD-Wolfe, with iteration and evaluation profiles indicating median speed-up ≈1.8× for PDproj-qWolfe.

This suggests that path-following quasi-Wolfe line search methods enhance both the reliability and computational efficiency of bound-constrained optimization, streamline active-set identification, and integrate seamlessly into modern active-set and interior-point optimization frameworks (Ferry et al., 2021).
