Path-Following Line Search in Optimization
- Path-following line search is a technique that searches along the continuous, piecewise-linear path obtained by projecting points along a search direction onto the feasible region, with the path bending each time a variable reaches a bound.
- It employs quasi-Wolfe conditions to handle nondifferentiability and ensure sufficient decrease, thereby reducing the number of function and derivative evaluations.
- The method accelerates active-set identification in both active-set and interior-point frameworks, achieving 2–5× improvements over standard Armijo-only strategies.
Path-following line search methods for bound-constrained optimization perform a search along the continuous, piecewise-linear path formed by projecting points along a search direction onto the feasible region. These techniques enable efficient and robust optimization when explicit bounds are imposed on the variables, since standard smooth line searches such as the Wolfe line search cannot be applied directly: the objective is generally nondifferentiable along the projected search path. Projected-search strategies, particularly those leveraging quasi-Wolfe line search conditions, demonstrate strong performance in both active-set and interior-point frameworks by accelerating active-set identification and reducing the number of required function and derivative evaluations (Ferry et al., 2021).
1. Projected-Search Path Definition
Consider the bound-constrained optimization problem

$$\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{subject to} \quad \ell \le x \le u,$$

where $\ell$ and $u$ are vectors specifying the lower and upper bounds. For a feasible point $x$ and any search direction $p$, the projected-search path is defined by

$$x(\alpha) = P(x + \alpha p), \qquad \text{where} \quad [P(y)]_i = \max\bigl(\ell_i, \min(y_i, u_i)\bigr),$$

so each component $x_i(\alpha)$ moves linearly with $\alpha$ until it hits the corresponding bound, at which point it remains fixed. The scalar values

$$\alpha_i = \begin{cases} (u_i - x_i)/p_i & \text{if } p_i > 0,\\ (\ell_i - x_i)/p_i & \text{if } p_i < 0,\\ +\infty & \text{if } p_i = 0 \end{cases}$$

indicate the "kink points" where the path changes slope. Between consecutive kink points, the path is linear in the remaining free variables; at each finite kink point $\alpha_i$ one variable reaches its bound, producing a continuous, piecewise-linear curve.
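The projected path and its kink points can be computed componentwise. A minimal sketch in pure Python (function names are illustrative, not the paper's API):

```python
import math

def project(y, lo, hi):
    """Componentwise projection onto the box [lo, hi]."""
    return [min(max(yi, li), ui) for yi, li, ui in zip(y, lo, hi)]

def path_point(x, p, alpha, lo, hi):
    """Point x(alpha) = P(x + alpha*p) on the piecewise-linear projected path."""
    return project([xi + alpha * pi for xi, pi in zip(x, p)], lo, hi)

def kink_points(x, p, lo, hi):
    """Step values at which each component reaches its bound (inf if never)."""
    kinks = []
    for xi, pi, li, ui in zip(x, p, lo, hi):
        if pi > 0:
            kinks.append((ui - xi) / pi)   # moving toward the upper bound
        elif pi < 0:
            kinks.append((li - xi) / pi)   # moving toward the lower bound
        else:
            kinks.append(math.inf)         # component never hits a bound
    return kinks
```

For example, from $x = (0,0)$ with $p = (1,2)$ in the unit box $[-1,1]^2$, the second component hits its upper bound at $\alpha = 0.5$ and stays fixed while the first continues to move.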
2. Quasi-Wolfe Line Search Conditions
For projected-search trajectories, the objective function restricted to the path is

$$\varphi(\alpha) = f\bigl(P(x + \alpha p)\bigr),$$

which is typically piecewise differentiable, with one-sided derivatives

$$\varphi'_-(\alpha) = \lim_{h \to 0^+} \frac{\varphi(\alpha) - \varphi(\alpha - h)}{h}, \qquad \varphi'_+(\alpha) = \lim_{h \to 0^+} \frac{\varphi(\alpha + h) - \varphi(\alpha)}{h}.$$
The quasi-Wolfe step is accepted if it meets the following criteria:
- (C1) Quasi-Armijo: $\varphi(\alpha) \le \varphi(0) + \eta_A\,\alpha\,\varphi'_+(0)$
- At least one of:
  - (C2): $|\varphi'_+(\alpha)| \le \eta_W\,|\varphi'_+(0)|$
  - (C3): $|\varphi'_-(\alpha)| \le \eta_W\,|\varphi'_+(0)|$
  - (C4): $\varphi$ is non-differentiable at $\alpha$ and $\varphi'_-(\alpha) \le 0 \le \varphi'_+(\alpha)$

where $0 < \eta_A \le \eta_W < 1$.
When $\varphi$ is smooth, these conditions reduce to classical Wolfe criteria. A shifted residual

$$\psi(\alpha) = \varphi(\alpha) - \varphi(0) - \eta_A\,\alpha\,\varphi'_+(0)$$

and its one-sided derivatives are also employed in identifying suitable steps.
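The acceptance test can be sketched numerically with finite-difference one-sided derivatives. The step-size $h$, the kink-detection tolerance, and the parameter defaults below are illustrative assumptions, not values from the paper:

```python
def one_sided_derivs(phi, alpha, h=1e-6):
    """Finite-difference estimates of the one-sided derivatives of phi."""
    dplus = (phi(alpha + h) - phi(alpha)) / h
    dminus = (phi(alpha) - phi(alpha - h)) / h if alpha > h else dplus
    return dminus, dplus

def is_quasi_wolfe(phi, alpha, eta_a=1e-4, eta_w=0.9, kink_tol=1e-4):
    """Test a candidate step alpha against conditions (C1)-(C4)."""
    d0 = one_sided_derivs(phi, 0.0)[1]               # phi'_+(0), negative for descent
    if phi(alpha) > phi(0.0) + eta_a * alpha * d0:   # (C1) quasi-Armijo
        return False
    dminus, dplus = one_sided_derivs(phi, alpha)
    if abs(dplus) <= eta_w * abs(d0):                # (C2)
        return True
    if abs(dminus) <= eta_w * abs(d0):               # (C3)
        return True
    kink = abs(dplus - dminus) > kink_tol            # crude nondifferentiability check
    return kink and dminus <= 0.0 <= dplus           # (C4): kink is a local minimizer
```

On the smooth model $\varphi(\alpha) = (\alpha - 1)^2$ the test reduces to the classical Wolfe check; on $\varphi(\alpha) = |\alpha - 0.5|$ it accepts the kink at $\alpha = 0.5$ via (C4) even though neither one-sided derivative is small.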
3. Algorithmic Workflow and Search Acceleration
The quasi-Wolfe line search is implemented via a two-stage algorithmic workflow:
- Stage One: Backtracking search with successively halved step sizes $\alpha_0, \alpha_0/2, \alpha_0/4, \dots$, evaluating $\varphi(\alpha)$ and its one-sided derivatives. If a step satisfies (C1) together with one of (C2)–(C4), it is accepted. Otherwise, when function values plateau or fail to decrease, the algorithm transitions to stage two.
- Stage Two: Within the bracketing interval identified in stage one, a more refined search is performed, exploiting kink-based selection (testing the nearest kink point) or safeguarded interpolation (cubic or quadratic) using the available function and derivative values.
Practical implementations maintain a sorted list of kink points (using heapsort), testing each for satisfaction of the quasi-Wolfe conditions before resorting to interpolation. This approach leverages the structure of the piecewise-linear path, accelerating convergence and reducing computational overhead.
4. Comparative Analysis with Armijo and Wolfe Searches
Armijo-only backtracking applies only condition (C1), so it frequently reduces the step size excessively because it ignores the curvature changes introduced when the path bends at a bound. Classical Wolfe search requires differentiability and reliably enforces positive curvature, ensuring that quasi-Newton updating is well-posed. The quasi-Wolfe conditions inherit the interpolation advantages of standard Wolfe searches, enabling tight curvature control even on nondifferentiable, piecewise-linear paths (Ferry et al., 2021).
Observed empirical performance shows that the quasi-Wolfe search dramatically reduces the number of function and derivative evaluations necessary compared to Armijo-only methods, with speed-ups of 2–5× on large test sets. This efficiency and robustness are especially pronounced in large-scale, bound-constrained optimization problems.
5. Key Convergence Properties
Major theoretical results establish the robustness of path-following line searches:
- Theorem A (Active set + quasi-Armijo): For continuously differentiable $f$ with bounded level sets, projected-search iterates with quasi-Armijo steps satisfy $\liminf_{k \to \infty} \|x_k - P(x_k - \nabla f(x_k))\| = 0$.
- Theorem B (Active set + quasi-Wolfe): Under the same assumptions with quasi-Wolfe steps, $\lim_{k \to \infty} \|x_k - P(x_k - \nabla f(x_k))\| = 0$, ensuring the projected gradient vanishes.
- Theorem C (Finite active-set identification): If the iterates converge to a nondegenerate stationary point $x^*$, the true active set is identified in finitely many iterations, after which the method locally reduces to an unconstrained quasi-Newton method.
- Theorem D (Interior-point projected search): For log-barrier or primal-dual penalty subproblems with barrier parameter $\mu > 0$, use of the quasi-Armijo or quasi-Wolfe search yields convergence to a stationary point of the subproblem. With suitable reduction of $\mu \to 0$, solutions of the original bound-constrained problem are obtained.
A plausible implication is that these results secure both global convergence and finite identification of the optimal active set, underpinning the efficiency observed in numerical experiments.
6. Classes of Projected-Search Methods
Two principal projected-search frameworks are established:
(a) Projected-search active-set methods
- Maintain a dynamic working set of nearly active constraints.
- Solve a reduced quasi-Newton subproblem, enforcing $p_i = 0$ for every index $i$ in the working set.
- The search direction $p$ is trimmed so that it points into the interior of the feasible region.
- The quasi-Wolfe search is performed along the projected path $x(\alpha) = P(x + \alpha p)$, followed by updates of the iterate and of the working-set tolerance.
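The trimming step can be sketched as follows; the helper name, the `working_set` interface, and the `tol` parameter are assumptions for illustration, not the paper's API:

```python
def trim_direction(x, p, working_set, lo, hi, tol=0.0):
    """Zero out direction components fixed by the working set, plus any
    component that would leave the box from an (almost) active bound."""
    q = list(p)
    for i in range(len(q)):
        at_lower = x[i] <= lo[i] + tol and q[i] < 0.0   # pushing below lower bound
        at_upper = x[i] >= hi[i] - tol and q[i] > 0.0   # pushing above upper bound
        if i in working_set or at_lower or at_upper:
            q[i] = 0.0
    return q
```

The surviving components define a direction that is feasible for small steps, so the subsequent projected search starts on a genuinely interior segment of the path.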
(b) Projected-search primal-dual interior-point methods
- Introduce augmented primal-dual variables $v = (x, z)$, where $z$ holds dual estimates for the bound constraints.
- The objective is a primal-dual penalty-barrier merit function $M_\mu(x, z)$ that augments $f(x)$ with logarithmic barrier terms on the bound slacks and terms coupling $x$ and $z$.
- The Newton direction $\Delta v$ is projected onto a fraction-to-boundary box that keeps each variable a fixed fraction of the distance away from its bounds.
- The quasi-Wolfe search is executed along the resulting projected path, with subsequent updates of the variables and the barrier parameter $\mu$.
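The projected-search variant works with a shrunken box rather than a scalar step cap, but the underlying idea is the conventional interior-point fraction-to-boundary rule, which can be sketched as (an illustrative helper, not the paper's code):

```python
def fraction_to_boundary(x, dx, lo, hi, gamma=0.995):
    """Largest alpha in (0, 1] such that x + alpha*dx keeps at least a
    fraction (1 - gamma) of the current distance to each bound."""
    alpha = 1.0
    for xi, di, li, ui in zip(x, dx, lo, hi):
        if di < 0.0:
            alpha = min(alpha, gamma * (li - xi) / di)   # heading toward lower bound
        elif di > 0.0:
            alpha = min(alpha, gamma * (ui - xi) / di)   # heading toward upper bound
    return alpha
```

With $\gamma$ close to 1 the rule permits long steps while guaranteeing strict interiority, which keeps the barrier terms of the merit function finite along the search.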
This formalism yields an efficient and robust search in both active-set and interior-point contexts.
7. Computational Benchmarks
Empirical results demonstrate the superiority of the quasi-Wolfe path-following line search:
- On CUTEst problems (154 in total), UBOPT with quasi-Wolfe solved 148, UBOPT with quasi-Armijo solved 145, and classical L-BFGS-B solved 138.
- Function-evaluation performance profiles using log2 scale: UBOPT-qWolfe is best on ~60% of problems (ratio ≤ 1), UBOPT-qArmijo ~20%, L-BFGS-B ~15%.
- Function-evaluation reduction for quasi-Wolfe search is 2–3× on average over Armijo-only.
- Primal-dual interior-point code PDproj-qWolfe solved 128/137 box problems (≤1000 variables) within 500 iterations versus 112/137 for PD-Wolfe, with iteration and evaluation profiles indicating median speed-up ≈1.8× for PDproj-qWolfe.
This suggests that path-following quasi-Wolfe line search methods enhance both the reliability and computational efficiency of bound-constrained optimization, streamline active-set identification, and integrate seamlessly into modern active-set and interior-point optimization frameworks (Ferry et al., 2021).