WGMV Primal-Dual Algorithm
- The WGMV primal–dual algorithm is an operator splitting method for composite optimization and saddle-point problems with weakly convex structure.
- It employs a modified duality gap and inf-sharp error bounds to ensure local linear convergence even in nonconvex settings.
- Numerical experiments in large-scale ℓ1-regularization and image deblurring demonstrate its practical effectiveness and robustness.
The WGMV (Weakly convex–Gapped–Modified-Variational) primal–dual algorithm is an operator splitting method for composite optimization and saddle-point problems in Hilbert spaces, targeting models with weakly convex (possibly nonsmooth) structure in the primal component. Unlike classical schemes, WGMV achieves local linear convergence rates under sharpness of a modified duality gap even when the objective is nonconvex. The methodology is rooted in recent advances on proximal subdifferentials, inf-sharp error bounds, and alternative definitions of the duality gap, broadening the applicability of primal–dual hybrid gradient methods well beyond the standard convex–concave setting (Bednarczuk et al., 2024).
1. Problem Formulation and Mathematical Framework
WGMV operates in real Hilbert spaces $X$ and $Y$, equipped with standard inner products. The setting is the composite minimization problem

$$\min_{x \in X} \; f(x) + g(Kx),$$

where:
- $f : X \to \mathbb{R} \cup \{+\infty\}$ is proper, lower-semicontinuous, and $\rho$-weakly convex,
- $g : Y \to \mathbb{R} \cup \{+\infty\}$ is proper, lower-semicontinuous, and convex,
- $K : X \to Y$ is a bounded linear operator.

The associated saddle-point (Lagrangian) problem is

$$\min_{x \in X} \, \max_{y \in Y} \; L(x, y) = \langle Kx, y \rangle + f(x) - g^*(y),$$

where $g^*$ is the convex conjugate of $g$ (Bednarczuk et al., 2024).
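For concreteness, one instance of this template (an illustrative choice made here, not a setup prescribed by the paper) is regularized least squares with a weakly convex sparsity penalty $\phi$:

$$\min_{x} \; \phi(x) + \tfrac{1}{2}\|Ax - b\|^2, \qquad f = \phi, \quad g = \tfrac{1}{2}\|\cdot - b\|^2, \quad K = A,$$

so that $g^*(y) = \tfrac{1}{2}\|y\|^2 + \langle b, y \rangle$ and the dual proximal update used later reduces to a simple affine map.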
2. Weak Convexity and Proximal Subgradients
A function $f : X \to \mathbb{R} \cup \{+\infty\}$ is $\rho$-weakly convex if

$$f(\lambda x + (1-\lambda)\bar{x}) \le \lambda f(x) + (1-\lambda) f(\bar{x}) + \frac{\rho}{2}\,\lambda(1-\lambda)\,\|x - \bar{x}\|^2$$

for all $x, \bar{x} \in X$ and $\lambda \in [0,1]$. This is equivalent to $f + \frac{\rho}{2}\|\cdot\|^2$ being convex. For example, $f(x) = |x| - \frac{\rho}{2}x^2$ on $\mathbb{R}$ is $\rho$-weakly convex, since adding $\frac{\rho}{2}x^2$ recovers the convex function $|x|$.
For weakly convex $f$, the global proximal subdifferential at $x$ is

$$\partial_p f(x) = \Big\{ v \in X \;:\; f(\bar{x}) \ge f(x) + \langle v, \bar{x} - x \rangle - \frac{\rho}{2}\|\bar{x} - x\|^2 \ \text{ for all } \bar{x} \in X \Big\}.$$

This coincides with the Clarke subdifferential, guaranteeing nonemptiness and facilitating the subsequent algorithmic steps (Bednarczuk et al., 2024).
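These definitions are easy to test numerically. Below is a minimal sanity check (an illustrative sketch under assumed parameters, not code from the paper) for the one-dimensional example above; the closed-form prox used at the end follows from first-order optimality of the strongly convex prox subproblem:

```python
# Sanity check (illustrative, not from the paper): f(x) = |x| - (rho/2) x^2 is
# rho-weakly convex, and prox_{tau f} is single-valued whenever tau * rho < 1.
import numpy as np

rho, tau = 0.5, 1.0   # tau * rho = 0.5 < 1: the prox subproblem is strongly convex

def f(x):
    return np.abs(x) - 0.5 * rho * x**2

# Check the weak-convexity inequality on random triples (x, xbar, lam).
rng = np.random.default_rng(0)
for _ in range(10_000):
    x, xbar = rng.uniform(-5, 5, size=2)
    lam = rng.uniform()
    lhs = f(lam * x + (1 - lam) * xbar)
    rhs = lam * f(x) + (1 - lam) * f(xbar) + 0.5 * rho * lam * (1 - lam) * (x - xbar) ** 2
    assert lhs <= rhs + 1e-9

# prox_{tau f}(v): brute-force grid minimization vs. the closed form
# sign(v) * max(|v| - tau, 0) / (1 - tau * rho), valid for tau * rho < 1.
v = 2.0
grid = np.linspace(-10.0, 10.0, 2_000_001)
x_grid = grid[np.argmin(f(grid) + (grid - v) ** 2 / (2 * tau))]
x_closed = np.sign(v) * max(abs(v) - tau, 0.0) / (1 - tau * rho)
print(x_grid, x_closed)   # both approximately 2.0
```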
3. Modified Gap Function and Inf-Sharpness
In contrast to the standard duality gap, the WGMV algorithm leverages a modified gap function restricted to the saddle set:

$$G(x, y) = \inf_{(\hat{x}, \hat{y}) \in \hat{S}} \big[\, L(x, \hat{y}) - L(\hat{x}, y) \,\big],$$

where $\hat{S}$ is the set of saddle points. Inf-sharpness is defined by the existence of $\gamma > 0$ such that

$$G(x, y) \ \ge \ \gamma \, \operatorname{dist}\big((x, y), \hat{S}\big)$$

for all $(x, y)$ in a neighborhood of $\hat{S}$. This gap vanishes exactly on $\hat{S}$ and provides a local error-bound type property necessary for linear convergence analysis (Bednarczuk et al., 2024).
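A toy illustration of the definition (constructed here for intuition, not taken from the paper): take $f(x) = |x|$, $g^*(y) = |y|$, and $K = \mathrm{Id}$ on $\mathbb{R}$, so that $L(x, y) = |x| + xy - |y|$ has the unique saddle point $\hat{S} = \{(0, 0)\}$. Then

$$G(x, y) = L(x, 0) - L(0, y) = |x| + |y| \ \ge \ \sqrt{x^2 + y^2} = \operatorname{dist}\big((x, y), \hat{S}\big),$$

so inf-sharpness holds with $\gamma = 1$.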
4. Algorithm Structure and Convergence Guarantees
The “dual-first” WGMV primal–dual iteration is

$$y^{k+1} = \operatorname{prox}_{\sigma g^*}\big(y^k + \sigma K x^k\big), \qquad x^{k+1} = \operatorname{prox}_{\tau f}\big(x^k - \tau K^*(2y^{k+1} - y^k)\big),$$

with parameters $\tau, \sigma > 0$, subject to $\tau \rho < 1$ (so that $\operatorname{prox}_{\tau f}$ is well defined and single-valued), $\tau \sigma \|K\|^2 < 1$, and a sharpness-related constraint coupling the step sizes to the inf-sharpness constant $\gamma$. The optimality conditions are written via proximal subgradients:

$$\frac{x^k - x^{k+1}}{\tau} - K^*(2y^{k+1} - y^k) \in \partial_p f(x^{k+1}), \qquad \frac{y^k - y^{k+1}}{\sigma} + K x^k \in \partial g^*(y^{k+1}).$$

Under inf-sharpness, geometric (linear) convergence of the distance to the saddle set is obtained within a neighborhood of attraction (Bednarczuk et al., 2024). The radius of convergence depends on problem and step size parameters.
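The scheme is short to implement. The following sketch (a minimal illustration on assumed toy data, not the authors' code) runs the dual-first iteration on the least-squares instance from Section 1, with $f(x) = \|x\|_1 - \tfrac{\rho}{2}\|x\|^2$ so that both proximal maps have the closed forms derived above:

```python
# Dual-first primal-dual iteration on a toy sparse regression instance
# (illustrative sketch; all problem data and parameter choices are assumptions):
#   f(x) = ||x||_1 - (rho/2)||x||^2,   g(u) = (1/2)||u - b||^2,   K = A.
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = 3.0 * rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

s = np.linalg.svd(A, compute_uv=False)
rho = 0.5 * s.min() ** 2      # weak-convexity modulus, kept small enough here that
                              # the toy objective stays bounded below
norm_K = s.max()              # operator norm ||K||
tau = min(0.9 / rho, 0.9 / norm_K)   # enforce tau * rho < 1
sigma = 0.9 / (tau * norm_K ** 2)    # enforce tau * sigma * ||K||^2 < 1

def prox_f(v, t):
    # prox of t * (||.||_1 - (rho/2)||.||^2), elementwise; valid for t * rho < 1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0) / (1.0 - t * rho)

def prox_gstar(v, s_):
    # prox of s_ * g*, where g*(y) = (1/2)||y||^2 + <b, y>
    return (v - s_ * b) / (1.0 + s_)

x, y = np.zeros(n), np.zeros(m)
for _ in range(2000):
    y_next = prox_gstar(y + sigma * (A @ x), sigma)        # dual step first
    x = prox_f(x - tau * (A.T @ (2 * y_next - y)), tau)    # primal step, extrapolated dual
    y = y_next

print("data residual:", np.linalg.norm(A @ x - b))
print("nonzeros:", int(np.count_nonzero(np.abs(x) > 1e-6)))
```

The 0.9 factors implement the two step size constraints with a safety margin; the sharpness-related condition on $\gamma$ is not enforced, mirroring the practical situation described in Section 6, where the sharpness constants are unknown.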
5. Relationship to Classical Convex-Concave Algorithms
In convex–concave scenarios, methods such as Chambolle–Pock exhibit sublinear $O(1/N)$ ergodic duality-gap convergence (improvable to $O(1/N^2)$ under partial strong convexity). WGMV extends this picture to yield local linear convergence even when $f$ is only weakly convex, provided sharpness is satisfied. Sharpness of only the primal component suffices for local convergence of the primal iterates $x^k$; the rate (linear or sublinear) is governed by the decay speed of the dual residual (Bednarczuk et al., 2024).
6. Practical Aspects and Numerical Performance
Experiments detail the behavior in several settings:
- Synthetic scalar models with unique saddle points demonstrated linear convergence when initialized within the convergence radius, and divergence otherwise.
- Large-scale ℓ1-regularization experiments compared the standard convex and weakly convex variants; the latter converged faster near the solution and exhibited greater noise robustness.
- Image deblurring and total-variation denoising on benchmark datasets (BSD68) indicated that the weakly convex formulation provided sharper reconstructions and higher PSNR, especially at moderate and high noise levels.
Closed-form proximal updates are available for many weakly convex penalties; the minimax concave penalty (MCP), for instance, admits a firm-thresholding prox. The constants governing sharpness and the convergence radius are not known a priori, so practical deployment relies on heuristic step size tuning and on monitoring the decay of the primal and dual residuals $\|x^{k+1} - x^k\|$ and $\|y^{k+1} - y^k\|$ (Bednarczuk et al., 2024).
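As an example of such a closed form, the sketch below (illustrative, with assumed parameter values) implements the firm-thresholding prox of the MCP and checks it against brute-force minimization; it is valid for step size $t < \beta$, since the MCP is $(1/\beta)$-weakly convex:

```python
# Firm thresholding: closed-form prox of the minimax concave penalty (MCP)
#   phi(u) = lam*|u| - u^2/(2*beta)  if |u| <= beta*lam,  else  beta*lam^2/2.
# Illustrative sketch with assumed parameters; requires 0 < t < beta.
import numpy as np

def prox_mcp(v, t, lam, beta):
    """Elementwise prox_{t*phi}(v) for the MCP; requires 0 < t < beta."""
    out = v.copy()                      # |v| > beta*lam: leave unchanged
    mid = np.abs(v) <= beta * lam       # shrinkage region
    out[mid] = np.sign(v[mid]) * np.maximum(np.abs(v[mid]) - t * lam, 0.0) / (1.0 - t / beta)
    return out

# Quick check against brute-force minimization of phi(u) + (u - v)^2 / (2t):
lam, beta, t, v = 1.0, 4.0, 0.5, 1.5
phi = lambda u: np.where(np.abs(u) <= beta * lam,
                         lam * np.abs(u) - u**2 / (2 * beta),
                         beta * lam**2 / 2)
grid = np.linspace(-10.0, 10.0, 1_000_001)
print(grid[np.argmin(phi(grid) + (grid - v)**2 / (2 * t))])   # ~1.142857
print(prox_mcp(np.array([v]), t, lam, beta)[0])               # 1.142857...
```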
7. Limitations and Directions for Further Research
The theory establishes only local convergence: initialization outside the identified basin of attraction may lead to convergence to extraneous critical points or to divergence. Natural extensions include adaptive step size selection (e.g., via line search), block-coordinate or stochastic variants, and schemes for more general nonconvex–nonconcave saddle-point problems under two-block error-bound (sharpness) assumptions.
The WGMV primal–dual algorithm integrates proximal-splitting methodology with a modified duality gap and sharpness-based error bounds, extending the applicability of primal–dual approaches to weakly convex problems with provable local linear rates of convergence (Bednarczuk et al., 2024).