Proximal ADMM with Successive Convex Approximation
- The paper demonstrates that employing the AM-bound surrogate within a successive convex approximation framework yields convergence guarantees and steadily decreasing objectives in non-convex optimization.
- The methodology replaces non-convex product terms with convex upper bounds derived from the arithmetic mean inequality, enabling tractable subproblems at every iteration.
- Application in quantum source placement shows that the method achieves lower entanglement loss and improved placement accuracy, typically converging in fewer than 20 iterations.
Successive convex approximation (SCA) for element placement is a principled methodology for solving non-convex optimization problems in which the objective or constraints contain products of positive functions of the placement variable. It achieves tractability through surrogate convex upper bounds constructed using arithmetic mean inequalities, enabling the use of standard convex solvers at each iteration while preserving guarantees of convergence to stationary points. This approach has broad application within communication network design and, notably, quantum source positioning, where element (or source) coordinates must be optimized under challenging non-convex performance metrics (Qian et al., 2024).
1. Fundamental AM Upper Bound for Products
Let $f_i(\mathbf{x}) > 0$ for $i = 1, \dots, n$, with $\mathbf{x}$ representing a vector of placement or configuration parameters. The central object is the product
$$P(\mathbf{x}) = \prod_{i=1}^{n} f_i(\mathbf{x}).$$
By the classical arithmetic–geometric mean inequality,
$$\left( \prod_{i=1}^{n} f_i(\mathbf{x}) \right)^{1/n} \le \frac{1}{n} \sum_{i=1}^{n} f_i(\mathbf{x}),$$
which yields the explicit convex upper bound
$$P(\mathbf{x}) \le \left( \frac{1}{n} \sum_{i=1}^{n} f_i(\mathbf{x}) \right)^{n}.$$
This bound (labeled the “AM-bound”) provides a closed-form global convex surrogate for the inherently non-convex term $P(\mathbf{x})$.
Consider $g(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^{n} f_i(\mathbf{x})$ and the surrogate $g(\mathbf{x})^{n}$. If the functions $f_i$ are convex in $\mathbf{x}$ (in particular, affine), then $g$ is convex. Since $t \mapsto t^{n}$ is convex and non-decreasing for $t \ge 0$, composition gives convexity of the surrogate $g(\mathbf{x})^{n}$. Thus, the AM-bound can be directly used to convexify multiplicative terms in optimization problems, facilitating efficient computation (Qian et al., 2024).
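A quick numerical sanity check of the AM-bound is straightforward; the helper name `am_bound` below is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def am_bound(f_vals):
    """Convex AM surrogate for a product of positive values:
    prod_i f_i <= ((1/n) * sum_i f_i) ** n  (arithmetic-geometric mean)."""
    return float(np.mean(f_vals)) ** len(f_vals)

# The bound holds for arbitrary positive factors...
for _ in range(1000):
    f = rng.uniform(0.1, 5.0, size=int(rng.integers(2, 6)))
    assert np.prod(f) <= am_bound(f) + 1e-9

# ...with equality exactly when all factors coincide.
f_eq = np.full(4, 2.5)
print(np.isclose(np.prod(f_eq), am_bound(f_eq)))  # True
```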
2. Successive Convex Approximation Methodology
The successive convex approximation framework leverages the AM-bound iteratively within a majorization–minimization scheme. Consider an optimization problem of the form
$$\min_{\mathbf{x} \in \mathcal{X}} \; f_0(\mathbf{x}) + \sum_{k=1}^{K} \prod_{i=1}^{n_k} f_{k,i}(\mathbf{x}),$$
where $f_0$ is convex, each $f_{k,i}$ is positive and sufficiently regular, and $\mathcal{X}$ is convex and compact.
For each multiplicative term, substitute the AM upper bound, rescaled so that it is tight at the current iterate. For a product of two positive factors, applying the AM-bound to the squared, rescaled factors $\sqrt{c_k}\, f_{k,1}$ and $f_{k,2}/\sqrt{c_k}$ gives
$$f_{k,1}(\mathbf{x})\, f_{k,2}(\mathbf{x}) \le \frac{1}{2} \left( c_k\, f_{k,1}(\mathbf{x})^2 + \frac{f_{k,2}(\mathbf{x})^2}{c_k} \right), \qquad c_k > 0,$$
with equality iff $c_k = f_{k,2}(\mathbf{x}) / f_{k,1}(\mathbf{x})$.
At SCA iteration $t$, set $c_k^{(t)} = f_{k,2}(\mathbf{x}^{(t)}) / f_{k,1}(\mathbf{x}^{(t)})$ and solve the convex surrogate problem
$$\mathbf{x}^{(t+1)} \in \arg\min_{\mathbf{x} \in \mathcal{X}} \; f_0(\mathbf{x}) + \sum_{k=1}^{K} \frac{1}{2} \left( c_k^{(t)}\, f_{k,1}(\mathbf{x})^2 + \frac{f_{k,2}(\mathbf{x})^2}{c_k^{(t)}} \right).$$
Convergence to a stationary point is rigorously guaranteed under the assumptions that the $f_{k,i}$ are positive smooth functions, $f_0$ is convex, and $\mathcal{X}$ is convex and compact. The sequence of objective values is nonincreasing, and limit points satisfy first-order KKT conditions by standard SCA analysis (Qian et al., 2024).
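The iteration can be sketched on a toy one-dimensional instance: minimizing a product of two positive affine factors over a compact interval. The instance, closed-form subproblem solution, and helper names below are illustrative, not taken from the paper.

```python
import numpy as np

# Toy instance: minimize h(x) = f1(x) * f2(x) over the compact set [0, 2],
# with positive affine factors f1(x) = x + 1 and f2(x) = 3 - x.
f1 = lambda x: x + 1.0
f2 = lambda x: 3.0 - x

def h(x):
    return f1(x) * f2(x)

def sca_am(x0, iters=50, lo=0.0, hi=2.0):
    """SCA with the weighted AM surrogate f1*f2 <= (c*f1^2 + f2^2/c)/2,
    where c is refreshed each iteration so the bound is tight at x_t."""
    x, history = x0, [h(x0)]
    for _ in range(iters):
        c = f2(x) / f1(x)  # makes the surrogate touch h at the iterate
        # The surrogate (c*(x+1)^2 + (3-x)^2/c)/2 is a convex quadratic;
        # minimize it in closed form, then project onto [lo, hi].
        x = np.clip((3.0 - c**2) / (c**2 + 1.0), lo, hi)
        history.append(h(x))
    return x, history

x_star, hist = sca_am(0.5)
assert all(a >= b - 1e-12 for a, b in zip(hist, hist[1:]))  # monotone decrease
print(x_star, hist[-1])  # → 0.0 3.0
```

From the start point 0.5 the iterates descend to the boundary point 0, where the objective value 3.0 is below the starting value 3.75, illustrating the guaranteed monotone decrease.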
3. Application in Quantum Source Placement
A representative use case is quantum source positioning as detailed in (Qian et al., 2024). Consider $N$ quantum nodes at locations $\mathbf{q}_1, \dots, \mathbf{q}_N$, with the objective of placing a source at $\mathbf{x}$, minimizing total link “loss” of the form
$$\min_{\mathbf{x}} \sum_{(i,j) \in \mathcal{P}} \beta_{ij}\, \ell_i(\mathbf{x})\, \ell_j(\mathbf{x}),$$
where $\ell_i(\mathbf{x})$ is a positive, increasing function of the source–node distance $\|\mathbf{x} - \mathbf{q}_i\|$. Here $(i,j) \in \mathcal{P}$ indexes pairs of nodes, and the parameters $\beta_{ij} > 0$ are fixed.
The non-convexity comes from terms involving the distances $\|\mathbf{x} - \mathbf{q}_i\|$ and the products $\ell_i(\mathbf{x})\, \ell_j(\mathbf{x})$. Introduce auxiliary variables $u_i$, recasting the non-convexity into the constraint
$$u_i \ge \ell_i(\mathbf{x}), \qquad i = 1, \dots, N,$$
so that each pairwise term is bounded by $\beta_{ij}\, u_i u_j$. Apply the AM-bound
$$u_i u_j \le \frac{1}{2} \left( c\, u_i^2 + \frac{u_j^2}{c} \right)$$
for a scalar parameter $c > 0$, chosen at iteration $t$ as
$$c^{(t)} = \frac{u_j^{(t)}}{u_i^{(t)}},$$
so the bound is tight at the current iterate. Any remaining non-convex right-hand side is linearized via a first-order Taylor approximation about the current iterate. The resulting convex surrogate is solved for $(\mathbf{x}, \{u_i\})$, and the iteration continues until convergence.
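The placement loop can be sketched as follows. The exponential per-link attenuation model, the synthetic node coordinates, and the use of SciPy's L-BFGS-B as a stand-in for a generic convex solver are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
gamma = 0.2                                   # attenuation rate (assumed)
nodes = rng.uniform(0.0, 10.0, size=(5, 2))   # node locations (synthetic)
pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]

def link(x, q):
    # Per-link attenuation factor, increasing in source-node distance
    # (exponential model assumed for illustration).
    return np.exp(gamma * np.linalg.norm(x - q))

def loss(x):
    # Total pairwise loss: sum of products of per-link factors (non-convex).
    return sum(link(x, nodes[i]) * link(x, nodes[j]) for i, j in pairs)

def surrogate_at(x_t):
    # Weighted AM bound per pair, tight at x_t:
    #   l_i * l_j <= (c * l_i**2 + l_j**2 / c) / 2,  c = l_j(x_t) / l_i(x_t).
    # Each squared term exp(2*gamma*||x - q||) is convex in x.
    cs = [link(x_t, nodes[j]) / link(x_t, nodes[i]) for i, j in pairs]
    def s(x):
        return sum(0.5 * (c * link(x, nodes[i]) ** 2
                          + link(x, nodes[j]) ** 2 / c)
                   for (i, j), c in zip(pairs, cs))
    return s

x = nodes.mean(axis=0)                        # warm start at the centroid
obj = [loss(x)]
for _ in range(20):
    x = minimize(surrogate_at(x), x, method="L-BFGS-B").x  # convex subproblem
    obj.append(loss(x))

# Majorization-minimization guarantees a nonincreasing objective sequence.
assert all(a >= b - 1e-6 for a, b in zip(obj, obj[1:]))
```

Because each surrogate is tight at the warm start, any decrease the inner solver achieves on the surrogate translates into a decrease of the true loss.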
4. Properties and Theoretical Guarantees
The SCA/AM methodology retains global convexity at each iteration, ensuring efficient solvability by off-the-shelf convex solvers. Under the maintained assumptions (convex or affine $f_{k,i}$, positive function values, and a convex, compact feasible set), convergence to stationary points of the original problem is ensured. Each surrogate is tangent to the true objective at the current iterate and majorizes the non-convex objective elsewhere, guaranteeing monotonic decrease of the objective sequence (Qian et al., 2024).
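Both properties (tangency at the iterate, global majorization) can be verified numerically for the weighted AM surrogate of a product of two factors; the affine factors below are illustrative, not from the paper.

```python
import numpy as np

# Illustrative positive affine factors on the feasible interval [0, 2].
f1 = lambda x: 2.0 * x + 1.0
f2 = lambda x: 5.0 - x

x_t = 0.7                              # current iterate
c = f2(x_t) / f1(x_t)                  # AM-bound weight, tight at x_t
surr = lambda x: 0.5 * (c * f1(x) ** 2 + f2(x) ** 2 / c)
orig = lambda x: f1(x) * f2(x)

# Tangency: values and slopes agree at the current iterate.
assert np.isclose(surr(x_t), orig(x_t))
eps = 1e-6
d_surr = (surr(x_t + eps) - surr(x_t - eps)) / (2 * eps)
d_orig = (orig(x_t + eps) - orig(x_t - eps)) / (2 * eps)
assert abs(d_surr - d_orig) < 1e-5

# Majorization: surrogate dominates the objective on the feasible set.
xs = np.linspace(0.0, 2.0, 201)
assert np.all(surr(xs) >= orig(xs) - 1e-12)
print("tangent and majorizing")
```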
5. Numerical Performance and Comparative Insights
In quantum source placement, empirical results demonstrate monotonic reduction of the objective at each SCA iteration, with typical convergence in fewer than 20 iterations. The final placement consistently achieves a strictly smaller maximum pairwise entanglement loss than SCA schemes based solely on first-order Taylor approximations, at comparable per-iteration computational cost. Baseline comparisons with random placements and direct gradient descent show that the AM-bound SCA consistently achieves both lower objectives and greater placement accuracy (Qian et al., 2024).
6. Broader Contexts and Extensions
The approach generalizes to functions involving multiplicative or fractional structure across numerous communication network optimization scenarios, including resource allocation, power control, and transmission energy minimization, whenever product terms in objectives or constraints induce fundamental non-convexity. The method’s reliance on classical inequalities (AM, GM, QM, HM) renders it amenable to further extensions, including adaptive surrogate selection and broader classes of non-convexity. The flexibility of SCA—alternating between auxiliary parameter updates and convex subproblem solutions—underpins its effectiveness in high-dimensional element placement and related system design problems (Qian et al., 2024).