
Proximal ADMM with Successive Convex Approximation

Updated 18 January 2026
  • The paper demonstrates that employing the AM-bound surrogate within a successive convex approximation framework yields convergence guarantees and steadily decreasing objectives in non-convex optimization.
  • The methodology replaces non-convex product terms with convex upper bounds derived from the arithmetic mean inequality, enabling tractable subproblems at every iteration.
  • Application in quantum source placement shows that the method achieves lower entanglement loss and improved placement accuracy, typically converging in fewer than 20 iterations.

Successive convex approximation (SCA) for element placement is a principled methodology for solving non-convex optimization problems in which the objective or constraints contain products of positive functions of the placement variable. It achieves tractability through surrogate convex upper bounds constructed from arithmetic mean inequalities, enabling the use of efficient convex solvers at each iteration while preserving guarantees of convergence to stationary points. This approach has broad application in communication network design and, notably, quantum source positioning, where element (or source) coordinates must be optimized under challenging non-convex performance metrics (Qian et al., 2024).

1. Fundamental AM Upper Bound for Products

Let $f_i(x) > 0$ for $i = 1, \dots, N$, with $x$ representing a vector of placement or configuration parameters. The central object is the product

$$P(x) = \prod_{i=1}^{N} f_i(x).$$

By the classical arithmetic–geometric mean inequality,

$$\sqrt[N]{\prod_{i=1}^{N} f_i(x)} \le \frac{1}{N} \sum_{i=1}^{N} f_i(x),$$

which yields the explicit convex upper bound

$$P(x) \le \left( \frac{1}{N} \sum_{i=1}^{N} f_i(x) \right)^{N}.$$

This bound, labeled the “AM-bound,” provides a closed-form global convex surrogate for the inherently non-convex term $P(x)$.

Consider $S(x) := \frac{1}{N} \sum_{i=1}^{N} f_i(x)$ and $h(t) = t^N$. If the functions $f_i(x)$ are convex in $x$ (in particular, affine), then $S(x)$ is convex. Since $h(t)$ is convex and non-decreasing for $t \ge 0$, the composition $h(S(x))$ is convex. Thus the AM-bound can be directly used to convexify multiplicative terms in optimization problems, facilitating efficient computation (Qian et al., 2024).
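As a quick numeric sanity check (a standalone sketch, not from the paper), the AM-bound can be verified for arbitrary positive factor values:

```python
import math

def am_bound(values):
    """Convex AM surrogate: (mean of the f_i)^N, an upper bound on prod f_i."""
    n = len(values)
    return (sum(values) / n) ** n

# The bound holds for any positive inputs ...
samples = [[1.0, 2.0, 3.0], [0.5, 0.5], [4.0, 1.0, 0.25, 2.0]]
for vals in samples:
    assert math.prod(vals) <= am_bound(vals) + 1e-12

# ... with equality exactly when all factors coincide (AM-GM equality case).
assert math.isclose(am_bound([2.0, 2.0, 2.0]), 2.0 * 2.0 * 2.0)
```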

2. Successive Convex Approximation Methodology

The successive convex approximation framework leverages the AM-bound iteratively within a majorization–minimization scheme. Consider an optimization problem of the form

$$\min_{x \in \mathcal{X}} \; J(x) + \sum_{n=1}^{M} \prod_{i=1}^{N_n} f_{n,i}(x),$$

where $J(x)$ is convex, each $f_{n,i}(x)$ is positive and sufficiently regular, and $\mathcal{X}$ is convex and compact.

For each multiplicative term, substitute the AM upper bound:

$$\prod_{i=1}^{N_n} f_{n,i}(x) \le \left( \frac{1}{N_n} \sum_{i=1}^{N_n} f_{n,i}(x) \right)^{N_n} \equiv F_n^{\rm AM}(x).$$

At SCA iteration $k$, solve the convex surrogate problem:

$$x^{(k+1)} = \arg\min_{x \in \mathcal{X}} \; J(x) + \sum_{n=1}^{M} F_n^{\rm AM}(x).$$

Convergence to a stationary point is guaranteed under the assumptions that the $f_{n,i}(x)$ are positive smooth functions, each $\left( N_n^{-1} \sum_i f_{n,i}(x) \right)^{N_n}$ is convex, and $\mathcal{X}$ is convex and compact. The sequence of objective values is nonincreasing, and limit points satisfy first-order KKT conditions by standard SCA analysis (Qian et al., 2024).
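A one-dimensional sketch (with toy functions of my own choosing, not from the paper) shows why the scheme is useful: the AM surrogate majorizes the true objective everywhere, so minimizing the convex surrogate yields a certified upper bound on the original non-convex problem:

```python
# Toy instance (illustrative choices, not from the paper):
# J(x) = (x - 2)^2, one product term f1(x) * f2(x) with
# f1(x) = x + 1, f2(x) = x^2 + 0.5, feasible set X = [0, 4].
# Both factors are positive and convex on X, so the AM surrogate is convex.

def J(x):
    return (x - 2.0) ** 2

def f1(x):
    return x + 1.0

def f2(x):
    return x ** 2 + 0.5

def true_obj(x):
    return J(x) + f1(x) * f2(x)

def surrogate(x):
    # AM-bound with N = 2: f1 * f2 <= ((f1 + f2) / 2)^2.
    return J(x) + ((f1(x) + f2(x)) / 2.0) ** 2

# Grid scan over X = [0, 4] (a stand-in for a convex solver in this sketch).
grid = [i * 4.0 / 2000 for i in range(2001)]
assert all(surrogate(x) >= true_obj(x) - 1e-9 for x in grid)  # majorization

x_star = min(grid, key=surrogate)
# The true objective at the surrogate minimizer is bounded by the surrogate value.
assert true_obj(x_star) <= surrogate(x_star)
```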

3. Application in Quantum Source Placement

A representative use case is quantum source positioning as detailed in (Qian et al., 2024). Consider $N$ quantum nodes at locations $\{u_n\}_{n=1}^{N}$, with the objective of placing a source $q \in \mathbb{R}^2$ to minimize the total link “loss”:

$$\min_{q \in \mathbb{R}^2} \sum_{m=1}^{M} \alpha_m^{-1} \, 10^{\frac{\eta}{10} \left( \|q-u_{n_m}\| + \|q-u_{n'_m}\| \right)} \, 10^{\beta \left| \|q-u_{n_m}\| - \|q-u_{n'_m}\| \right|}.$$

Here $m$ indexes pairs of nodes $(n_m, n'_m)$, and the parameters $\alpha_m, \eta, \beta$ are fixed.

The non-convexity comes from the terms $\left| \|q-u_{n_m}\| - \|q-u_{n'_m}\| \right|$ and the products $\|q-u_{n_m}\| \cdot \|q-u_{n'_m}\|$. Writing $u_n, u_{n'}$ for $u_{n_m}, u_{n'_m}$, introduce auxiliary variables $r_m \ge \left| \|q-u_n\| - \|q-u_{n'}\| \right|$, recasting the non-convexity into the constraint

$$\|q-u_n\|^2 + \|q-u_{n'}\|^2 - 2 \, \|q-u_n\| \, \|q-u_{n'}\| \le r_m^2.$$

Apply the AM-bound in weighted form,

$$\|q-u_n\| \, \|q-u_{n'}\| \le \frac{1}{2} \left( \|q-u_n\|^2 \, y_m + \|q-u_{n'}\|^2 / y_m \right),$$

for a scalar parameter $y_m > 0$, chosen at iteration $k$ as

$$y_m^{(k)} = \frac{ \| q^{(k)} - u_{n'} \| }{ \| q^{(k)} - u_n \| },$$

the choice for which the weighted bound holds with equality at the current iterate.

The term $r_m^2$ on the right-hand side is linearized via a first-order approximation around the current iterate. The resulting convex surrogate is solved for $(q, r)$, and the iteration continues until convergence.
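The alternation between the weight update and the convex subproblem can be sketched on a one-dimensional toy problem (the functions below are illustrative assumptions, not the paper's placement objective): minimize $(x-3)^2 + f_1(x) f_2(x)$ with affine positive factors, using the weighted bound $ab \le \tfrac12(a^2 y + b^2/y)$ with $y$ refreshed at each iterate.

```python
# Toy SCA loop (illustrative, not the paper's quantum placement problem):
# minimize F(x) = (x - 3)^2 + f1(x) * f2(x) with f1(x) = x + 1, f2(x) = 2x + 0.5,
# both positive for x >= 0. Weighted AM bound: a*b <= (a^2 y + b^2 / y) / 2,
# tight at y = b / a, so the surrogate touches F at the current iterate.

def f1(x): return x + 1.0
def f2(x): return 2.0 * x + 0.5

def F(x):  # true (product-containing) objective
    return (x - 3.0) ** 2 + f1(x) * f2(x)

def G(x, y):  # convex quadratic surrogate for a fixed weight y
    return (x - 3.0) ** 2 + 0.5 * (f1(x) ** 2 * y + f2(x) ** 2 / y)

x = 0.0
objs = [F(x)]
for _ in range(30):
    y = f2(x) / f1(x)                  # weight update: bound tight at current x
    assert abs(G(x, y) - F(x)) < 1e-9  # tangency at the current iterate
    # Closed-form minimizer of the quadratic G(., y):
    # G'(x) = 2(x - 3) + (x + 1) y + 2(2x + 0.5) / y = 0
    x = (6.0 - y - 1.0 / y) / (2.0 + y + 4.0 / y)
    objs.append(F(x))

# Majorization-minimization guarantees a nonincreasing objective sequence.
assert all(b <= a + 1e-12 for a, b in zip(objs, objs[1:]))
```

At the fixed point the surrogate and true gradients coincide, so the iterates settle at a stationary point of $F$ (here $x = 3.5/6$, since this toy $F$ happens to be a convex quadratic).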

4. Properties and Theoretical Guarantees

The SCA/AM methodology retains convexity of every subproblem, ensuring efficient solvability by off-the-shelf convex solvers. Under the maintained assumptions (convex or affine $f_i(x)$, positivity, and a compact convex feasible set), convergence to stationary points of the original problem is ensured. Each surrogate is tangent to the true objective at the current iterate and majorizes it elsewhere, guaranteeing a monotonic decrease of the sequence of objective values (Qian et al., 2024).
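For the two-factor weighted bound used in the placement application, both majorization and tangency can be verified in one line (a direct check, writing $a = \|q - u_n\|$ and $b = \|q - u_{n'}\|$):

```latex
\frac{1}{2}\left(a^2 y + \frac{b^2}{y}\right) - ab
  \;=\; \frac{1}{2y}\left(a y - b\right)^2 \;\ge\; 0,
\qquad \text{with equality iff } y = \frac{b}{a},
```

so choosing the weight from the current iterate makes the surrogate touch the true term there while upper-bounding it everywhere else.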

5. Numerical Performance and Comparative Insights

In quantum source placement, empirical results demonstrate monotonic reduction of the objective at each SCA iteration, with typical convergence in fewer than 20 iterations. The final placement consistently achieves strictly smaller maximum pairwise entanglement loss than SCA schemes based solely on first-order Taylor approximations, at comparable per-iteration computational cost. Baseline comparisons with random placements and direct gradient descent show that the AM-bound SCA consistently attains both lower objectives and greater placement accuracy (Qian et al., 2024).

6. Broader Contexts and Extensions

The approach generalizes to functions involving multiplicative or fractional structure across numerous communication network optimization scenarios, including resource allocation, power control, and transmission energy minimization, whenever product terms in objectives or constraints induce fundamental non-convexity. The method’s reliance on classical inequalities (AM, GM, QM, HM) renders it amenable to further extensions, including adaptive surrogate selection and broader classes of non-convexity. The flexibility of SCA—alternating between auxiliary parameter updates and convex subproblem solutions—underpins its effectiveness in high-dimensional element placement and related system design problems (Qian et al., 2024).
