Alternating Direction Method of Multipliers

Updated 26 January 2026
  • ADMM is a class of operator splitting algorithms that decompose complex, constrained optimization problems into simpler subproblems for efficient resolution.
  • The classical formulation employs augmented Lagrangian techniques with iterative updates of primal and dual variables, yielding convergence guarantees in convex settings and, under additional conditions, in nonconvex ones.
  • Advanced variants, including distributed, sensitivity-assisted, and adaptive methods, enhance convergence speed and robustness in diverse applications such as statistical learning and image science.

The Alternating Direction Method of Multipliers (ADMM) is a class of operator-splitting algorithms designed for large-scale linearly constrained optimization problems, both convex and nonconvex. ADMM combines decomposability with strong convergence guarantees and is foundational in distributed, parallel, and nonconvex optimization. Its modern incarnations underpin a variety of fields, including distributed control, statistical learning, image science, and inverse problems.

1. Classical ADMM: Formulation and Update Rules

ADMM targets constrained problems where the objective splits naturally into subproblems that can be optimized independently. The canonical template is

\min_{x,z} \quad f(x) + g(z) \quad \text{subject to} \quad Ax + Bz = c,

where $f$ and $g$ are proper, closed (possibly nonsmooth) functions, and $A$, $B$, and $c$ encode the linear constraints.

The augmented Lagrangian is

L_\rho(x,z,y) = f(x) + g(z) + y^T(Ax + Bz - c) + \frac{\rho}{2}\|Ax + Bz - c\|^2,

with penalty parameter $\rho > 0$ and dual multiplier $y$.

The classical two-block ADMM iterates:

  • $x$-update:

x^{k+1} = \arg\min_x L_\rho(x, z^k, y^k)

  • $z$-update:

z^{k+1} = \arg\min_z L_\rho(x^{k+1}, z, y^k)

  • Dual update:

y^{k+1} = y^k + \rho(Ax^{k+1} + Bz^{k+1} - c)

This structure extends to problems with more than two block variables, where either grouped updates (super-blocks) or Gauss–Seidel (blockwise sweeps) are used (Hong et al., 2014).
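The two-block template and updates above can be sketched concretely. The example below (an illustrative problem, not taken from the cited papers) applies ADMM to $\min_{x,z} \tfrac12\|x-a\|^2 + \lambda\|z\|_1$ subject to $x - z = 0$, i.e. $A = I$, $B = -I$, $c = 0$, so both subproblems admit closed forms:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(a, lam=1.0, rho=1.0, iters=300):
    """Two-block ADMM for  min_{x,z} 0.5||x - a||^2 + lam*||z||_1
    s.t. x - z = 0  (A = I, B = -I, c = 0 in the template above)."""
    x = z = y = np.zeros_like(a)
    for _ in range(iters):
        # x-update: argmin_x 0.5||x - a||^2 + y^T(x - z) + (rho/2)||x - z||^2
        x = (a - y + rho * z) / (1.0 + rho)
        # z-update: prox of (lam/rho)*||.||_1 at x + y/rho
        z = soft_threshold(x + y / rho, lam / rho)
        # dual update
        y = y + rho * (x - z)
    return z
```

For this strongly convex toy problem the minimizer is the elementwise soft-threshold of $a$ at level $\lambda$, which the iterates approach geometrically.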

2. Convergence Theory: Convex and Nonconvex Regimes

Convex Case

For convex ff and gg, the iterates converge to the primal–dual solution under mild regularity (closed, proper, convex objectives; Slater’s condition; primal–dual solution exists):

  • For standard ADMM with dual step length $\tau \in (0, \varphi)$ (with $\varphi$ the golden ratio), global convergence is assured. The convergence proof uses a Lyapunov function that telescopes with each iteration (Gu et al., 2020).
  • In special cases (one function linear, or both quadratic with compatible spectra), $\tau$ can be pushed to $2$ (Gu et al., 2020).
  • The ergodic (averaged-iterate) convergence rate is $O(1/k)$ in objective value and constraint residual for both classical and parallel/distributed variants (Liu et al., 2021, Zhong et al., 2013).

Nonconvex Case

In nonconvex consensus and sharing problems of the form

\min \sum_{k=1}^K g_k(x_k) + h(x_0) \quad \text{s.t. } x_k - x_0 = 0,

or

\min \sum_{k=1}^K g_k(x_k) + \ell\left(\sum_{k=1}^K A_k x_k\right) \quad \text{s.t. } x_k \in X_k,

with certain regularity (Lipschitz gradients for $g_k$ and $\ell$; convex constraint sets; $h$ convex), ADMM converges to stationary points if the penalty parameter $\rho$ is sufficiently large that each block subproblem is strongly convex and $\rho \cdot (\text{strong-convexity modulus}) > 2L_k^2$ (with $L_k$ the block Lipschitz constant) (Hong et al., 2014). No a priori boundedness of the iterates is required. For the sharing form, convergence holds regardless of the number of blocks.
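A minimal sketch of the consensus form is given below, with the illustrative choices $g_k(x) = \tfrac12\|x - b_k\|^2$ and $h = \lambda\|\cdot\|_1$ (assumptions made here so that every subproblem has a closed form; the general setting of the text allows much broader $g_k$ and $h$):

```python
import numpy as np

def soft(v, t):
    """Prox of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_admm(b, lam=1.5, rho=2.0, iters=600):
    """Consensus ADMM for  min sum_k 0.5||x_k - b_k||^2 + lam*||x0||_1
    s.t. x_k - x0 = 0.  b is a (K, n) array of per-agent data."""
    K, n = b.shape
    x = np.zeros((K, n))
    y = np.zeros((K, n))
    x0 = np.zeros(n)
    for _ in range(iters):
        # x0-update: prox of (lam/(K*rho))*||.||_1 at mean_k(x_k + y_k/rho)
        x0 = soft((x + y / rho).mean(axis=0), lam / (K * rho))
        # x_k-updates (parallel across k; closed form for quadratic g_k)
        x = (b - y + rho * x0) / (1.0 + rho)
        # dual updates, one multiplier per consensus constraint
        y = y + rho * (x - x0)
    return x0
```

With these choices the consensus solution is the soft-threshold of the data mean $\bar b$ at level $\lambda/K$, which makes the sketch easy to verify.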

Three-Block and Generalized Settings

With nonconvex, nonsmooth proximal terms or multi-block decompositions (e.g., in background/foreground extraction), global sequence convergence to stationary points is established for dual stepsizes below the golden ratio and sufficiently large penalties, provided a KL-property is satisfied for the constructed potential function (Yang et al., 2015).

3. Advanced Variants: Distributed, Parallel, and Adaptive ADMM

Distributed and Parallel ADMM

ADMM can be fully distributed over a network graph, where each agent activates in parallel with only local and neighboring communications. For consensus optimization,

\min_{x_1,\dots,x_n} \sum f_i(x_i) \quad \text{s.t. } x_i = x_j,\ \forall (i,j)\in \mathcal{E},

the distributed parallel ADMM algorithm assigns one dual variable per edge and performs parallel proximal updates at each local agent, with an extra proximal term for stability. Convergence is $O(1/k)$ under convexity and graph connectivity (Liu et al., 2021). A practical guideline is to choose the proximal regularization parameter $T \ge 3$ to ensure strong convexity of the subproblems.

Sensitivity-Assisted and Inertial ADMM

To reduce local NLP subproblem solve time, sensitivity-assisted ADMM uses first-order parametric sensitivity (Jacobians of the KKT point mapping) to compute a tangential predictor step. With appropriate thresholds, this yields orders-of-magnitude per-iteration speedup with asymptotic convergence to approximate KKT points of the global problem (Krishnamoorthy et al., 2020).

Inertial ADMM incorporates momentum-type updates derived from inertial Douglas–Rachford splitting, enabling acceleration (reduced iteration counts) without sacrificing convergence for bounded inertial parameters ($\alpha < 1$), with strong convergence possible when strong convexity is present (Yang et al., 2020).

Adaptive and Variable Step-Size ADMM

Adaptive ADMM variants decouple penalty parameters or generalize step sizes per block/subproblem, tuning them based on convexity/strong-convexity constants to maximize contraction per step (Bartz et al., 2021). Monotonicity of residuals and convergence are robust to variable and nonincreasing penalty sequences, with sublinear rate $O(1/k)$, or linear rate under additional strong convexity (Bartels et al., 2017).
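One widely used way to adapt the penalty in practice is residual balancing: increase $\rho$ when the primal residual dominates the dual residual, and decrease it in the opposite case. The sketch below illustrates this generic heuristic on a toy $\ell_1$ problem; it is not the specific adaptive scheme of Bartz et al. (2021) or the variable-step-size analysis of Bartels et al. (2017):

```python
import numpy as np

def admm_adaptive_rho(a, lam=1.0, rho=1.0, iters=300, mu=10.0, tau=2.0):
    """ADMM for  min 0.5||x - a||^2 + lam*||z||_1  s.t. x = z,
    with residual balancing: rho grows by factor tau when the primal
    residual exceeds mu times the dual residual, and shrinks otherwise.
    The unscaled multiplier y needs no rescaling when rho changes."""
    x = z = y = np.zeros_like(a)
    for _ in range(iters):
        x = (a - y + rho * z) / (1.0 + rho)
        z_old = z
        v = x + y / rho
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        y = y + rho * (x - z)
        r = np.linalg.norm(x - z)               # primal residual ||Ax + Bz - c||
        s = np.linalg.norm(rho * (z - z_old))   # dual residual
        if r > mu * s:
            rho *= tau   # penalize infeasibility harder
        elif s > mu * r:
            rho /= tau
    return z
```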

4. Non-Euclidean and Reweighted Extensions

ADMM has been generalized to Bregman ADMM (BADMM), replacing the quadratic penalty with general Bregman divergences and improving iteration-complexity constants from O(n) (Euclidean) to O(log n) in, e.g., simplex-structured problems (KL divergence). BADMM is particularly advantageous for large-scale, parallel optimal-transport and network-flow-type problems (Wang et al., 2013). Variants with carefully chosen Bregman kernels (e.g., half-shrinkage for nonconvex regularizers) admit convergence under sub-analytic/Kurdyka–Łojasiewicz regimes (Wang et al., 2014).

For nonconvex problems with separable composite structure $f(x)+\sum_i g(h(y_i))$, an Iteratively Linearized Reweighted ADMM replaces the nonconvex $g(h(y))$ with a linear surrogate at each iterate, convexifying the subproblems and achieving global convergence via KL-property arguments (Sun et al., 2017).
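The linearized-reweighting idea can be illustrated on a simplified instance (assumptions: $h = |\cdot|$, $g(t) = \log(1 + t/\varepsilon)$, and a quadratic data term, so each convexified subproblem is solvable in closed form without an inner ADMM split):

```python
import numpy as np

def reweighted_l1(b, lam=0.5, eps=0.1, outer=30):
    """Iteratively linearized reweighting sketch for
        min_y 0.5||y - b||^2 + lam * sum_i log(1 + |y_i|/eps).
    Each outer step replaces the concave log penalty by its linearization
    at the current iterate, giving a weighted-l1 subproblem solved by
    elementwise soft-thresholding."""
    y = b.copy()
    for _ in range(outer):
        # w_i = g'(h(y_i)): gradient of log(1 + t/eps) at t = |y_i|
        w = 1.0 / (eps + np.abs(y))
        # convex surrogate subproblem: min 0.5||y - b||^2 + lam*sum_i w_i|y_i|
        y = np.sign(b) * np.maximum(np.abs(b) - lam * w, 0.0)
    return y
```

Small entries acquire large weights and are driven exactly to zero, while large entries are barely penalized, which is the intended effect of the reweighting.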

5. Theoretical Analysis: Lyapunov, ODE, and Performance Estimation Frameworks

High-resolution ODE analysis clarifies that the continuous-time limit of ADMM incorporates a Lagrangian correction off the constraint manifold (the "λ-correction"), with rate-optimal O(1/N) convergence in the strongly convex case; the numerical error construction in implicit-discretization is crucial for damped monotonicity (Li et al., 2024). Such ODE insights guide step-size/penalty parameter selection and point to potential momentum-acceleration schemes.

Performance estimation frameworks show that, in general convex settings, global convergence of ADMM is guaranteed for dual step sizes less than the golden ratio $\varphi$, but cannot be extended beyond it without loss of Lyapunov monotonicity, unless additional problem structure is present (e.g., linear or quadratic blocks) (Gu et al., 2020). Explicit counterexamples show that monotonic decrease of the classical Lyapunov function fails for larger step sizes.

6. Practical Implementation Guidelines and Applications

Penalty Parameter and Step Size Selection

  • Nonconvex consensus/sharing problems: for each block $k$, select $\rho_k$ so the subproblem is strongly convex, with $\rho_k \cdot (\text{strong-convexity modulus}) > 2L_k^2$ and $\rho_k \ge L_k$ (Hong et al., 2014).
  • Multi-block or harder nonconvex settings: larger penalties (tens to hundreds of times the Lipschitz constant) stabilize updates and guarantee convergence.
  • Adaptive or variable step sizes should be nonincreasing unless reinitialized, to enforce contraction (Bartels et al., 2017).
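In implementations, these parameter choices are usually paired with a residual-based stopping test. The recipe below is a standard practical one; the tolerance constants are illustrative, not taken from the cited works:

```python
import numpy as np

def stop_admm(A, B, c, x, z, z_old, y, rho, eps_abs=1e-4, eps_rel=1e-3):
    """Residual-based stopping test for the constraint Ax + Bz = c.
    Terminates when both the primal residual r and the dual residual s
    fall below mixed absolute/relative tolerances."""
    r = A @ x + B @ z - c                   # primal residual
    s = rho * (A.T @ (B @ (z - z_old)))     # dual residual
    eps_pri = np.sqrt(len(c)) * eps_abs + eps_rel * max(
        np.linalg.norm(A @ x), np.linalg.norm(B @ z), np.linalg.norm(c))
    eps_dual = np.sqrt(x.size) * eps_abs + eps_rel * np.linalg.norm(A.T @ y)
    return np.linalg.norm(r) <= eps_pri and np.linalg.norm(s) <= eps_dual
```

The relative terms scale the tolerances with the magnitude of the iterates, so one setting works across problem scales.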

Application Domains

  • Distributed power system dispatch, where ADMM is run on networked agents without central coordination; convergence is rapid (on the order of milliseconds) for large-scale IEEE benchmark systems (Wasti et al., 2020).
  • Inverse problems in Hilbert spaces (deblurring, deconvolution): ADMM separates into linear and proximal substeps, with simple noise-aware stopping rules supporting regularization (Jiao et al., 2016).
  • Polynomial optimization (POPs): Variable-splitting and block-separable ADMM enable decentralized, parallelizable local optimization for large-scale POPs (Cerone et al., 2025).
  • Weighted Fused LASSO: Specialized ADMM significantly reduces per-iteration complexity over path-following algorithms (from O(p⁴) to O(p²)) for large-scale generalized LASSO settings (Dijkstra et al., 2024).

7. Algorithmic Equivalence and Formulation Choice

ADMM admits numerous equivalent "ADM-ready" formulations per problem (primal, dual, saddle-point, update order, number/type of variables). Under broad assumptions, most variants are provably equivalent up to affine change of variables, except in three-block or special nonconvex/nonsmooth settings. Practical guideline: select a formulation for which the subproblems admit efficient closed-form or well-conditioned updates (Yan et al., 2014).


References:

  • "Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems" (Hong et al., 2014)
  • "A Distributed Parallel Optimization Algorithm via Alternating Direction Method of Multipliers" (Liu et al., 2021)
  • "On the dual step length of the alternating direction method of multipliers" (Gu et al., 2020)
  • "Alternating Direction Method of Multipliers for Linear Inverse Problems" (Jiao et al., 2016)
  • "Iteratively Linearized Reweighted Alternating Direction Method of Multipliers for a Class of Nonconvex Problems" (Sun et al., 2017)
  • "Bregman Alternating Direction Method of Multipliers" (Wang et al., 2013)
  • "Convergence of Bregman alternating direction method with multipliers for nonconvex composite problems" (Wang et al., 2014)
  • "Fast Stochastic Alternating Direction Method of Multipliers" (Zhong et al., 2013)
  • "Adaptive Stochastic Alternating Direction Method of Multipliers" (Zhao et al., 2013)
  • "Self Equivalence of the Alternating Direction Method of Multipliers" (Yan et al., 2014)
  • "Alternating direction method of multipliers with variable step sizes" (Bartels et al., 2017)
  • "Distributed Dynamic Economic Dispatch using Alternating Direction Method of Multipliers" (Wasti et al., 2020)
  • "Alternating direction method of multipliers for convex programming: a lift-and-permute scheme" (Li et al., 2022)
  • "An Adaptive Alternating Direction Method of Multipliers" (Bartz et al., 2021)
  • "Alternating direction method of multipliers for polynomial optimization" (Cerone et al., 3 Feb 2025)
  • "An Alternating Direction Method of Multipliers Algorithm for the Weighted Fused LASSO Signal Approximator" (Dijkstra et al., 2024)
  • "Understanding the ADMM Algorithm via High-Resolution Differential Equations" (Li et al., 2024)
  • "Alternating Direction Method of Multipliers for A Class of Nonconvex and Nonsmooth Problems with Applications to Background/Foreground Extraction" (Yang et al., 2015)
  • "Sensitivity Assisted Alternating Directions Method of Multipliers for Distributed Optimization and Statistical Learning" (Krishnamoorthy et al., 2020)
  • "An inertial alternating direction method of multipliers for solving a two-block separable convex minimization problem" (Yang et al., 2020)