
Disciplined Convex–Concave Programming

Updated 17 January 2026
  • Disciplined Convex–Concave Programming (DCCP) is a framework that models nonconvex problems as the difference of convex functions.
  • It employs iterative linearization of concave components via the convex–concave procedure to convert nonconvex constraints into tractable convex subproblems.
  • DCCP is implemented in Python, MATLAB, and Julia, enabling applications in sparse recovery, risk-averse control, and neural network training.

Disciplined Convex–Concave Programming (DCCP) is a computational and modeling framework that generalizes disciplined convex programming (DCP) to encompass nonconvex problems with difference-of-convex (DC) structure. DCCP presents a systematic, grammar-driven approach for formulating and heuristically solving nonconvex programs by leveraging compositions of convex and concave atomic functions while preserving key algorithmic properties and automatic conversion to standard convex solver interfaces. The paradigm is realized in practical toolchains for Python, MATLAB, and Julia, most notably through extensions of CVXPY.

1. Problem Classes and Mathematical Formulation

DCCP targets difference-of-convex programming, where the objective and constraint functions decompose as the difference of two proper, closed, convex functions. The canonical form is

$\begin{array}{ll} \text{minimize}_{x\in\mathbb{R}^n} & f_0(x) - g_0(x) \\ \text{subject to} & f_i(x) - g_i(x) \le 0, \quad i=1,\dots,m, \end{array}$

where $f_i, g_i:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ are convex, proper, and closed, and are typically differentiable on the interior of their domains (Shen et al., 2016).

When all $g_i$ are affine, the model reduces to convex programming; otherwise, the problem is generically nonconvex, and global optimization is generally intractable. Many practical problems admit a DC reformulation, including Boolean quadratic programming, $\ell_{1/2}$ sparse recovery, constrained risk-averse MDPs, and the training of neural models with morphological or lattice-based activations (Shen et al., 2016, Cunha et al., 2024, Ahmadi et al., 2020).
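As a concrete illustration (a standard textbook-style example, not drawn from the cited papers), the nonconvex "reverse-norm" constraint $\|x\|_2 \ge 1$ fits the canonical form with $f(x) = 1$ and $g(x) = \|x\|_2$:

$f(x) - g(x) = 1 - \|x\|_2 \le 0,$

where $g$ is convex; linearizing it at a point $x_k \ne 0$ yields the convex surrogate constraint $1 - \|x_k\|_2 - \tfrac{x_k^T}{\|x_k\|_2}(x - x_k) \le 0$.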

2. Disciplined Convex Programming (DCP) and Its Extensions

The DCP paradigm provides a structured set of grammar rules for problem construction. Atomic functions are tagged as convex, concave, or affine, and recursive composition rules enforce curvature preservation. Specifically:

  • If $f$ is convex and nondecreasing in its $i$-th argument, the $i$-th inner expression must be convex; if $f$ is nonincreasing in that argument, the inner expression must be concave; the analogous rules hold for concave $f$.
  • All problem objects (objective, constraints) are required to be DCP-compliant expressions (Shen et al., 2016).

DCCP extends DCP by permitting both convex and concave expressions in objectives and constraints, as long as each term is individually DCP-conformant. The nonconvexity is introduced only at the level of combining these through difference-of-convex structures (Juditsky et al., 2021). This allows the encoding and automated convexification of many nonconvex programs while preserving DCP's automatic rewriting to cone programs and compatibility with generic solvers.
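To make the composition rule concrete, the following pure-Python sketch (a hypothetical helper, not part of CVXPY's actual implementation) infers the curvature of a composition $f(\text{inner})$ from the outer atom's tagged curvature and per-argument monotonicity:

```python
# Minimal sketch of the scalar DCP composition rule: the curvature of
# f(inner) follows from f's curvature and monotonicity and inner's curvature.

def compose_curvature(outer_curv, outer_mono, inner_curv):
    """outer_curv/inner_curv in {'convex', 'concave', 'affine'};
    outer_mono in {'nondecreasing', 'nonincreasing'}."""
    if inner_curv == "affine":
        return outer_curv  # an affine inner expression preserves outer curvature
    if outer_curv == "convex":
        ok = (outer_mono == "nondecreasing" and inner_curv == "convex") or \
             (outer_mono == "nonincreasing" and inner_curv == "concave")
        return "convex" if ok else "unknown"
    if outer_curv == "concave":
        ok = (outer_mono == "nondecreasing" and inner_curv == "concave") or \
             (outer_mono == "nonincreasing" and inner_curv == "convex")
        return "concave" if ok else "unknown"
    return "unknown"

# exp(x^2): exp is convex, nondecreasing; x^2 is convex -> verifiably convex
print(compose_curvature("convex", "nondecreasing", "convex"))   # convex
# sqrt(x^2): sqrt is concave, nondecreasing; x^2 is convex -> not verifiable
print(compose_curvature("concave", "nondecreasing", "convex"))  # unknown
```

The `"unknown"` result is what DCP rejects; DCCP relaxes the paradigm at the level of objectives and constraints, not at the level of these per-expression rules.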

3. Solution Strategy: Convex–Concave Procedure (CCP) and Algorithmic Enhancements

DCCP solves a DC-formulated problem by iterated majorization-minimization, known as the convex–concave procedure (CCP):

  1. At iteration $k$, all concave terms $g_i$ in the objective and constraints are linearized via first-order Taylor expansion around $x_k$:

$\hat{g}_i(x;x_k) = g_i(x_k) + \nabla g_i(x_k)^T(x-x_k) - \mathbb{I}_{\mathcal{D}_i}(x)$

where $\mathbb{I}_{\mathcal{D}_i}$ is the indicator of the domain of $g_i$ (Shen et al., 2016).

  2. The resulting subproblem is convex and DCP-compliant; it is solved to yield a candidate point $\hat{x}_k$.
  3. If some $g_i$ lacks a subgradient at $\hat{x}_k$ (as can occur at the boundary of a domain), a damped update keeps the iterates feasible:

$x_{k+1} = \alpha \hat{x}_k + (1-\alpha) x_k, \quad 0<\alpha<1$

  4. Iterations proceed, with an increasing penalty parameter on slack variables (to enforce feasibility), until stationarity or another stopping criterion is met.

This algorithm monotonically decreases the (penalized) objective value and generically converges to a stationary (KKT) point, but it offers no global optimality certificate because the landscape may be nonconvex (Shen et al., 2016).
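The iteration can be sketched on a toy scalar problem. The following is a hand-rolled CCP loop, not the dccp package's implementation (the slack-variable penalty mechanism is omitted), applied to $f(x) - g(x)$ with $f(x) = x^2$ and $g(x) = 4|x|$, whose two local minima sit at $x = \pm 2$:

```python
import numpy as np

# Hand-rolled CCP on the scalar DC objective f(x) - g(x), with
# f(x) = x^2 (convex) and g(x) = 4|x| (convex), so f - g is nonconvex.
# Each step linearizes g at x_k (subgradient 4*sign(x_k)) and minimizes
# the convex surrogate x^2 - 4*sign(x_k)*x in closed form, then applies
# the damped update x_{k+1} = alpha*x_hat + (1 - alpha)*x_k.

def ccp(x0, alpha=1.0, tol=1e-8, max_iter=100):
    x = x0
    for _ in range(max_iter):
        slope = 4.0 * np.sign(x)                  # subgradient of g at x_k
        x_hat = slope / 2.0                       # argmin_x of x^2 - slope*x
        x_new = alpha * x_hat + (1 - alpha) * x   # damped update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_star = ccp(x0=1.0)
print(x_star, x_star**2 - 4 * abs(x_star))  # local minimum x = 2, value -4
```

Note the initialization sensitivity typical of CCP: starting at any negative $x_0$ converges to the other local minimum, $x = -2$, with the same objective value.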

Key domain-specific improvements of DCCP over classical CCP include:

  • Strict handling of variable domains via indicators, ensuring all iterates are valid for potentially partial-domain concave atoms.
  • Robustness against boundary nondifferentiability using damped updates.

4. Implementation in Software and Modeling Workflows

The DCCP procedure is implemented as a high-level extension to CVXPY, with core API components:

  • is_dccp(problem) for checking DCCP compliance of a problem.
  • expression.domain and expression.gradient to extract DCP-representable domains and atom gradients.
  • linearize(expr) for domain-aware linearization.
  • Problem.solve(method='dccp') to invoke the penalty CCP algorithm (Shen et al., 2016).

A typical workflow involves problem declaration in Python using DCP-compliant atomic functions, after which the DCCP solver is called without manual transformation of the problem. All heavy lifting—linearization, domain handling, conic conversion, iterative procedure—is automated.

The modeling abstraction allows DCCP to be applied directly in a variety of scientific and engineering optimization tasks, including DC-recast learning problems, sparsity-promoting reconstruction, and saddle-point reformulations (Shen et al., 2016, Cunha et al., 2024, Ahmadi et al., 2020, Juditsky et al., 2021).

5. Representative Applications

DCCP has been applied and empirically benchmarked across diverse domains:

  • Boolean least squares: Maximum-likelihood estimation of binary signals under linear mixing and Gaussian noise. The DCCP framework enables direct handling of the nonconvex quadratic equality constraints, and numerical experiments show close agreement with globally optimal solutions in moderate dimensions (Shen et al., 2016).
  • $\ell_{1/2}$ sparse recovery: Nonconvex sparsity regularization with a concave penalty. DCCP recovers sparse signals more reliably than the convex $\ell_1$ heuristic (Shen et al., 2016).
  • Risk-averse constrained Markov decision processes: DC reformulation of coherent-risk Bellman recursions, tractably handled by DCCP. Cases such as CVaR and EVaR risk measures result in DC-programs handled transparently through the DCCP modeling interface (Ahmadi et al., 2020).
  • Training of morphological perceptrons: Nonconvex constraints arising in single-layer and multi-dendrite morphological neural models are naturally represented and solved via DCCP, with weighted variants (WDCCP) used to penalize outliers (Cunha et al., 2024). This enables the construction and training of nonlinear classifiers with piecewise-hyperbox decision boundaries.

6. Mathematical Extensions and Generalizations

Recent work generalizes DCCP methodologies to encompass structured convex–concave saddle-point problems and variational inequalities with monotone operators. The "K-conic" representation (involving a finite family of regular cones) enables algorithmic reduction of well-structured convex–concave programs to standard conic forms, solvable by off-the-shelf conic solvers such as MOSEK or SDPT3 (Juditsky et al., 2021). The framework supports automatic recognition and translation of a broad class of problems—including Fenchel conjugates, robust optimization, and monotone variational inequalities—into unified conic programs by exploiting the closure properties of K-representable sets and functions and an explicit calculus for composition and combination.

7. Limitations and Research Directions

DCCP yields convergence only to local stationary points, with no global optimality guarantees. The method is initialization-sensitive; highly nonconvex objectives may trap the procedure in poor local minima. Open research lines include:

  • Improved initialization and multi-start strategies.
  • Incorporation of second-order information in subproblem linearization.
  • Adaptive penalty and damping schedules.
  • Hybrid heuristics combining global and local search.
  • Complexity analyses and randomized procedures to enhance convergence properties.
  • Integration with mixed-integer optimization and large-scale stochastic settings (Shen et al., 2016, Cunha et al., 2024).

8. Summary of Framework and Impact

Disciplined Convex–Concave Programming unifies the modeling transparency and structure of DCP with the flexibility of convex–concave decomposition. It enables a range of nonconvex problems to be heuristically addressed within mature convex-optimization toolchains, facilitating research and development in machine learning, signal processing, statistics, risk-aware control, and beyond. The algorithmic, software, and modeling innovations of DCCP are realized in open-source packages and are actively deployed in current research literature (Shen et al., 2016, Cunha et al., 2024, Ahmadi et al., 2020, Juditsky et al., 2021).
