
Strongly Log-Concave Priors

Updated 16 January 2026
  • Strongly log-concave priors are probability measures whose negative log-density is uniformly strongly convex, ensuring powerful concentration and functional inequalities.
  • They underpin enhanced Bayesian Cramér–Rao bounds and provide geometric ergodicity for MCMC algorithms, leading to rapid mixing in high-dimensional settings.
  • Their closure under linear transformations, products, and convolutions makes them versatile for both continuous and combinatorial models in advanced inference.

A strongly log-concave prior is a probability measure on $\mathbb{R}^d$ (or a discrete space) whose negative log-density is uniformly strongly convex: for a continuous prior with density $p(x) = e^{-V(x)}$, this means the Hessian satisfies $\nabla^2 V(x) \succeq K I_d$ for all $x$ and some $K > 0$. Strong log-concavity strengthens the usual log-concavity condition (where $V$ is merely convex) and guarantees powerful functional inequalities, concentration properties, and algorithmic benefits for Bayesian inference and generative modeling.
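As a minimal numerical sketch (an illustration, not drawn from the cited papers), the Hessian condition can be checked directly for a Gaussian prior, whose negative log-density has constant Hessian $\Sigma^{-1}$:

```python
import numpy as np

# Minimal sketch (hypothetical example): for a Gaussian prior
# p(x) ∝ exp(-(1/2)(x - mu)^T Sigma^{-1} (x - mu)), the negative
# log-density V has constant Hessian Sigma^{-1}, so the SLC constant
# K is the smallest eigenvalue of Sigma^{-1}.
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
hessian_V = np.linalg.inv(Sigma)         # ∇²V(x) = Σ⁻¹ for every x
K = np.linalg.eigvalsh(hessian_V).min()  # uniform strong-convexity constant

assert K > 0  # ∇²V ⪰ K·I with K > 0, so this prior is strongly log-concave
```

For non-Gaussian densities the Hessian varies with $x$, and the same check becomes an infimum over the support, as in the verification checklist below.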

1. Mathematical Characterizations and Equivalent Conditions

In the continuous setting, a prior $p \in SLC_1(\sigma^2, d)$ if $-\log p(x)$ is strongly convex with parameter $1/\sigma^2$, i.e.,

$$\nabla^2(-\log p)(x) \succeq \frac{1}{\sigma^2} I, \quad \forall x \in \mathbb{R}^d.$$

More generally, for covariance $\Sigma$, $p \in SLC_2(\mu, \Sigma, d)$ if

$$\nabla^2(-\log p)(x) \succeq \Sigma^{-1}.$$

Analogous conditions hold in the discrete setting, especially for priors on subset selection or combinatorial supports. For an $r$-homogeneous prior over the Boolean lattice with generating polynomial $g_\pi(x)$, the strong log-concavity condition is that all partial derivatives of $g_\pi$ are log-concave at $x = \mathbf{1}$ (Saumard et al., 2014, Cryan et al., 2019).

Equivalent conditions include:

  • Strong monotonicity of the “relative-score” map;
  • the midpoint super-Gaussian inequality;
  • preservation under affine transformations, products, and convolutions.

2. Functional Inequalities and Concentration Properties

Strongly log-concave priors satisfy powerful concentration results. For $p$ with $\nabla^2(-\log p)(x) \succeq c I$, one obtains sub-Gaussian tails and sharp $L^2$ bounds:

$$\mathrm{Var}_p(f) \le \frac{1}{c} \mathbb{E}_p \|\nabla f\|^2, \quad \mathrm{Ent}_p(f^2) \le \frac{1}{c} \mathbb{E}_p \|\nabla f\|^2,$$

yielding a spectral gap and a log-Sobolev inequality with constant $c$ (Saumard et al., 2014).
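The variance (Poincaré) bound can be illustrated with a small Monte Carlo sketch, assuming a standard Gaussian target (SLC constant $c = 1$) and an arbitrary smooth test function:

```python
import numpy as np

# Monte Carlo sketch of the Poincaré inequality Var_p(f) ≤ (1/c)·E_p‖∇f‖²
# for the standard Gaussian (SLC constant c = 1) and the arbitrary
# smooth test function f(x) = sin(x). Sample size is illustrative.
rng = np.random.default_rng(0)
c = 1.0
x = rng.standard_normal(200_000)

f = np.sin(x)
grad_f = np.cos(x)

lhs = f.var()                         # Var_p(f)
rhs = (1.0 / c) * np.mean(grad_f**2)  # (1/c)·E_p‖∇f‖²
assert lhs <= rhs
```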

For discrete $r$-homogeneous SLC priors, a modified log-Sobolev inequality holds for the bases-exchange Markov chain, with constant $\rho \ge 1/r$. Concentration of Lipschitz observables follows:

$$\Pr[|f - \mathbb{E}[f]| \ge a] \le 2 \exp\!\left(-\frac{a^2}{2 r c^2}\right),$$

where $c$ is the Lipschitz constant (Cryan et al., 2019).

3. Implications for Bayesian Cramér–Rao Bounds

Strongly log-concave priors enable sharper, dimension-explicit Bayesian Cramér–Rao bounds. For a prior density $\pi(\theta) = e^{-V(\theta)}$ with $\mathrm{Hess}\, V \succeq K I$,

$$\mathbb{E}(\theta - \hat\theta)^2 \ge \frac{c}{I}$$

for $c \approx 0.54$ in the scalar case, where $I$ is the Fisher information of the likelihood. Crucially, the bound is independent of the prior's Fisher information $\mathcal{J}(\pi)$, replacing it with the convexity constant $K$ (Aras et al., 2019). This yields robust minimum-variance guarantees for any estimator, biased or not, and removes the technical regularity requirements that typically restrict the classical van Trees inequality.
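A toy conjugate-Gaussian sketch (with hypothetical parameters chosen purely for illustration) compares the MSE of the posterior mean against the $c/I$ lower bound:

```python
import numpy as np

# Toy conjugate-Gaussian sketch (hypothetical parameters): prior
# θ ~ N(0, 1/K) with convexity constant K, likelihood y = θ + N(0, σ²),
# so the likelihood Fisher information is I = 1/σ². The posterior-mean
# MSE is compared against the c/I lower bound with c ≈ 0.54.
rng = np.random.default_rng(1)
n = 200_000
K = 1.0
sigma2 = 0.5

theta = rng.normal(0.0, np.sqrt(1.0 / K), n)
y = theta + rng.normal(0.0, np.sqrt(sigma2), n)

# Posterior mean: the MSE-optimal estimator in this conjugate model
theta_hat = y * (1.0 / sigma2) / (K + 1.0 / sigma2)

mse = np.mean((theta - theta_hat) ** 2)
bound = 0.54 * sigma2   # c / I
assert mse >= bound
```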

4. Sampling Algorithms and MCMC Mixing

Strong log-concavity ensures geometric ergodicity and rapid mixing for likelihood-targeted MCMC algorithms, notably overdamped Langevin dynamics and the Metropolis-adjusted Langevin algorithm (MALA). Mixing times scale polynomially with dimension, and explicit bounds in terms of the convexity parameter $c$ can be given:

$$\mathrm{TV}(p_t, p^*) \le O(e^{-ct}),$$

where $p_t$ is the law of the MCMC iterate and $p^*$ the stationary SLC target (Saumard et al., 2014, Guth et al., 2023).
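A minimal sketch of overdamped (unadjusted) Langevin dynamics on an assumed one-dimensional SLC target illustrates this geometric contraction:

```python
import numpy as np

# Sketch of overdamped (unadjusted) Langevin dynamics on a hypothetical
# strongly log-concave target p*(x) ∝ exp(-c·x²/2) with c = 2, whose
# stationary law is N(0, 1/c). All chains start far from stationarity.
rng = np.random.default_rng(2)
c = 2.0
grad_V = lambda x: c * x    # ∇V for V(x) = c·x²/2

h = 0.01                    # step size
x = np.full(20_000, 5.0)    # 20,000 parallel chains, all started at x = 5
for _ in range(1_500):
    x = x - h * grad_V(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

# Geometric contraction toward N(0, 1/c): empirical moments match
assert abs(x.mean()) < 0.05
assert abs(x.var() - 1.0 / c) < 0.05
```

Note that the unadjusted chain has an $O(h)$ stationary bias; the Metropolis correction in MALA removes it.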

Sampling from SLC priors in multiscale generative models is further enabled via conditionally strongly log-concave (CSLC) decompositions, in which hierarchical orthogonal projectors factor the data into blocks, each with its own SLC guarantee. Efficient MALA within each block yields exponential convergence in KL divergence at rate $O(\sqrt{d_j}\,\beta_j/\alpha_j)$ per block (Guth et al., 2023).

5. Stein Discrepancy, Integration Error, and Quality Diagnostic

For $C^4$ SLC densities $p$, one can control Wasserstein distances and smooth-function integration errors using quantitative Stein factor bounds. For a test function $h$,

$$\sup_{x}\|\nabla u_h(x)\| \le \frac{2}{m} M_1(h),$$

with $m$ the strong log-concavity parameter. This enables explicit computation of the Stein discrepancy and tight certification of cubature/sampling error:

$$S(Q, p) = \sup_{u \in \mathcal{U}} \left| \mathbb{E}_Q [\mathcal{A}_p u] \right|,$$

where $\mathcal{U}$ is the function class defined by the Stein-factor bounds. A small $S(Q, p)$ guarantees small Wasserstein and bounded-Lipschitz discrepancies (Mackey et al., 2015).
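A small sketch, assuming a standard-normal target and an arbitrary Lipschitz test function, shows how the Stein operator separates correct from incorrect sample sets:

```python
import numpy as np

# Sketch of the Langevin–Stein operator for a standard-normal target p
# (score d/dx log p = -x): (A_p u)(x) = u'(x) + u(x)·(d/dx log p)(x).
# E_Q[A_p u] vanishes when Q = p; its size over a function class is the
# Stein discrepancy. u = tanh is an arbitrary Lipschitz test function.
rng = np.random.default_rng(3)
score = lambda x: -x
u = np.tanh
du = lambda x: 1.0 - np.tanh(x) ** 2

good = rng.standard_normal(200_000)  # samples from p
bad = good + 1.0                     # shifted samples: Q ≠ p

stein_good = np.mean(du(good) + u(good) * score(good))
stein_bad = np.mean(du(bad) + u(bad) * score(bad))

assert abs(stein_good) < 0.02  # ≈ 0 for samples from the true target
assert abs(stein_bad) > 0.1    # bounded away from 0 for the wrong Q
```

The full discrepancy $S(Q, p)$ takes a supremum over the whole class $\mathcal{U}$; this sketch evaluates a single test function.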

6. Closure Properties and Canonical Examples

SLC priors are preserved under:

  • Linear transformations: the covariance transforms accordingly;
  • Products: the joint SLC parameter is block-diagonal;
  • Convolutions: the SLC variance parameters add;
  • Marginalization: SLC covariances retain block structure (Saumard et al., 2014).
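The convolution rule can be sketched in the Gaussian case, where the $SLC_1(\sigma^2, d)$ parameter coincides with the variance:

```python
import numpy as np

# Gaussian sketch of closure under convolution: in the SLC₁(σ², d)
# convention a N(0, σ²) variable has SLC parameter σ², and the sum
# (convolution of densities) of independent SLC variables has
# parameter σ₁² + σ₂².
rng = np.random.default_rng(4)
s1, s2 = 0.5, 1.5
z = rng.normal(0.0, np.sqrt(s1), 400_000) + rng.normal(0.0, np.sqrt(s2), 400_000)

assert abs(z.var() - (s1 + s2)) < 0.02  # empirical parameter ≈ σ₁² + σ₂²
```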

Canonical examples:

| Density Model | SLC Condition | Hessian Lower Bound |
|---|---|---|
| Multivariate Gaussian | $p(x) \propto \exp(-\tfrac{1}{2}(x-\mu)^\top \Sigma^{-1}(x-\mu))$ | $\Sigma^{-1}$ |
| Subbotin, $r = 2$ (Gaussian) | $p(x) \propto e^{-\|x\|^2/2}$ | $1$ |
| Weibull, $\beta \ge 2$ | $f_\beta(x) = \beta x^{\beta-1} e^{-x^\beta}$ | $\beta(\beta-1)x^{\beta-2}$ |
| Brownian bridge supremum | $f(x) = 4x e^{-2x^2}$, $x > 0$ | $4 + 1/x^2 \ge 4$ |

Log-concave densities such as the logistic fail to be SLC due to vanishing Hessian lower bounds (Saumard et al., 2014).

7. Practical Verification and Checklist

To verify SLC for a candidate prior $p(x)$:

  1. Compute $V(x) = -\log p(x)$.
  2. Check $V \in C^2(\mathbb{R}^d)$.
  3. Form $\nabla^2 V(x)$.
  4. Find $\lambda_\star = \inf_{x} \lambda_{\min}[\nabla^2 V(x)]$.
  5. If $\lambda_\star > 0$, $p$ is SLC with parameter $\lambda_\star^{-1}$ (Saumard et al., 2014).
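The checklist can be sketched on a grid for two one-dimensional candidates (grid bounds are an illustrative assumption): a Gaussian, which passes, and the logistic density, whose Hessian lower bound vanishes in the tails:

```python
import numpy as np

# Grid-based sketch of the SLC checklist for two 1-D candidates
# (interval and resolution are illustrative choices):
xs = np.linspace(-20, 20, 4001)

# Gaussian N(0, σ²): V(x) = x²/(2σ²), so V''(x) = 1/σ² everywhere
sigma2 = 2.0
V2_gauss = np.full_like(xs, 1.0 / sigma2)

# Logistic: V(x) = x + 2·log(1 + e⁻ˣ), so V''(x) = sech²(x/2)/2 → 0
V2_logistic = 0.5 / np.cosh(xs / 2.0) ** 2

lam_gauss = V2_gauss.min()        # λ* = 0.5 > 0: SLC with σ² = 1/λ*
lam_logistic = V2_logistic.min()  # λ* ≈ 0: log-concave but not SLC

assert lam_gauss > 0
assert lam_logistic < 1e-6
```

A grid infimum is only a sketch; for a rigorous verification one needs an analytic lower bound on $\lambda_{\min}[\nabla^2 V(x)]$ over the whole support.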

For combinatorial or matroid priors, check the strong log-concavity of the generating polynomial and its directional derivatives (Cryan et al., 2019).


In summary, strongly log-concave priors unify variational, concentration, and sampling theoretical tools in Bayesian inference, offering robust guarantees for estimator variance, mixing time, and error control. Their multidimensional and combinatorial generalizations enable concrete, theory-backed solutions even in high-dimensional and discrete model-selection contexts (Aras et al., 2019, Saumard et al., 2014, Mackey et al., 2015, Guth et al., 2023, Cryan et al., 2019).
