Strongly Log-Concave Priors
- Strongly log-concave priors are probability measures whose negative log-density is uniformly strongly convex, ensuring powerful concentration and functional inequalities.
- They underpin enhanced Bayesian Cramér–Rao bounds and provide geometric ergodicity for MCMC algorithms, leading to rapid mixing in high-dimensional settings.
- Their closure under linear transformations, products, and convolutions makes them versatile for both continuous and combinatorial models in advanced inference.
A strongly log-concave prior is a probability measure on $\mathbb{R}^d$ (or a discrete space) whose negative log-density is uniformly strongly convex: for a continuous prior with density $\pi(\theta) \propto e^{-V(\theta)}$, this means the Hessian satisfies $\nabla^2 V(\theta) \succeq \alpha I$ for all $\theta$ and some $\alpha > 0$. Strong log-concavity is a strengthening of the usual log-concavity condition (where $V$ is merely convex), and it guarantees powerful functional inequalities, concentration properties, and algorithmic benefits for Bayesian inference and generative modeling.
1. Mathematical Characterizations and Equivalent Conditions
In the continuous setting, a prior $\pi(\theta) \propto e^{-V(\theta)}$ is $\alpha$-strongly log-concave if $V$ is strongly convex with parameter $\alpha > 0$, i.e.,
$$V(y) \ge V(x) + \langle \nabla V(x),\, y - x \rangle + \tfrac{\alpha}{2}\|y - x\|^2 \quad \text{for all } x, y.$$
More generally, for a positive-definite matrix $\Sigma$, $\pi$ is strongly log-concave with respect to $\Sigma$ if $\nabla^2 V(\theta) \succeq \Sigma^{-1}$ for all $\theta$; equivalently, if the density factors as a centered Gaussian $\mathcal{N}(0, \Sigma)$ density times a log-concave function.
Discrete analogs apply, especially for priors on subset selection or combinatorial supports. For an $r$-homogeneous prior $\mu$ over the Boolean lattice with generating polynomial $g_\mu(x) = \sum_S \mu(S) \prod_{i \in S} x_i$, the strong log-concavity condition is that $g_\mu$ and all of its partial derivatives are log-concave at the all-ones point $\mathbf{1}$ (Saumard et al., 2014, Cryan et al., 2019).
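As a small illustration of the discrete condition, consider the (hypothetical, for-illustration) uniform distribution over the bases of the uniform matroid $U(2,3)$, with generating polynomial $g(x_1,x_2,x_3) = x_1 x_2 + x_1 x_3 + x_2 x_3$. A homogeneous quadratic with nonnegative coefficients is log-concave on the positive orthant exactly when its (constant) Hessian has at most one positive eigenvalue, and its partial derivatives are linear, hence log-concave:

```python
import numpy as np

# Generating polynomial of the uniform matroid U(2,3):
#   g(x1, x2, x3) = x1*x2 + x1*x3 + x2*x3  (2-homogeneous).
# For a homogeneous quadratic with nonnegative coefficients, log-concavity
# on the positive orthant is equivalent to its constant Hessian having at
# most one positive eigenvalue.
H = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])  # Hessian of g (constant, since g is quadratic)

eigvals = np.linalg.eigvalsh(H)            # ascending: [-1, -1, 2]
num_positive = int(np.sum(eigvals > 1e-9))
print(num_positive)  # 1 -> g is log-concave; its partials x_i + x_j are
                     # linear, hence log-concave, so g is strongly log-concave
```

The same one-positive-eigenvalue test underlies fast certification of strong log-concavity for quadratic generating polynomials.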
Equivalent conditions include:
- Strong monotonicity of the "relative-score" map: $\langle \nabla V(x) - \nabla V(y),\, x - y \rangle \ge \alpha \|x - y\|^2$ for all $x, y$.
- The midpoint super-Gaussian inequality: for density $f = e^{-V}$, $f\!\left(\tfrac{x+y}{2}\right) \ge e^{\frac{\alpha}{8}\|x - y\|^2} \sqrt{f(x) f(y)}$.
- Preservation under affine transformations, products, and convolutions.
2. Functional Inequalities and Concentration Properties
Strongly log-concave priors satisfy powerful concentration results. For $\pi \propto e^{-V}$ with $\nabla^2 V \succeq \alpha I$, one obtains sub-Gaussian tails and the sharp (Brascamp–Lieb) variance bound $\operatorname{Var}_\pi(f) \le \frac{1}{\alpha}\, \mathbb{E}_\pi \|\nabla f\|^2$, yielding a spectral gap of at least $\alpha$ and a log-Sobolev inequality with constant of order $1/\alpha$ (Saumard et al., 2014).
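A quick Monte Carlo sketch of the variance bound for the standard Gaussian ($\alpha = 1$), using the illustrative test function $f(x) = \sin(x)$:

```python
import numpy as np

# Monte Carlo check of Var_pi(f) <= (1/alpha) E_pi ||grad f||^2 for the
# standard Gaussian (alpha = 1) with the test function f(x) = sin(x).
# Exact values: Var(sin X) = (1 - e^-2)/2 ~= 0.432,
#               E[cos^2 X] = (1 + e^-2)/2 ~= 0.568.
rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)

var_f = np.var(np.sin(x))          # left-hand side
grad_sq = np.mean(np.cos(x) ** 2)  # right-hand side (alpha = 1)

print(round(var_f, 2), round(grad_sq, 2))
```

The gap between the two sides reflects that $\sin$ is not the extremal (linear) test function for the Gaussian.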
For discrete $r$-homogeneous SLC priors, a modified log-Sobolev inequality holds for the bases-exchange Markov chain, with constant $1/r$. Sub-Gaussian concentration of Lipschitz observables follows, with variance proxy scaling as $r c^2$, where $c$ is the Lipschitz constant (Cryan et al., 2019).
3. Implications for Bayesian Cramér–Rao Bounds
Strongly log-concave priors enable sharper, dimension-explicit Bayesian Cramér–Rao bounds. For prior density $\pi \propto e^{-V}$ with $\nabla^2 V \succeq \alpha I$,
$$\mathbb{E}\big[(\hat\theta - \theta)^2\big] \;\ge\; \frac{1}{\alpha + \mathbb{E}_\pi[I(\theta)]}$$
in the scalar case, where $I(\theta)$ is the Fisher information of the likelihood. Crucially, the bound is independent of the prior's Fisher information, replacing it with the convexity constant $\alpha$ (Aras et al., 2019). This yields robust minimum-variance guarantees for any estimator, biased or not, and removes technical regularity requirements that typically restrict the classical van Trees inequality.
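A sketch of the scalar bound in the conjugate Gaussian model (all parameter values illustrative): the prior $\theta \sim \mathcal{N}(0, s_0^2)$ is SLC with $\alpha = 1/s_0^2$, the likelihood has Fisher information $I = 1/s^2$, and the posterior mean attains the bound $1/(\alpha + I)$ exactly.

```python
import numpy as np

# Conjugate Gaussian model: theta ~ N(0, s0^2), x | theta ~ N(theta, s^2).
# SLC parameter alpha = 1/s0^2, Fisher information I = 1/s^2; the posterior
# mean achieves the lower bound 1/(alpha + I) exactly.
s0, s = 2.0, 1.0
alpha, fisher = 1 / s0**2, 1 / s**2
bound = 1 / (alpha + fisher)  # = 0.8

rng = np.random.default_rng(2)
theta = rng.normal(0, s0, size=500_000)
x = theta + rng.normal(0, s, size=theta.size)
post_mean = (x / s**2) / (1 / s0**2 + 1 / s**2)  # conjugate posterior mean
mse = np.mean((post_mean - theta) ** 2)

print(round(bound, 3), round(mse, 3))  # both ~ 0.8
```

Because the bound is met with equality here, the example also shows the SLC bound is tight and cannot be improved in general.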
4. Sampling Algorithms and MCMC Mixing
Strong log-concavity ensures geometric ergodicity and rapid mixing for posterior-targeted MCMC algorithms, notably overdamped Langevin dynamics and the Metropolis-adjusted Langevin algorithm (MALA). Mixing times scale polynomially with dimension, and explicit bounds in terms of the convexity parameter $\alpha$ can be given, of the form $W_2(\mathrm{Law}(\theta_k), \pi) \le e^{-c\alpha k}\, W_2(\mathrm{Law}(\theta_0), \pi)$, where $\theta_k$ is the MCMC iterate and $\pi$ the stationary SLC target (Saumard et al., 2014, Guth et al., 2023).
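A minimal sketch of unadjusted Langevin dynamics (ULA, the un-Metropolized variant) targeting a strongly log-concave density; step size, iteration count, and starting point are illustrative. The $\alpha$-contraction drives an iterate started far from the mode into the bulk of the target:

```python
import numpy as np

# ULA targeting pi ~ N(0, I/alpha) in 2 dimensions. Each step is
#   x <- x - h * grad V(x) + sqrt(2h) * xi,   xi ~ N(0, I),
# and alpha-strong log-concavity makes the drift contract at rate (1 - h*alpha),
# up to a small discretization bias.
alpha, h, n_steps = 1.0, 0.05, 2000
rng = np.random.default_rng(3)

def grad_V(x):          # V(x) = (alpha/2) ||x||^2  =>  grad V = alpha * x
    return alpha * x

x = np.full(2, 10.0)    # start far from the mode
for _ in range(n_steps):
    x = x - h * grad_V(x) + np.sqrt(2 * h) * rng.normal(size=2)

print(np.linalg.norm(x))  # small: the iterate has reached the bulk of pi
```

MALA adds a Metropolis accept/reject step to remove the discretization bias while preserving the geometric convergence rate.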
Sampling from SLC priors in multiscale generative models is further enabled via conditionally strongly log-concave (CSLC) decompositions, where hierarchical orthogonal projectors factor the data into blocks with their own SLC guarantees. Efficient MALA in each block leads to exponential convergence in KL divergence, at a rate governed by each block's SLC constant (Guth et al., 2023).
5. Stein Discrepancy, Integration Error, and Quality Diagnostic
For SLC densities $p$, one can control Wasserstein distances and smooth-function integration errors using quantitative Stein factor bounds. For a test function $h$ with bounded Lipschitz norm, the solution $u_h$ of the associated Stein equation satisfies derivative bounds $\|u_h\|_\infty, \|\nabla u_h\|_\infty, \|\nabla^2 u_h\|_\infty \le C(\alpha)\, \|h\|$, with $\alpha$ the strong log-concavity parameter. This enables explicit computation of the Stein discrepancy $S(Q)$ of a sample $Q$ and tight certification of cubature/sampling error: $|\mathbb{E}_Q h - \mathbb{E}_p h| \le C\, S(Q)$, where $C$ is determined by the Stein-factor bounds. A small $S(Q)$ guarantees small Wasserstein and bounded-Lipschitz discrepancies (Mackey et al., 2015).
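A related, easy-to-compute diagnostic (a kernelized Stein discrepancy, shown here as an illustration rather than the exact construction of Mackey et al.) uses only the score $\nabla \log p$ and a base kernel. For the standard Gaussian target with the IMQ kernel:

```python
import numpy as np

# Illustrative V-statistic kernel Stein discrepancy against the standard
# 1-d Gaussian, with IMQ kernel k(x,y) = (1 + (x-y)^2)^(-1/2) and
# Langevin-Stein operator built from the score s(x) = -x.
def ksd_gaussian_1d(xs):
    x = xs[:, None]; y = xs[None, :]
    d = x - y
    base = 1.0 + d**2
    k = base**-0.5
    dk_dx = -d * base**-1.5                   # d k / dx
    dk_dy = d * base**-1.5                    # d k / dy
    d2k = base**-1.5 - 3 * d**2 * base**-2.5  # d^2 k / dx dy
    sx, sy = -x, -y                           # scores of N(0,1)
    h = sx * sy * k + sx * dk_dy + sy * dk_dx + d2k  # Stein kernel (PSD)
    return np.sqrt(np.mean(h))

rng = np.random.default_rng(4)
good = rng.normal(size=2000)        # samples from the target
bad = rng.normal(2.0, 1.0, 2000)    # shifted samples
print(ksd_gaussian_1d(good) < ksd_gaussian_1d(bad))  # good samples score lower
```

The discrepancy for correct samples decays with sample size, while mismatched samples plateau at a positive value, which is what makes it usable as a sample-quality certificate.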
6. Closure Properties and Canonical Examples
SLC priors are preserved under:
- Linear transformations: the covariance transforms accordingly; if $\theta$ is SLC with parameter $\alpha$ and $A$ is invertible, then $A\theta$ is SLC with parameter $\alpha / \|A\|_{\mathrm{op}}^2$.
- Products: the joint Hessian bound is block-diagonal, so independent components with parameters $\alpha_i$ give joint SLC parameter $\min_i \alpha_i$.
- Convolutions: the associated covariance bounds add, so $\pi_1 * \pi_2$ is SLC with parameter $(\alpha_1^{-1} + \alpha_2^{-1})^{-1}$.
- Marginalization: SLC covariances retain their block structure under marginalizing out components (Saumard et al., 2014).
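The convolution rule can be sketched with Gaussians, for which the SLC parameter is exactly the inverse variance (values below are illustrative):

```python
# Convolution rule for Gaussians: convolving N(0, 1/a1) with N(0, 1/a2)
# gives N(0, 1/a1 + 1/a2), so the SLC parameters combine harmonically:
# a_conv = (a1^-1 + a2^-1)^-1.
a1, a2 = 2.0, 3.0
var_conv = 1 / a1 + 1 / a2   # variances add under convolution
a_conv = 1 / var_conv        # = (1/2 + 1/3)^-1 = 1.2
print(round(a_conv, 10))
```

Note that convolution can only weaken the SLC parameter ($a_{\mathrm{conv}} \le \min(a_1, a_2)$), consistent with smoothing flattening the density.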
Canonical examples:

| Density Model | SLC Condition | Hessian Lower Bound |
|---------------------------------------|-----------------------------------|-----------------------------|
| Multivariate Gaussian $\mathcal{N}(\mu, \Sigma)$ | $\Sigma \succ 0$ | $\lambda_{\min}(\Sigma^{-1})$ |
| Subbotin, $p = 2$ (Gaussian) | $p = 2$ | $1$ |
| Weibull | — | — |
| Brownian bridge supremum | — | — |
Log-concave densities such as the logistic fail to be SLC because the Hessian of the negative log-density vanishes in the tails (Saumard et al., 2014).
7. Practical Verification and Checklist
To verify SLC for a candidate prior $\pi(\theta)$:
- Compute $V(\theta) = -\log \pi(\theta)$ (up to an additive constant).
- Check that $V$ is twice differentiable on the support.
- Form the Hessian $\nabla^2 V(\theta)$.
- Find $\alpha = \inf_\theta \lambda_{\min}\big(\nabla^2 V(\theta)\big)$.
- If $\alpha > 0$, $\pi$ is SLC with parameter $\alpha$ (Saumard et al., 2014).
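The checklist can be run numerically in one dimension by estimating $\inf_x V''(x)$ on a grid; a grid scan gives evidence rather than a proof, and the grid and tolerance below are illustrative. The sketch contrasts the standard Gaussian ($\alpha = 1$) with the logistic, whose Hessian lower bound vanishes:

```python
import numpy as np

# Numerical SLC checklist for 1-d densities: estimate alpha = inf_x V''(x)
# by central finite differences on a grid (evidence only, not a proof).
def est_alpha(V, grid, eps=1e-4):
    Vpp = [(V(x + eps) - 2 * V(x) + V(x - eps)) / eps**2 for x in grid]
    return min(Vpp)

grid = np.linspace(-10, 10, 2001)
V_gauss = lambda x: 0.5 * x**2                       # standard Gaussian: alpha = 1
V_logistic = lambda x: x + 2 * np.log1p(np.exp(-x))  # logistic: V'' -> 0 in the tails

a_gauss = est_alpha(V_gauss, grid)
a_log = est_alpha(V_logistic, grid)
print(round(a_gauss, 3), round(a_log, 3))  # ~1.0 vs ~0.0: the logistic is not SLC
```

For multivariate priors the same scan applies to $\lambda_{\min}(\nabla^2 V)$ over a grid or sample of points.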
For combinatorial or matroid priors, check the strong log-concavity of the generating polynomial and its directional derivatives (Cryan et al., 2019).
In summary, strongly log-concave priors unify variational, concentration, and sampling theoretical tools in Bayesian inference, offering robust guarantees for estimator variance, mixing time, and error control. Their multidimensional and combinatorial generalizations enable concrete, theory-backed solutions even in high-dimensional and discrete model-selection contexts (Aras et al., 2019, Saumard et al., 2014, Mackey et al., 2015, Guth et al., 2023, Cryan et al., 2019).