
Affine-Invariant Ensemble MCMC

Updated 4 February 2026
  • Affine-Invariant Ensemble MCMC is a sampling method that maintains an ensemble of walkers and is invariant under any affine transformation.
  • The algorithm employs stretch moves and other variants to efficiently explore high-dimensional, correlated, or anisotropic target distributions.
  • It offers robust performance in Bayesian computations and function-space problems without needing gradient or Hessian information.

An affine-invariant ensemble Markov chain Monte Carlo (MCMC) algorithm is a class of MCMC methods that maintains an ensemble of parallel “walkers”, with proposal moves and acceptance mechanisms designed so that the sampler’s behavior is unaffected by any invertible affine transformation of the target density. These methods have become canonical tools for efficiently sampling high-dimensional, strongly correlated, or highly anisotropic probability distributions, particularly when access to gradients or Hessians is unavailable. Their defining property is affine invariance: all proposal statistics and acceptance ratios transform properly under x ↦ Ax + b for any invertible matrix A and vector b.

1. Principles of Affine-Invariant Ensemble MCMC

The prototypical affine-invariant ensemble sampler is the Goodman–Weare (GW) “stretch move” (Hou et al., 2011, Coullon et al., 2020, Foreman-Mackey et al., 2012, Foreman-Mackey et al., 2019, Huijser et al., 2015). Suppose the target is a density π on ℝ^M. An ensemble of L walkers X = (X_1, …, X_L) ∈ ℝ^(M×L) is jointly updated so as to preserve the product law ∏_{l=1}^{L} π(X_l). For each walker X_i:

  • Pick another walker X_j ≠ X_i uniformly at random.
  • Draw a stretch factor Z ∈ [1/a, a] from the density g(Z) ∝ 1/√Z.
  • Propose

X̃_i = X_i + (1 − Z)(X_j − X_i) = X_j + Z(X_i − X_j).

  • Accept with probability

α = min{1, Z^(M−1) π(X̃_i) / π(X_i)}.
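A serial sweep of the stretch move described above can be sketched in NumPy; the function name and the inverse-CDF recipe for drawing Z are illustrative choices of ours, not prescribed by the cited papers:

```python
import numpy as np

def stretch_move_step(walkers, log_prob, a=2.0, rng=None):
    """One serial sweep of the Goodman-Weare stretch move.

    walkers  : (L, M) array of current positions, modified in place
    log_prob : callable returning log pi(x) for a single position x
    """
    rng = np.random.default_rng() if rng is None else rng
    L, M = walkers.shape
    for i in range(L):
        # Pick a complementary walker X_j != X_i uniformly at random.
        j = rng.integers(L - 1)
        if j >= i:
            j += 1
        # Draw Z in [1/a, a] with density g(Z) proportional to 1/sqrt(Z),
        # via the closed-form inverse CDF.
        z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a
        # Propose along the line through X_i and X_j.
        proposal = walkers[j] + z * (walkers[i] - walkers[j])
        # Accept with probability min(1, Z^(M-1) pi(proposal)/pi(X_i)).
        log_alpha = (M - 1) * np.log(z) + log_prob(proposal) - log_prob(walkers[i])
        if np.log(rng.random()) < log_alpha:
            walkers[i] = proposal
    return walkers
```

The draw Z = (1 + (a − 1)U)² / a, with U uniform on [0, 1], follows from inverting the CDF of g(Z) ∝ 1/√Z on [1/a, a].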

Affine invariance follows because under any invertible affine map x ↦ Ax + b, the distribution and dynamics of proposals and acceptances are unchanged; the proposal and acceptance Jacobian terms exactly cancel (Coullon et al., 2020, Hou et al., 2011, Foreman-Mackey et al., 2012, Foreman-Mackey et al., 2019).

2. Algorithmic Variants and Extensions

Multiple affine-invariant ensemble algorithms have been proposed, often differing by choice of move type, adaptation scheme, or by targeting broader problem classes:

  • Stretch move (AIES): The original GW algorithm as summarized above (Coullon et al., 2020, Hou et al., 2011, Foreman-Mackey et al., 2012).
  • Ensemble Slice Sampler: Proposes walker moves using adaptive, affine-invariant slice sampling directions determined by differences or covariance among walkers (Karamanis et al., 2020). Parallel updates across walker subgroups and length-scale adaptation yield both affine invariance and robust mixing.
  • Penalised t-walk: Extends the affine-invariant t-walk with specialized “penalty” moves to cross isolated modes, maintaining affine invariance under arbitrary full-rank affine maps (Medina-Aguayo et al., 2020).
  • Second-order and interacting Langevin ensemble dynamics: Preconditioned ensemble-based Langevin samplers (e.g., EKHMC and ALDI) perform covariance-adapted diffusive sampling and are provably affine invariant, leveraging ensemble statistics for both drift and stochastic terms (Liu et al., 2022, Beh et al., 25 Jun 2025).
  • Infinite-dimensional generalizations: The functional ensemble sampler (FES) applies the AIES move on a fixed KL-truncated subspace and the pCN move on the infinite-dimensional orthogonal complement, achieving mesh-independent, gradient-free sampling in infinite-dimensional settings (Coullon et al., 2020). Other hybrid methods apply subspace-projected or covariance-inflated proposals to combine affine invariance and dimension-robustness (Dunlop et al., 2022).

3. Affine-Invariance: Theory and Proofs

For a proposal kernel K(x, dx′) and target density π, affine invariance requires

K(Ax + b, d(Ax′ + b)) = K(x, dx′)

for any invertible A and vector b, with the transformed target π_T(Ax + b) = π(x) |det A|^(−1). For the stretch move, the acceptance probability

α = min{1, Z^(M−1) π(x̃) / π(x)}

transforms under T(x) = Ax + b as

α_T = min{1, Z^(M−1) π_T(Tx̃) / π_T(Tx)} = min{1, Z^(M−1) π(x̃) / π(x)},

because the |det A|^(−1) factors cancel in the density ratio.

Since the proposal and acceptance ratios are unchanged under the affine map, the process is invariant (Coullon et al., 2020, Foreman-Mackey et al., 2019, Hou et al., 2011).
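This cancellation can be checked numerically. The sketch below (with an arbitrary well-conditioned matrix A and shift b of our choosing) verifies that the stretch-move log acceptance ratio is identical in original and affine-transformed coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3

# Anisotropic Gaussian target: log pi(x) = -x^T P x / 2, P fixed SPD.
Q = rng.normal(size=(M, M))
P = Q @ Q.T + M * np.eye(M)
log_pi = lambda x: -0.5 * x @ P @ x

# Affine map T(x) = A x + b; the transformed density is
# log pi_T(y) = log pi(A^{-1}(y - b)) - log|det A|.
A = 2.0 * np.eye(M) + 0.5 * rng.normal(size=(M, M))
b = rng.normal(size=M)
Ainv = np.linalg.inv(A)
log_pi_T = lambda y: log_pi(Ainv @ (y - b)) - np.log(abs(np.linalg.det(A)))

def log_ratio(logp, x, xt, z):
    # Stretch-move log acceptance ratio: (M-1) log z + log pi(xt) - log pi(x).
    return (M - 1) * np.log(z) + logp(xt) - logp(x)

x, xj = rng.normal(size=M), rng.normal(size=M)
z = 1.7
xt = xj + z * (x - xj)  # proposal in original coordinates

# The proposal commutes with T: T(xt) = T(xj) + z (T(x) - T(xj)),
# and the |det A| terms cancel in the density ratio.
r_orig = log_ratio(log_pi, x, xt, z)
r_mapped = log_ratio(log_pi_T, A @ x + b, A @ xt + b, z)
print(abs(r_orig - r_mapped))  # zero up to floating-point rounding
```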

Generalizations (e.g., to covariance-preconditioned Langevin SDEs) use similar arguments: the drift and noise are adapted by the ensemble covariance, which transforms as A Σ Aᵀ under x ↦ Ax + b, ensuring affine invariance of the Fokker–Planck operator (Liu et al., 2022, Beh et al., 25 Jun 2025).

4. Applications, Scalability, and Limitations

Affine-invariant ensemble samplers are widely used in astrophysics, Bayesian inverse problems, and high-dimensional data analysis due to their ability to handle strongly anisotropic targets without explicit covariance estimation (Hou et al., 2011, Foreman-Mackey et al., 2012, Coullon et al., 2020).

In infinite-dimensional or function-space problems, the FES algorithm applies AIES to a fixed finite subspace (chosen via Karhunen–Loève expansion) and pCN or other schemes for the complement, ensuring mesh-independent mixing rates and robust performance as resolution increases (Coullon et al., 2020). Subspace-adjusted hybrid samplers extend these ideas to adaptively select the most informative modes for affine-invariant moves (Dunlop et al., 2022).

Empirically, for moderate dimension M ≲ 50, ensemble stretch-move samplers offer order-of-magnitude efficiency gains (as measured by integrated autocorrelation time, IAT) over random-walk or non-adaptive slice samplers, and are robust to affine ill-conditioning (Karamanis et al., 2020, Foreman-Mackey et al., 2012). In very high dimensions, however, the AIES suffers from ensemble collapse (rapidly shrinking walker variance and inadequate exploration), leading to biased variance estimates and slow convergence, as rigorously characterized in (Huijser et al., 2015). Effective ensemble sizes shrink, and long burn-in followed by slow “re-expansion” phases can degrade sampling unless supplemented by external regularization or more sophisticated dynamics.
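The IAT used as the efficiency measure here can be estimated from a single scalar chain. A rough NumPy sketch using FFT-based autocorrelation and Sokal's self-consistent windowing rule follows; the window constant c = 5 is a common heuristic, not prescribed by the source:

```python
import numpy as np

def integrated_autocorr_time(chain, c=5.0):
    """Estimate the IAT of a 1-D chain.

    Uses the FFT to compute the autocorrelation function, then
    truncates the sum 1 + 2 * sum_t rho(t) at the first window W
    satisfying W >= c * tau_hat(W) (Sokal's rule).
    """
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Autocorrelation via FFT, zero-padded to avoid circular wraparound.
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conjugate(f))[:n]
    acf /= acf[0]
    # Running estimate tau(W) = 1 + 2 * sum_{t=1}^{W} rho(t).
    tau = 1.0 + 2.0 * np.cumsum(acf[1:])
    for w, t in enumerate(tau, start=1):
        if w >= c * t:
            return t
    return tau[-1]
```

For an AR(1) chain with coefficient ρ the true IAT is (1 + ρ)/(1 − ρ), which gives a convenient sanity check.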

5. Computational Structure and Implementation

A distinctive feature is ensemble parallelism: updates to walkers in one subset are independent given the positions of the complementary ensemble, enabling trivially parallel implementations. In practice, packages such as emcee (Foreman-Mackey et al., 2012, Foreman-Mackey et al., 2019) realize these methods by splitting LL walkers into two groups, alternately updating each half using the current positions of the other, and exposing a modular interface for multiple affine-invariant and ensemble-adapted moves.
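The two-group scheme can be sketched in vectorized form. The following minimal illustration assumes a log-density that evaluates batches of positions; the helper names are ours, not emcee's API:

```python
import numpy as np

def half_sweep(active, fixed, log_prob, a=2.0, rng=None):
    """Vectorized stretch-move update of every walker in `active`,
    with partners drawn only from the complementary group `fixed`.
    Conditional on `fixed`, the rows of `active` update independently,
    which is what makes this step trivially parallelizable."""
    rng = np.random.default_rng() if rng is None else rng
    K, M = active.shape
    partners = fixed[rng.integers(len(fixed), size=K)]
    z = (1.0 + (a - 1.0) * rng.random(K)) ** 2 / a
    proposals = partners + z[:, None] * (active - partners)
    log_alpha = ((M - 1) * np.log(z)
                 + log_prob(proposals) - log_prob(active))
    accept = np.log(rng.random(K)) < log_alpha
    active[accept] = proposals[accept]  # writes through the slice view
    return accept.mean()

def sweep(walkers, log_prob, a=2.0, rng=None):
    """One full sweep: update the first half given the second, then vice versa."""
    L = len(walkers)
    r1 = half_sweep(walkers[: L // 2], walkers[L // 2 :], log_prob, a, rng)
    r2 = half_sweep(walkers[L // 2 :], walkers[: L // 2], log_prob, a, rng)
    return 0.5 * (r1 + r2)
```

Because `walkers[: L // 2]` is a NumPy view, the in-place accept step updates the shared ensemble array directly.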

Tuning is minimal: the principal parameters are the stretch range parameter a (usually a = 2), the ensemble size L (with L > 2M recommended), and the dimension of the low-rank subspace in infinite-dimensional settings (Coullon et al., 2020, Foreman-Mackey et al., 2012). Diagnostics rely on ensemble-wide summaries; acceptance rates between 0.2 and 0.5 and comparison of empirical variance across independent runs are standard.
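Both diagnostics are straightforward to compute from stored ensemble chains; a rough sketch with hypothetical helper names:

```python
import numpy as np

def acceptance_rate(chain):
    """Fraction of walker updates that actually moved.

    chain : (T, L, M) array of ensemble positions over T sweeps.
    A walker is counted as accepted at sweep t if any coordinate changed.
    """
    moved = np.any(chain[1:] != chain[:-1], axis=-1)  # shape (T-1, L)
    return moved.mean()

def cross_run_variance_ratio(run_a, run_b):
    """Per-dimension ratio of empirical variances between two
    independent runs (each (T, L, M), flattened over sweeps and
    walkers). Ratios far from 1 suggest the runs have not converged."""
    va = run_a.reshape(-1, run_a.shape[-1]).var(axis=0)
    vb = run_b.reshape(-1, run_b.shape[-1]).var(axis=0)
    return va / vb
```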

6. Benchmarks and Empirical Behavior

The performance of different affine-invariant ensemble algorithms is problem-dependent. Table 1 summarizes selected results from (Karamanis et al., 2020, Huijser et al., 2015, Coullon et al., 2020).

| Algorithm | Setting (dim) | IAT / efficiency gain | Notes |
| --- | --- | --- | --- |
| AIES / GW stretch | AR(1) Gaussian (D = 50) | IAT ≈ 5×10⁴ | Collapse/re-expansion for D ≫ 1 |
| Ensemble Slice | AR(1) Gaussian (D = 50) | IAT ≈ 110 (×10–×20 gain) | Robust, affine-invariant |
| FES | Advection (M = 10 KL modes) | IAT ≈ 1.5×10³ (×100 faster) | Infinite-dimensional, mesh-robust |
| FES | Langevin path rec. | IAT(log α) ≈ 1.2×10⁴ | Beats hybrid/adaptive Gaussian RW |

Empirical studies confirm that affine-invariant ensemble algorithms perform best for moderate-dimensional, highly anisotropic or correlated targets, and when gradient information is inaccessible or costly. For strongly multimodal posteriors, extensions such as penalized moves or power/parallel tempering (penalised t-walk) enable improved global exploration (Medina-Aguayo et al., 2020). In high-dimensional or infinite-dimensional inverse problems, subspace-based or hybrid ensemble-pCN methods and FES offer mixing times that do not grow with discretization size (Coullon et al., 2020, Dunlop et al., 2022).

7. Current Directions and Theoretical Developments

Active research continues on addressing high-dimensional pathologies, integrating ensemble affine-invariant dynamics with gradient-informed (Langevin or Hamiltonian) moves (Beh et al., 25 Jun 2025, Liu et al., 2022), and extending affine-invariant adaptation to non-Gaussian priors, non-linear inverse problems, and rare-event or importance sampling regimes.

Theory now firmly establishes the dimension-robustness of affine-invariant moves on fixed-dimensional subspaces and in mean-field limits, but highlights the need for caution with naive application in M ≫ 50 unless coupled with hybridization, annealing, or carefully designed subspace updates (Huijser et al., 2015, Coullon et al., 2020, Dunlop et al., 2022).

Affine-invariant ensemble MCMC remains indispensable for “black-box”, large-scale, and function-space Bayesian computations that require tuning-free, derivative-free, and computationally scalable sampling.
