Statistical Rejection Sampling Optimization

Updated 24 January 2026
  • RSO is a meta-framework that optimizes classical rejection sampling via adaptive proposals, envelope refinement, and divergence minimization to achieve efficient and robust sampling.
  • It unifies methods like entropy-optimal discrete sampling and adaptive envelope optimization, offering provable theoretical guarantees and improved computational efficiency.
  • RSO has broad applications in Bayesian inference, high-dimensional structured modeling, and machine learning, including enhanced variational inference and LLM alignment.

Statistical Rejection Sampling Optimization (RSO) encompasses a collection of algorithms and frameworks that optimize classical rejection sampling schemes for efficient, robust, and often provably optimal sampling from complex distributions. RSO unifies insights from information theory, adaptive proposal construction, hybrid variational inference, and algorithmic learning to achieve optimality in entropy, computational cost, or accuracy. Its applications range from discrete random variate generation and Bayesian variational inference to preference modeling in large-scale machine learning and high-dimensional structured modeling.

1. Fundamental Principles of Rejection Sampling Optimization

Statistical Rejection Sampling involves generating candidate samples from an easily sampled proposal distribution and accepting or rejecting them based on their importance weights with respect to the unnormalized target density. The classical acceptance probability is

$$P_{\mathrm{accept}}(x) = \frac{f(x)}{M\, g(x)},$$

where $f(x)$ is the target density (possibly unnormalized), $g(x)$ is the proposal density, and $M \geq \sup_x f(x)/g(x)$ is a tight upper bound.

Optimization in the RSO paradigm aims to select $g$ and $M$ so as to maximize acceptance probability (minimize $M$), ensure theoretical guarantees (e.g., unbiasedness, minimal divergence), and scale efficiently in memory and computational resources. RSO extends classical rejection sampling by optimizing envelopes, adaptively refining proposals, linking to divergence minimization, or improving entropy efficiency.
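The accept-reject loop above can be written in a few lines. The following sketch is a minimal instance of the classical scheme; the half-normal target, exponential proposal, and function names are illustrative, not drawn from any of the cited papers:

```python
import math
import random

def rejection_sample(f, g_sample, g_pdf, M):
    """Draw one sample from the (possibly unnormalized) target f using
    proposal g and an envelope constant M >= sup_x f(x)/g(x)."""
    while True:
        x = g_sample()                     # candidate from the proposal
        u = random.random()                # uniform(0, 1)
        if u < f(x) / (M * g_pdf(x)):      # accept with prob f(x)/(M g(x))
            return x

# Illustrative example: unnormalized half-normal target, Exp(1) proposal.
f = lambda x: math.exp(-x * x / 2)                 # target, x >= 0
g_pdf = lambda x: math.exp(-x)                     # Exp(1) density
g_sample = lambda: -math.log(1 - random.random())  # inverse-CDF draw
M = math.exp(0.5)                                  # sup f/g, attained at x = 1

samples = [rejection_sample(f, g_sample, g_pdf, M) for _ in range(10_000)]
```

Tightening $M$ directly raises the acceptance rate: with the envelope above, roughly three of every four proposals are accepted.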

2. Entropy-Optimal RSO for Discrete Distributions

A prominent RSO instantiation is the Amplified Loaded Dice Roller (ALDR) for sampling from a discrete distribution $P = (p_1, \ldots, p_n)$ with rational probabilities $p_i = a_i/m$ via an unbiased entropy source (coin flips) (Draper et al., 5 Apr 2025). The ALDR constructs a dyadic-proposal tree through a preprocessing phase:

  • Inputs: integer weights $(a_1, \dots, a_n)$, total $m = \sum_i a_i$.
  • Amplification: choose $K$ so that $2^K \gg m$, set $c = \lfloor 2^K/m \rfloor$, and define amplified weights $A_i$ (with $A_0$ reserved for rejection).
  • Build arrays $L$ (leaf counts per level) and $F$ (flattened leaf labels) in $O(n \log m)$ time and space.

Sampling proceeds by bitwise descent in the tree: on a leaf hit, return $i$ if $i \neq 0$; otherwise, restart. The expected entropy cost per sample, $E[C]$, satisfies

$$H(P) \leq E[C] < H(P) + 2,$$

where $H(P)$ is the Shannon entropy. This achieves strict information-theoretic optimality within $[H(P), H(P)+2)$, using only $O(n \log m)$ storage and preprocessing; no prior discrete sampler achieved this combination (Draper et al., 5 Apr 2025).

Empirical results show ALDR outperforms the alias method in both entropy efficiency and wall-clock sampling time, especially for sparse or low-entropy distributions (Draper et al., 5 Apr 2025).
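The bit-by-bit tree descent can be illustrated with a minimal Knuth-Yao-style sampler for weights that sum to an exact power of two; this is the base construction that ALDR amplifies, not the full ALDR with its $L$ and $F$ arrays, and the names below are illustrative:

```python
import random

def knuth_yao_sample(weights, K, bit=lambda: random.getrandbits(1)):
    """Sample index i with probability weights[i] / 2**K, consuming one
    fair coin flip per tree level (Knuth-Yao dyadic tree descent).
    Requires sum(weights) == 2**K, so the walk terminates within K levels."""
    assert sum(weights) == 1 << K
    d = 0                                # offset of current node at this level
    for j in range(K - 1, -1, -1):       # bit positions, most significant first
        d = 2 * d + bit()
        for i, w in enumerate(weights):
            d -= (w >> j) & 1            # outcome i owns a leaf here iff bit j set
            if d < 0:
                return i
    raise AssertionError("unreachable when weights sum to 2**K")

# Illustrative example: P = (1/2, 1/4, 1/4) encoded as weights (2, 1, 1) / 2**2.
counts = [0, 0, 0]
for _ in range(8000):
    counts[knuth_yao_sample([2, 1, 1], 2)] += 1
```

For this dyadic $P$ the expected flip count equals $H(P) = 1.5$ bits exactly; ALDR's amplification step extends the same idea to arbitrary rational weights while keeping the cost inside $[H(P), H(P)+2)$.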

3. RSO in Monte Carlo Variational Inference

Several RSO approaches refine variational inference through a rejection sampling lens. In "Refined $\alpha$-Divergence Variational Inference via Rejection Sampling" (Sharma et al., 2019), the key observation is that the smallest valid envelope constant, $M(\theta) = \sup_x \tilde{p}(x)/q_\theta(x)$, is tied to the $\alpha \to \infty$ Rényi divergence via $D_\infty(p \| q_\theta) = \log M(\theta)$. The proposed two-stage algorithm combines:

  • Stage 1: minimize Monte Carlo estimates of $D_\alpha(p \| q)$ for finite $\alpha$ to optimize $q_\theta$.
  • Stage 2: use the learned $q_\theta$ to perform rejection sampling with the envelope $\tilde{M}(\theta)$, forming an improved, sample-based approximation.

Theoretical results guarantee

$$D_\alpha(p \| r_\theta) \leq D_\alpha(p \| q_\theta)$$

for all finite α\alpha, with strict improvement unless the rejection step approaches triviality.
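Stage 2 reduces to ordinary rejection sampling carried out in log space. A minimal sketch, with a hypothetical unnormalized Gaussian target and an over-dispersed Gaussian standing in for the fitted $q_\theta$ (all names illustrative):

```python
import math
import random

def stage2_rejection(log_p_tilde, q_sample, log_q, log_M, n):
    """Rejection sampling with a fitted proposal q_theta and envelope
    log_M >= sup_x [log p~(x) - log q_theta(x)]; accepted draws follow
    the refined distribution r_theta."""
    accepted = []
    for _ in range(n):
        x = q_sample()
        log_ratio = log_p_tilde(x) - log_q(x) - log_M   # <= 0 by construction
        if random.random() < math.exp(min(0.0, log_ratio)):
            accepted.append(x)
    return accepted

# Hypothetical example: unnormalized N(0,1) target, over-dispersed
# N(0, 1.5^2) "learned" proposal; the density ratio peaks at x = 0.
s = 1.5
log_p = lambda x: -0.5 * x * x
log_q = lambda x: -0.5 * (x / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
log_M = math.log(s * math.sqrt(2 * math.pi))
draws = stage2_rejection(log_p, lambda: random.gauss(0.0, s), log_q, log_M, 20_000)
```

Here the acceptance rate is about $1/1.5 \approx 0.67$, and the accepted draws match the target's mean and variance, illustrating how the rejection step sharpens a loose variational fit.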

A similar principle underlies "Variational Rejection Sampling" (Grover et al., 2018), where a smooth threshold parameter controls the trade-off between computational cost and posterior tightness. Accepted samples from the proposal $q_\phi$ are upweighted in the ELBO by their model likelihood, leading to significant improvements in marginal likelihood estimation.

4. Adaptive and Structured Envelope Optimization

A key family of RSO methods involves piecewise or data-driven proposal refinement.

(a) Adaptive Envelopes and Piecewise Majorization

The Vertical Weighted Strips (VWS) framework (Raim et al., 2024, Raim et al., 21 Sep 2025) constructs proposals by partitioning the domain into $K$ strips and assigning each a local supremum (majorizer) and infimum (minorizer) of the weight function $w(x)$. The finite mixture proposal

$$h(x) = \sum_{k=1}^{K} \pi_k\, g_k(x)$$

delivers tunable acceptance rates, with an analytic pre-sampling upper bound
$$P_{\mathrm{reject}} \leq 1 - \frac{\sum_k \underline{\xi}_k}{\sum_k \bar{\xi}_k}.$$
Adaptive partitioning splits high-contribution strips, driving the rejection probability below a user-specified target.
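In one dimension the strip construction reduces to a short routine. The sketch below is a simplified illustration, not the papers' implementation: it assumes a decreasing weight function so each strip's supremum sits at its left edge and its infimum at its right edge, and it reports the analytic pre-sampling rejection bound:

```python
import bisect
import itertools
import math
import random

def vws_sampler(f, edges):
    """Piecewise-constant envelope sampler in the spirit of VWS: partition
    [edges[0], edges[-1]] into strips and majorize f on each one.
    Assumes f is decreasing on the interval."""
    sups = [f(a) for a in edges[:-1]]                    # strip majorizers
    infs = [f(b) for b in edges[1:]]                     # strip minorizers
    masses = [s * (b - a) for s, a, b in zip(sups, edges, edges[1:])]
    cum = list(itertools.accumulate(masses))
    # Pre-sampling bound: P_reject <= 1 - sum(xi_lower) / sum(xi_upper)
    lower = sum(i * (b - a) for i, a, b in zip(infs, edges, edges[1:]))
    reject_bound = 1.0 - lower / cum[-1]
    def draw():
        while True:
            k = bisect.bisect(cum, random.random() * cum[-1])  # pick a strip
            x = random.uniform(edges[k], edges[k + 1])          # uniform inside it
            if random.random() * sups[k] < f(x):                # thin to the target
                return x
    return draw, reject_bound

# Illustrative example: unnormalized target exp(-x^2), decreasing on [0, 3].
f = lambda x: math.exp(-x * x)
draw, reject_bound = vws_sampler(f, [0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
xs = [draw() for _ in range(5000)]
```

Splitting the widest high-mass strip shrinks the gap between majorizer and minorizer, which is exactly how the adaptive refinement drives `reject_bound` below a target.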

In the context of Gibbs sampling, self-tuned VWS maintains and refines persistent proposals for each conditional as the MCMC chain progresses, balancing refinement cost against rejection rates (Raim et al., 21 Sep 2025). In large-scale Bayesian applications such as small area estimation, self-tuned VWS dramatically improved effective sample size and eliminated autocorrelation in posterior draws.

(b) Generalized Adaptive Rejection Schemes

Beyond log-concave densities, Martino et al. (2011) develop two adaptive envelope strategies, one piecewise and one using the ratio-of-uniforms representation, to handle multimodal and log-convex-tailed targets. Each rejected sample introduces a new support point, tightening local bounds and monotonically increasing the acceptance probability.

5. RSO for Preference-Based Policy Optimization

In LLM alignment, "Statistical Rejection Sampling Improves Preference Optimization" (Liu et al., 2023) proposes RSO to bridge the sampling mismatch between target optimum and data-collecting distributions in Direct Preference Optimization (DPO) and Sequence Likelihood Calibration (SLiC). The key steps are:

  • Compute the closed-form optimal policy $\pi^*(y|x) \propto \pi_0(y|x) \exp[r(x,y)/\beta]$.
  • Use $\pi_0$ as the proposal and accept $y$ with probability $\exp[(\hat{r}(x,y) - R_{\max})/\beta]$.
  • Aggregate accepted samples for unbiased loss-based policy updates.
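The acceptance step above can be sketched in a few lines. The candidate strings and reward scores below are hypothetical stand-ins; a real pipeline would draw candidates from the SFT policy $\pi_0$ and score them with a trained reward model:

```python
import math
import random

def rso_accept(candidates, rewards, beta):
    """Keep each candidate (drawn from the proposal policy pi_0) with
    probability exp((r - r_max) / beta), approximating draws from the
    optimal policy proportional to pi_0(y|x) * exp(r(x,y)/beta)."""
    r_max = max(rewards)
    return [y for y, r in zip(candidates, rewards)
            if random.random() < math.exp((r - r_max) / beta)]

# Hypothetical example: 8 candidate responses with reward-model scores.
cands = [f"response_{i}" for i in range(8)]
scores = [0.1, 0.9, 0.4, 0.7, 0.2, 0.95, 0.5, 0.3]
kept = rso_accept(cands, scores, beta=0.5)
```

Smaller $\beta$ concentrates acceptance on the highest-reward candidates (approaching greedy selection), while larger $\beta$ keeps the accepted set close to the proposal distribution.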

This explicitly generates on-policy preference pairs, yielding empirically higher win rates versus SFT and DPO-trained baselines on multiple LLM alignment benchmarks (Liu et al., 2023).

6. RSO in Algorithmic Optimization and Learning

RSO also describes optimization strategies outside of probabilistic inference.

(a) Random Search Optimization for Neural Nets

"RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks" (Tripathi et al., 2020) explores a perturb-and-reject Markov chain over neural network parameters: each weight in turn receives a proposed random perturbation, which is accepted only if the loss strictly decreases. Despite the absence of gradients, RSO efficiently discovers performant solutions with an order of magnitude fewer weight updates than SGD on MNIST and CIFAR-10, though at greater per-iteration cost (Tripathi et al., 2020).
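The perturb-and-reject loop is easy to sketch for a flat parameter vector; the toy quadratic loss and names below are illustrative and omit the paper's layer-wise scheduling:

```python
import random

def rso_step(params, loss, scale=0.1):
    """One sweep of a perturb-and-reject scheme over a flat parameter
    list: propose a Gaussian change to each weight in turn and keep it
    only if the loss strictly decreases."""
    best = loss(params)
    for i in range(len(params)):
        old = params[i]
        params[i] = old + random.gauss(0.0, scale)
        new = loss(params)
        if new < best:
            best = new          # accept the perturbation
        else:
            params[i] = old     # reject: restore the weight
    return best

# Toy example: minimize a quadratic "loss" over 5 parameters.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(5)]
quad = lambda p: sum(x * x for x in p)
losses = [rso_step(w, quad) for _ in range(50)]
```

Because every accepted move strictly lowers the loss, the recorded losses are non-increasing; the trade-off, as in the paper, is one full loss evaluation per proposed weight change.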

(b) OS* Algorithm for Unified Sampling and Optimization

The OS* algorithm (Dymetman et al., 2012) generalizes RSO: it iteratively maintains an upper bounding proposal (efficient for either sampling or optimization), incrementally refines it using rejected samples, and provably concentrates computational resources on high-probability regions. This joint approach to exact sampling and search exploits locally tractable bounds and A*-style search in high-dimensional discrete or graphical-model settings.

7. Theoretical Characterization and Efficiency Boundaries

Theoretical analysis in RSO benchmarks optimality in entropy, divergence reduction, and mean-squared error or variance. For example:

  • The minimax lower bound for adaptive rejection sampling guarantees that, absent additional structure, no method can drive the rejection rate below order $n^{-s/d}$ (up to logarithmic factors) after $n$ target density evaluations when $f$ has Hölder regularity $s$ in $d$ dimensions (Achdou et al., 2018).
  • In variational inference, variational rejection sampling monotonically tightens the ELBO and interpolates between a loose, cheap bound and exact posterior approximation at the cost of increased computation (Grover et al., 2018, Sharma et al., 2019).
  • In the discrete entropy-optimal case, ALDR matches the Knuth-Yao lower bound to within 2 bits of entropy, with no exponential scaling in the proposal space (Draper et al., 5 Apr 2025).

8. Empirical Impact and Application Breadth

RSO methods have demonstrated significant improvements across domains:

  • In structured variate generation, ALDR achieves lower entropy cost and faster wall-clock sampling than the alias method for a broad class of discrete distributions (Draper et al., 5 Apr 2025).
  • Variational RSO approaches dominate adaptive $f$-divergence and classic RDVI baselines in latent-variable models and Bayesian neural networks (Grover et al., 2018, Sharma et al., 2019).
  • Self-tuned VWS proposals in Gibbs sampling enable exact draws from nonstandard univariate conditionals—a key advance in large-scale Bayesian small-area estimation models (Raim et al., 21 Sep 2025, Raim et al., 2024).
  • In LLM alignment, RSO delivers on-policy data and unbiased learning, improving human preference and automatic win rates (Liu et al., 2023).

RSO thus represents a meta-framework—encompassing both principled, information-bound methods and pragmatic, adaptive engineering—for optimizing sampling, inference, and learning wherever rejection-based schemes provide a tractable, exact mechanism but require careful control of proposal design, envelope tightness, or theoretical risk.
