Metropolis-Hastings with Averaged Acceptance Ratios

Published 29 Dec 2020 in stat.CO and stat.ME | arXiv:2101.01253v1

Abstract: Markov chain Monte Carlo (MCMC) methods to sample from a probability distribution $\pi$ defined on a space $(\Theta,\mathcal{T})$ consist of the simulation of realisations of Markov chains $\{\theta_{n}, n\geq 1\}$ of invariant distribution $\pi$ and such that the distribution of $\theta_{i}$ converges to $\pi$ as $i\rightarrow\infty$. In practice one is typically interested in the computation of expectations of functions, say $f$, with respect to $\pi$ and it is also required that averages $M^{-1}\sum_{n=1}^{M}f(\theta_{n})$ converge to the expectation of interest. The iterative nature of MCMC makes it difficult to develop generic methods to take advantage of parallel computing environments when interested in reducing time to convergence. While numerous approaches have been proposed to reduce the variance of ergodic averages, including averaging over independent realisations of $\{\theta_{n}, n\geq 1\}$ simulated on several computers, techniques to reduce the "burn-in" of MCMC are scarce. In this paper we explore a simple and generic approach to improve convergence to equilibrium of existing algorithms which rely on the Metropolis-Hastings (MH) update, the main building block of MCMC. The main idea is to use averages of the acceptance ratio w.r.t. multiple realisations of random variables involved, while preserving $\pi$ as invariant distribution. The methodology requires limited change to existing code, is naturally suited to parallel computing and is shown on our examples to provide substantial performance improvements both in terms of convergence to equilibrium and variance of ergodic averages. In some scenarios gains are observed even on a serial machine.
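For reference, here is a minimal sketch of the standard Metropolis-Hastings update that the abstract names as the building block being modified; the target log-density, random-walk proposal, and step size below are illustrative assumptions, not taken from the paper. The paper's variant would replace the single acceptance ratio in this loop with an average of the ratio over several independent realisations of the auxiliary random variables it depends on, constructed so that $\pi$ remains the invariant distribution; that construction is not reproduced here.

```python
import numpy as np

def mh_sampler(log_pi, theta0, n_iter, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings targeting a density known up to
    normalisation through its log, `log_pi`.  Single-ratio update only;
    the averaged-acceptance-ratio scheme of the paper is not implemented."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = np.empty((n_iter, theta.size))
    log_p = log_pi(theta)
    for n in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)  # symmetric proposal
        log_p_prop = log_pi(prop)
        # Accept with probability min(1, pi(prop) / pi(theta)).
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = prop, log_p_prop
        chain[n] = theta
    return chain

# Example: sample a standard normal and form an ergodic average of f(x) = x^2.
draws = mh_sampler(lambda t: -0.5 * np.sum(t**2), theta0=0.0, n_iter=5000)
print(np.mean(draws[1000:, 0] ** 2))  # after discarding burn-in, close to 1
```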
