
Langevin Markov Chain Monte Carlo with stochastic gradients

Published 22 May 2018 in stat.ME, cs.NA, math.NA, and stat.CO | arXiv:1805.08863v2

Abstract: Monte Carlo sampling techniques have broad applications in machine learning, Bayesian posterior inference, and parameter estimation. Often the target distribution takes the form of a product distribution over a dataset with a large number of entries. For sampling schemes that use gradient information, it is cheaper to approximate the derivative using a small random subset of the data, which introduces extra noise into the system. We present a new discretization scheme for underdamped Langevin dynamics when utilizing a stochastic (noisy) gradient. This scheme is shown to bias computed averages only to second order in the stepsize, while giving exact results in the special case of sampling a Gaussian distribution with a normally distributed stochastic gradient.
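The setting the abstract describes — underdamped Langevin dynamics driven by a noisy gradient — can be sketched with a standard BAOAB-type splitting on a toy Gaussian target. This is not the paper's proposed discretization; it is a minimal illustration, with Gaussian noise added to the exact gradient to mimic a minibatch estimate, so some stepsize bias in the computed averages is expected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: standard Gaussian, U(x) = x^2 / 2, so grad U(x) = x.
def noisy_grad(x, noise_scale=0.5):
    # Add Gaussian noise to the exact gradient to mimic a minibatch
    # (stochastic) gradient estimate, as in the setting the paper studies.
    return x + noise_scale * rng.standard_normal()

def baoab_step(x, v, h, gamma=1.0):
    # B: half kick using the stochastic gradient
    v -= 0.5 * h * noisy_grad(x)
    # A: half drift
    x += 0.5 * h * v
    # O: exact Ornstein-Uhlenbeck solve for friction + thermal noise
    c = np.exp(-gamma * h)
    v = c * v + np.sqrt(1.0 - c**2) * rng.standard_normal()
    # A: half drift
    x += 0.5 * h * v
    # B: half kick
    v -= 0.5 * h * noisy_grad(x)
    return x, v

x, v = 0.0, 0.0
h = 0.1
burn_in, n_samples = 2000, 200000
samples = []
for i in range(burn_in + n_samples):
    x, v = baoab_step(x, v, h)
    if i >= burn_in:
        samples.append(x)

samples = np.array(samples)
# Computed averages should be close to the exact values (mean 0, variance 1),
# up to discretization and gradient-noise bias.
print(samples.mean(), samples.var())
```

Replacing `noisy_grad` with the exact gradient removes the extra injected noise; the paper's contribution is a scheme whose averages remain second-order accurate in the stepsize even with the stochastic gradient, and exact in the Gaussian/Gaussian-noise case.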

Citations (6)
