
A strategy to avoid particle depletion in recursive Bayesian inference

Published 3 Aug 2025 in stat.ME and stat.CO (arXiv:2508.01572v1)

Abstract: Recursive Bayesian inference, in which posterior beliefs are updated in light of accumulating data, is a tool for implementing Bayesian models in applications with streaming and/or very large data sets. As the posterior of one iteration becomes the prior for the next, beliefs are updated sequentially instead of all-at-once. Thus, recursive inference is relevant both for streaming data and for settings where data too numerous to be analyzed together can be partitioned into manageable pieces. In practice, posteriors are characterized by samples obtained using, e.g., acceptance/rejection sampling in which draws from the posterior of one iteration are used as proposals for the next. While simple to implement, such filtering approaches suffer from particle depletion, degrading each sample's ability to represent its target posterior. As a remedy, we investigate generating proposals from a smoothed version of the preceding sample's empirical distribution. The method retains computationally valuable properties of similar methods, but without particle depletion, and we demonstrate its accuracy in simulation. We apply the method to data simulated from both a simple logistic regression model and a hierarchical model originally developed for classifying forest vegetation in New Mexico using satellite imagery.
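The core idea in the abstract, recursively updating a particle approximation of the posterior while jittering resampled particles with a kernel to avoid depletion, can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic smoothed-bootstrap particle filter for a toy problem (inferring a normal mean from batched data), with the model, prior, and Silverman-style bandwidth all chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example (assumed, not from the paper): infer the mean mu of N(mu, 1)
# data arriving in batches. True mu = 2.0; prior mu ~ N(0, 10).
true_mu = 2.0
batches = [rng.normal(true_mu, 1.0, size=50) for _ in range(10)]

n_particles = 5000
particles = rng.normal(0.0, np.sqrt(10.0), size=n_particles)  # prior draws

for batch in batches:
    # Importance weights: log-likelihood of the new batch at each particle.
    logw = -0.5 * ((batch[None, :] - particles[:, None]) ** 2).sum(axis=1)
    logw -= logw.max()            # stabilize before exponentiating
    w = np.exp(logw)
    w /= w.sum()

    # Plain multinomial resampling duplicates a few high-weight particles
    # (particle depletion). Instead, resample and then perturb each draw with
    # a Gaussian kernel, i.e., propose from a smoothed version of the previous
    # sample's empirical distribution, as the abstract describes in spirit.
    idx = rng.choice(n_particles, size=n_particles, p=w)
    resampled = particles[idx]
    h = 1.06 * resampled.std() * n_particles ** (-1 / 5)  # rule-of-thumb bandwidth
    particles = resampled + rng.normal(0.0, h, size=n_particles)

print("posterior mean ~", particles.mean())
print("posterior sd   ~", particles.std())
```

After ten batches of 50 observations, the particle cloud concentrates near the true mean, and the kernel jitter keeps all particles distinct rather than collapsing onto a handful of duplicated values.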


Authors (1)
