A General Framework for Cutting Feedback within Modularised Bayesian Inference

Published 7 Nov 2022 in stat.ME, math.ST, and stat.TH (arXiv:2211.03274v3)

Abstract: Standard Bayesian inference can build models that combine information from various sources, but this inference may not be reliable if components of a model are misspecified. Cut inference, as a particular type of modularized Bayesian inference, is an alternative which splits a model into modules and cuts the feedback from the suspect module. Previous studies have focused on a two-module case, but a more general definition of a "module" remains unclear. We present a formal definition of a "module" and discuss its properties. We formulate methods for identifying modules; determining the order of modules; and building the cut distribution that should be used for cut inference within an arbitrary directed acyclic graph structure. We justify the cut distribution by showing that it not only cuts the feedback but also is the best approximation satisfying this condition to the joint distribution in the Kullback-Leibler divergence. We also extend cut inference for the two-module case to a general multiple-module case via a sequential splitting technique and demonstrate this via illustrative applications.
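The two-module cut distribution described in the abstract can be sketched numerically. In a model with parameter phi informed by reliable data Z and parameter theta informed by a suspect module with data Y, cut inference factorises the target as p(phi | Z) p(theta | phi, Y): phi is updated by module 1 alone, so feedback from the suspect module is cut, and theta is then drawn conditionally on each phi draw. The conjugate-normal model, the data, and all numbers below are hypothetical illustrations, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-module model (illustrative only):
# Module 1: Z_i ~ N(phi, 1), prior phi ~ N(0, 1)   -- trusted
# Module 2: Y_j ~ N(phi + theta, 1), prior theta ~ N(0, 1)  -- suspect
Z = rng.normal(1.0, 1.0, size=50)
Y = rng.normal(5.0, 1.0, size=50)

def posterior_normal(prior_mean, prior_var, data, obs_var):
    """Conjugate normal posterior for a mean parameter with known obs variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / obs_var)
    return post_mean, post_var

# Cut step: phi sees only module 1, so the suspect data Y cannot shift it.
phi_mean, phi_var = posterior_normal(0.0, 1.0, Z, 1.0)
phi_draws = rng.normal(phi_mean, np.sqrt(phi_var), size=10_000)

# Two-stage (nested) sampling: theta is drawn conditionally on each phi draw.
theta_draws = np.empty_like(phi_draws)
for i, phi in enumerate(phi_draws):
    m, v = posterior_normal(0.0, 1.0, Y - phi, 1.0)
    theta_draws[i] = rng.normal(m, np.sqrt(v))

print(phi_draws.mean(), theta_draws.mean())
```

Under the full joint posterior, the misspecified Y data would drag phi away from the value supported by Z; in the cut distribution, phi stays anchored to module 1 and theta absorbs the discrepancy. The paper's sequential-splitting extension generalises this two-stage construction to multiple modules on an arbitrary DAG.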

Citations (7)


Authors (2)
