
A variational neural Bayes framework for inference on intractable posterior distributions

Published 16 Apr 2024 in stat.CO and stat.ML (arXiv:2404.10899v1)

Abstract: Classical Bayesian methods with complex models are frequently infeasible due to an intractable likelihood. Simulation-based inference methods, such as Approximate Bayesian Computation (ABC), compute posteriors without accessing a likelihood function by leveraging the fact that data can be quickly simulated from the model, but they converge slowly and/or poorly in high-dimensional settings. In this paper, we propose a framework for Bayesian posterior estimation that maps data to posteriors of parameters using a neural network trained on data simulated from the complex model. Posterior distributions of model parameters are then efficiently obtained by feeding observed data into the trained neural network. We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence. Our approach yields computationally efficient and theoretically justified uncertainty quantification, which is lacking in existing simulation-based neural network approaches. Comprehensive simulation studies highlight our method's robustness and accuracy.
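The pipeline the abstract describes (simulate parameter–data pairs from the model, train a map from data to an approximate posterior, then amortize inference by plugging in the observed data) can be illustrated in a toy conjugate setting where the exact posterior is known. This is a minimal sketch, not the paper's method: the linear-Gaussian "amortizer" below is a deliberately simplified stand-in for the neural network, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model so the sketch is checkable against the exact answer:
#   theta ~ N(0, 1) (prior);  y_1..y_n | theta ~ iid N(theta, sigma^2).
n_obs, sigma = 10, 1.0

def simulate(n_sims):
    """Draw (theta, data) pairs from the model, as in simulation-based inference."""
    theta = rng.normal(0.0, 1.0, size=n_sims)                 # prior draws
    y = rng.normal(theta[:, None], sigma, size=(n_sims, n_obs))
    return theta, y.mean(axis=1)                              # summary statistic

# "Training": fit an amortized approximate posterior
#     q(theta | y) = N(a * ybar + b, s^2)
# by maximizing the average log-density of simulated theta under q.
# For this Gaussian family that reduces to least squares plus a
# residual variance (a neural network would replace the linear map).
theta_train, ybar_train = simulate(200_000)
X = np.column_stack([ybar_train, np.ones_like(ybar_train)])
(a, b), *_ = np.linalg.lstsq(X, theta_train, rcond=None)
s2 = np.mean((theta_train - (a * ybar_train + b)) ** 2)

# Amortized inference: feed observed data into the trained map.
y_obs = rng.normal(0.7, sigma, size=n_obs)
post_mean = a * y_obs.mean() + b
post_var = s2

# Exact conjugate posterior for comparison.
exact_var = 1.0 / (1.0 + n_obs / sigma**2)
exact_mean = exact_var * (n_obs / sigma**2) * y_obs.mean()
```

With enough simulated pairs, `post_mean` and `post_var` closely match the exact conjugate posterior; the point of the sketch is only that inference on new observed data costs a single forward pass through the trained map, with no likelihood evaluation.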
