On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models

Published 27 May 2023 in stat.ML and cs.LG (arXiv:2305.17583v5)

Abstract: Deep neural networks (DNNs) lack the precise semantics and definitive probabilistic interpretation of probabilistic graphical models (PGMs). In this paper, we propose an innovative solution: constructing infinite tree-structured PGMs that correspond exactly to neural networks. Our research reveals that DNNs, during forward propagation, do indeed perform approximations of PGM inference that are precise in this alternative PGM structure. Our work not only complements existing studies that describe neural networks as kernel machines or infinite-sized Gaussian processes, but also elucidates a more direct approximation that DNNs make to exact inference in PGMs. Potential benefits include improved pedagogy and interpretation of DNNs, and algorithms that can merge the strengths of PGMs and DNNs.
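
To make the flavor of the claim concrete, here is a minimal sketch, not the paper's actual construction, of the kind of approximation the abstract alludes to: a deterministic forward pass in a sigmoid network can be read as pushing expectations through the nonlinearity, i.e. approximating E[sigmoid(w·h + b)] by sigmoid(w·E[h] + b), where exact PGM inference would instead marginalize over the latent states. All weights and probabilities below are made-up illustrative values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny two-layer sigmoid belief network: three independent binary parents
# h_i ~ Bernoulli(p_i), and one child that fires with probability
# sigmoid(w . h + b). (Hypothetical toy values throughout.)
p = np.array([0.3, 0.8, 0.6])   # parent marginals
w = np.array([1.5, -2.0, 0.7])  # weights into the child unit
b = 0.1                         # child bias

# Exact marginal probability of the child: sum over all 2^3 parent states.
exact = 0.0
for bits in range(8):
    h = np.array([(bits >> i) & 1 for i in range(3)], dtype=float)
    prob_h = np.prod(np.where(h == 1.0, p, 1.0 - p))
    exact += prob_h * sigmoid(w @ h + b)

# The neural-network forward pass instead pushes the parents' means through
# the nonlinearity: E[sigmoid(w.h + b)] approximated by sigmoid(w.E[h] + b).
forward = sigmoid(w @ p + b)

print(f"exact child marginal      : {exact:.4f}")
print(f"forward-pass approximation: {forward:.4f}")
```

The two printed numbers are close but not equal; the paper's contribution, per the abstract, is an alternative infinite tree-structured PGM in which the forward-pass quantity corresponds to a precise inference computation rather than this kind of heuristic mean substitution.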

