Flow of Information in Feed-Forward Deep Neural Networks
Abstract: Feed-forward deep neural networks have been used extensively in various machine learning applications. Developing a precise understanding of the underlying behavior of neural networks is crucial for their efficient deployment. In this paper, we use an information-theoretic approach to study the flow of information in a neural network and to determine how the entropy of information changes between consecutive layers. Moreover, using the Information Bottleneck principle, we develop a constrained optimization problem that can be used in the training process of a deep neural network. Furthermore, we determine a lower bound on the level of data representation that can be achieved in a deep neural network with an acceptable level of distortion.
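For context, the Information Bottleneck principle referenced in the abstract is conventionally posed as an optimization over a compressed representation $T$ of the input $X$ that retains information about the target $Y$; the specific constrained formulation developed in the paper may differ, but a standard statement of the IB objective is:

\[
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y),
\]

where $I(\cdot;\cdot)$ denotes mutual information and $\beta > 0$ trades off compression of the input against preservation of information about the target.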