
Information Theoretic Lower Bounds for Feed-Forward Fully-Connected Deep Networks

Published 1 Jul 2020 in stat.ML and cs.LG (arXiv:2007.00796v2)

Abstract: In this paper, we study sample complexity lower bounds for the exact recovery of parameters and for a positive excess risk of a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools. We prove these lower bounds via the existence of a generative network characterized by a backwards data-generating process, where the input is generated based on the binary output, and the network is parametrized by weight parameters for the hidden layers. The sample complexity lower bound for the exact recovery of parameters is $\Omega(d r \log(r) + p)$ and for a positive excess risk is $\Omega(r \log(r) + p)$, where $p$ is the dimension of the input, $r$ reflects the rank of the weight matrices, and $d$ is the number of hidden layers. To the best of our knowledge, our results are the first information-theoretic lower bounds for this setting.
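As a reading aid, the two bounds from the abstract can be stated explicitly. Writing $n$ for the number of samples (a symbol introduced here, not used in the abstract itself), the claims are that exact recovery of the network parameters requires

$n = \Omega\big(d\, r \log(r) + p\big)$

samples, while guaranteeing a positive excess risk requires

$n = \Omega\big(r \log(r) + p\big)$

samples, where $p$ is the input dimension, $r$ reflects the rank of the hidden-layer weight matrices, and $d$ is the number of hidden layers. Note that the dependence on the depth $d$ appears only in the exact-recovery bound.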

