A Corrective View of Neural Networks: Representation, Memorization and Learning

Published 1 Feb 2020 in cs.LG, math.ST, stat.ML, and stat.TH (arXiv:2002.00274v2)

Abstract: We develop a corrective mechanism for neural network approximation: the total available non-linear units are divided into multiple groups; the first group approximates the function under consideration, the second group approximates the error produced by the first group and corrects it, the third group approximates the error produced by the first and second groups together, and so on. This technique yields several new representation and learning results for neural networks. First, we show that two-layer neural networks in the random features (RF) regime can memorize arbitrary labels for arbitrary points under a Euclidean distance separation condition using $\tilde{O}(n)$ ReLUs, which is optimal in $n$ up to logarithmic factors. Next, we give a powerful representation result for two-layer neural networks with ReLUs and smoothed ReLUs, which can achieve squared error at most $\epsilon$ with $O(C(a,d)\epsilon^{-1/(a+1)})$ units for $a \in \mathbb{N}\cup\{0\}$ when the function is smooth enough (roughly, when it has $\Theta(ad)$ bounded derivatives). In certain cases $d$ can be replaced with an effective dimension $q \ll d$. Previous results of this type implement Taylor series approximation using deep architectures. We also consider three-layer neural networks and show that the corrective mechanism yields faster representation rates for smooth radial functions. Lastly, we obtain the first $O(\mathrm{subpoly}(1/\epsilon))$ upper bound on the number of neurons required for a two-layer network to learn low-degree polynomials up to squared error $\epsilon$ via gradient descent. Even though deep networks can express these polynomials with $O(\mathrm{polylog}(1/\epsilon))$ neurons, the best learning bounds on this problem require $\mathrm{poly}(1/\epsilon)$ neurons.
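The corrective mechanism described above can be sketched numerically: split the random ReLU units into groups, fit each group's outer-layer coefficients to the residual error left by all previous groups, and sum the corrections. This is a minimal NumPy illustration of the residual-fitting idea, not the paper's construction; the target function, group sizes, and weight distributions are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative smooth 1-D target function (not from the paper).
def f(x):
    return np.sin(3 * x) + 0.5 * x**2

x = np.linspace(-1.0, 1.0, 200)
y = f(x)

def random_relu_features(x, m, rng):
    """m ReLU units with frozen random inner weights (random features regime)."""
    w = rng.normal(size=m)
    b = rng.uniform(-1.0, 1.0, size=m)
    return np.maximum(0.0, np.outer(x, w) + b)  # shape (n, m)

# Corrective mechanism: group k fits the residual left by groups 1..k-1,
# and the final approximation is the sum of all group outputs.
residual = y.copy()
approx = np.zeros_like(y)
errors = []
for group in range(4):
    Phi = random_relu_features(x, 50, rng)
    # Only the outer-layer coefficients are trained (least squares on residual).
    c, *_ = np.linalg.lstsq(Phi, residual, rcond=None)
    correction = Phi @ c
    approx += correction
    residual -= correction
    errors.append(float(np.mean(residual**2)))

print(errors)  # mean squared error after each corrective group
```

Because each group solves a least-squares problem against the current residual (and zero coefficients are always feasible), the recorded squared errors are non-increasing as groups are added.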

Citations (18)

Authors (2)
