
A deep network construction that adapts to intrinsic dimensionality beyond the domain

Published 6 Aug 2020 in stat.ML, cs.LG, math.ST, and stat.TH | arXiv:2008.02545v3

Abstract: We study the approximation of two-layer compositions $f(x) = g(\phi(x))$ via deep networks with ReLU activation, where $\phi$ is a geometrically intuitive, dimensionality-reducing feature map. We focus on two intuitive and practically relevant choices for $\phi$: the projection onto a low-dimensional embedded submanifold and the distance to a collection of low-dimensional sets. We achieve near-optimal approximation rates, which depend only on the complexity of the dimensionality-reducing map $\phi$ rather than on the ambient dimension. Since $\phi$ encapsulates all nonlinear features that are material to the function $f$, this suggests that deep nets are faithful to an intrinsic dimension governed by $f$ rather than by the complexity of the domain of $f$. In particular, the prevalent assumption of approximating functions on low-dimensional manifolds can be significantly relaxed using functions of the type $f(x) = g(\phi(x))$, with $\phi$ representing an orthogonal projection onto the same manifold.
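The composition structure described in the abstract can be illustrated with a small NumPy sketch. This is not code from the paper; it takes the simplest embedded submanifold, a $d$-dimensional linear subspace of $\mathbb{R}^D$, lets $\phi$ be the orthogonal projection onto it, and checks that $f = g \circ \phi$ is constant along directions orthogonal to the subspace. The dimensions, the choice of $g$, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D, d = 20, 2  # ambient and intrinsic dimensions (illustrative choices)

# Orthonormal basis V of a d-dimensional subspace: the simplest
# embedded submanifold. phi(x) = V V^T x is the orthogonal projection.
V, _ = np.linalg.qr(rng.standard_normal((D, d)))

def phi(x):
    # Dimensionality-reducing feature map: orthogonal projection.
    return V @ (V.T @ x)

def g(z):
    # A smooth "outer" function acting on the projected point.
    return np.sin(z.sum())

def f(x):
    # Two-layer composition f = g ∘ phi, as in the paper's setting.
    return g(phi(x))

# f is invariant to perturbations orthogonal to the subspace, so its
# effective (intrinsic) dimension is d, not the ambient dimension D.
x = rng.standard_normal(D)
n = rng.standard_normal(D)
n -= V @ (V.T @ n)  # keep only the component orthogonal to the subspace
assert np.isclose(f(x), f(x + n))
```

This invariance is the sense in which the approximation rates can depend only on the complexity of $\phi$: the ReLU network only needs to resolve the $d$ intrinsic coordinates that $\phi$ extracts.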

Citations (13)
