
The Computational Complexity of Training ReLU(s)

Published 9 Oct 2018 in cs.CC (arXiv:1810.04207v2)

Abstract: We consider the computational complexity of training depth-2 neural networks composed of rectified linear units (ReLUs). We show that, even for the case of a single ReLU, finding a set of weights that minimizes the squared error (even approximately) for a given training set is NP-hard. We also show that for a simple network consisting of two ReLUs, the error minimization problem is NP-hard, even in the realizable case. We complement these hardness results by showing that, when the weights and samples belong to the unit ball, one can (agnostically) properly and reliably learn depth-2 ReLUs with $k$ units and error at most $\epsilon$ in time $2^{(k/\epsilon)^{O(1)}} n^{O(1)}$; this extends upon a previous work of Goel, Kanade, Klivans and Thaler (2017) which provided efficient improper learning algorithms for ReLUs.
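The object at the center of these results is the squared-error training objective for a depth-2 ReLU network. The following minimal NumPy sketch (not from the paper; all function names and the toy data are illustrative) writes out that objective for a network with $k$ hidden ReLU units: the paper shows that minimizing it over the weights, even approximately, is NP-hard already for $k = 1$.

```python
import numpy as np

def relu(z):
    """Rectified linear unit, applied elementwise."""
    return np.maximum(z, 0.0)

def depth2_relu_output(W, a, X):
    """Output of a depth-2 network with k ReLU units.

    W : (k, d) array of hidden-unit weight vectors w_1, ..., w_k.
    a : (k,)  array of output-layer coefficients.
    X : (n, d) array of n training samples.
    Returns the (n,) vector with entries sum_j a_j * relu(w_j . x_i).
    """
    return relu(X @ W.T) @ a

def squared_error(W, a, X, y):
    """The training objective whose minimization the paper studies:
    the sum over samples of (network output - label)^2.
    With k = 1 and a = [1], this is the single-ReLU objective shown
    to be NP-hard to minimize, even approximately."""
    return float(np.sum((depth2_relu_output(W, a, X) - y) ** 2))

# Tiny example: 3 samples in R^2 and a single ReLU (k = 1).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])
W = np.array([[1.0, -1.0]])   # one hidden unit
a = np.array([1.0])
print(squared_error(W, a, X, y))  # objective value at these weights: 1.0
```

Note that the objective is piecewise quadratic in $W$: each sample contributes a quadratic term on the half-space where its unit is active and a constant term elsewhere, and it is this combinatorial structure of activation patterns that the hardness reductions exploit.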

Citations (42)
