An Algorithm for Approximating Continuous Functions on Compact Subsets with a Neural Network with one Hidden Layer
Published 10 Feb 2019 in cs.LG, math.FA, and stat.ML | (1902.03638v1)
Abstract: George Cybenko's landmark 1989 paper showed that there exists a feedforward neural network with exactly one hidden layer (and a finite number of neurons) that can approximate a given continuous function $f$ on the unit hypercube to arbitrary accuracy. That paper did not address how to find the weights and parameters of such a network, or whether finding them would be computationally feasible. This paper outlines an algorithm by which a neural network with exactly one hidden layer can reconstruct any continuous scalar- or vector-valued function.
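As a minimal illustration of the existence result (not the paper's constructive algorithm), the sketch below approximates a continuous function on $[0,1]$ with a single hidden layer of sigmoid units, fitting only the outer coefficients by least squares over randomly drawn inner weights; the target function, unit count, and weight scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: an arbitrary continuous function on [0, 1] (illustrative choice).
f = lambda x: np.sin(2 * np.pi * x)

# One hidden layer with N sigmoid units: g(x) = sum_j c_j * sigma(w_j * x + b_j).
N = 50
w = rng.normal(scale=10.0, size=N)   # random inner weights (not the paper's method)
b = rng.normal(scale=10.0, size=N)   # random biases
sigma = lambda z: 1.0 / (1.0 + np.exp(-z))

# Fit only the outer coefficients c_j by least squares on sample points.
x = np.linspace(0.0, 1.0, 200)
H = sigma(np.outer(x, w) + b)        # 200 x N matrix of hidden activations
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

# Measure the worst-case error of the one-hidden-layer approximant on the grid.
max_err = np.max(np.abs(H @ c - f(x)))
print(max_err)
```

With enough hidden units, the maximum error on the sample grid becomes small, consistent with the universal approximation guarantee; Cybenko's theorem concerns existence, while the paper's contribution is an explicit procedure for finding such weights.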