When does gradient descent with logistic loss find interpolating two-layer networks?
Published 4 Dec 2020 in stat.ML, cs.LG, and math.OC (arXiv:2012.02409v4)
Abstract: We study the training of finite-width two-layer smoothed ReLU networks for binary classification using the logistic loss. We show that gradient descent drives the training loss to zero if the initial loss is small enough. When the data satisfies certain cluster and separation conditions and the network is wide enough, we show that one step of gradient descent reduces the loss sufficiently that the first result applies.
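The following is a minimal sketch of the training setup the abstract describes: full-batch gradient descent on the logistic loss of a finite-width two-layer network with a smooth ReLU surrogate, applied to clustered, well-separated binary data. It is not the paper's construction; the softplus activation, the width m, the step size eta, and the synthetic two-cluster data are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def smoothed_relu(z):
    # softplus as one example of a smoothed ReLU (assumption, not the paper's choice)
    return jax.nn.softplus(z)

def network(params, X):
    W, a = params                         # hidden weights (m, d), output weights (m,)
    return smoothed_relu(X @ W.T) @ a     # scalar output per example

def logistic_loss(params, X, y):
    # labels y in {-1, +1}; mean logistic loss log(1 + exp(-y * f(x)))
    margins = y * network(params, X)
    return jnp.mean(jnp.log1p(jnp.exp(-margins)))

key = jax.random.PRNGKey(0)
d, m, n, eta, steps = 2, 512, 200, 0.5, 200   # illustrative hyperparameters

# Two well-separated clusters, one per label: a toy stand-in for the
# cluster-and-separation conditions mentioned in the abstract.
kp, kn, kw, ka = jax.random.split(key, 4)
X_pos = 0.1 * jax.random.normal(kp, (n // 2, d)) + jnp.array([2.0, 0.0])
X_neg = 0.1 * jax.random.normal(kn, (n // 2, d)) + jnp.array([-2.0, 0.0])
X = jnp.concatenate([X_pos, X_neg])
y = jnp.concatenate([jnp.ones(n // 2), -jnp.ones(n // 2)])

W = jax.random.normal(kw, (m, d)) / jnp.sqrt(d)
a = jax.random.normal(ka, (m,)) / jnp.sqrt(m)
params = (W, a)

# Plain full-batch gradient descent on the training loss.
grad_fn = jax.jit(jax.grad(logistic_loss))
for t in range(steps):
    g = grad_fn(params, X, y)
    params = jax.tree_util.tree_map(lambda p, gp: p - eta * gp, params, g)

print("final training loss:", logistic_loss(params, X, y))
```

On easy, well-clustered data like this, the training loss typically decays toward zero, which is the interpolation regime the paper analyzes; the theorem's actual conditions on width, initialization, and cluster separation are stated in the paper itself.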