Statistical learning by sparse deep neural networks
Published 15 Nov 2023 in math.ST, cs.LG, stat.ME, stat.ML, and stat.TH | (2311.08845v1)
Abstract: We consider a deep neural network estimator based on empirical risk minimization with l_1-regularization. We derive a general bound on its excess risk for regression and classification (including the multiclass case), and prove that the estimator is adaptively nearly minimax (up to logarithmic factors) simultaneously across the entire range of various function classes.
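To make the objective concrete, the following is a minimal sketch of the penalized empirical risk the abstract describes: the mean squared error of a small feed-forward network plus an l_1 penalty on all of its weights. The network architecture, data, and penalty level `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data (not from the paper)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=50)

# A small one-hidden-layer ReLU network, weights drawn at random
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def forward(X, W1, W2):
    # Single hidden ReLU layer followed by a linear output
    return np.maximum(X @ W1, 0.0) @ W2

def penalized_risk(W1, W2, lam=0.01):
    # Empirical risk (mean squared error) plus an l_1 penalty
    # on all network weights; `lam` is an assumed penalty level
    resid = forward(X, W1, W2).ravel() - y
    emp_risk = np.mean(resid ** 2)
    l1 = np.abs(W1).sum() + np.abs(W2).sum()
    return emp_risk + lam * l1

print(penalized_risk(W1, W2))
```

The estimator in the paper is a minimizer of such an objective over a class of sparse deep networks; any standard optimizer (e.g. proximal or subgradient methods) could in principle be applied to this penalized risk.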