Sharp Lower Bounds for Linearized ReLU^k Approximation on the Sphere
Abstract: We prove a saturation theorem for linearized shallow ReLU$^k$ neural networks on the unit sphere $\mathbb{S}^d$. For any antipodally quasi-uniform set of centers, if the target function has smoothness $r>\tfrac{d+2k+1}{2}$, then the best $\mathcal{L}^2(\mathbb{S}^d)$ approximation cannot converge faster than order $n^{-\frac{d+2k+1}{2d}}$. This lower bound matches existing upper bounds, thereby establishing the exact saturation order $\tfrac{d+2k+1}{2d}$ for such networks. Our results place linearized neural-network approximation firmly within the classical saturation framework and show that, although ReLU$^k$ networks outperform finite elements under equal degrees $k$, this advantage is intrinsically limited.
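Schematically, and using our own notation for the linearized network class (the precise hypotheses are those stated above), the lower bound asserted in the abstract takes the form
$$\inf_{g \in \Sigma_n} \|f - g\|_{\mathcal{L}^2(\mathbb{S}^d)} \;\geq\; c\, n^{-\frac{d+2k+1}{2d}},$$
where $\Sigma_n = \mathrm{span}\{\sigma_k(\theta_i \cdot x) : 1 \le i \le n\}$ with $\sigma_k(t)=\max(0,t)^k$ is the linear span generated by fixed antipodally quasi-uniform centers $\theta_1,\dots,\theta_n \in \mathbb{S}^d$, $f$ has smoothness $r>\tfrac{d+2k+1}{2}$ (modulo the usual exceptional class of a saturation theorem), and $c>0$ is independent of $n$. For example, on $\mathbb{S}^2$ with plain ReLU activations ($d=2$, $k=1$), the saturation order is $\tfrac{2+2+1}{2\cdot 2} = \tfrac{5}{4}$.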