Randomized Greedy Algorithms for Neural Network Optimization

Published 25 Jul 2024 in math.NA and cs.NA | arXiv:2407.17763v3

Abstract: Greedy algorithms have been successfully analyzed and applied in training neural networks for solving variational problems, with guaranteed convergence orders. In this paper, we extend the analysis of the orthogonal greedy algorithm (OGA) to convex optimization problems, establishing its optimal convergence rate. This result broadens the applicability of OGA by generalizing its optimal convergence rate from function approximation to convex optimization. We also address a barrier to the practical use of greedy algorithms: the significant computational cost of subproblems that require an exhaustive search over a discrete dictionary. We propose the more practical approach of randomly discretizing the dictionary at each iteration of the greedy algorithm. We quantify the required size of the randomized discrete dictionary and prove that, with high probability, the proposed algorithm realizes a weak greedy algorithm and achieves optimal convergence orders. Through numerical experiments on function approximation and on linear and nonlinear elliptic partial differential equations, we validate the analysis of the optimal convergence rate and demonstrate the advantage of randomized discrete dictionaries over a deterministic one, showing orders-of-magnitude reductions in the size of the discrete dictionary, particularly in higher dimensions.
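The randomized strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dictionary of one-dimensional ReLU ridge functions, the parameter sampling ranges, and the dictionary and iteration sizes are all assumptions chosen for a small function-approximation demo. Each iteration samples a fresh random discrete dictionary, selects the element most correlated with the current residual (in place of an exhaustive search over a fixed dictionary), and recomputes the approximation by least-squares (orthogonal) projection, as in OGA.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def randomized_oga(f_vals, x, n_iter=30, n_dict=200, rng=None):
    """Orthogonal greedy algorithm with a randomized discrete dictionary.

    Illustrative sketch: at each iteration a random dictionary of shallow
    ReLU ridge functions g(x) = relu(w*x + b) is drawn; the element most
    correlated with the current residual is added to the basis, and the
    approximation is updated by orthogonal (least-squares) projection.
    """
    rng = np.random.default_rng(rng)
    n = len(x)
    basis = np.empty((n, 0))          # columns: selected dictionary elements
    approx = np.zeros(n)
    for _ in range(n_iter):
        r = f_vals - approx                        # current residual
        # randomized discrete dictionary for this iteration (assumed ranges)
        w = rng.uniform(-1.0, 1.0, n_dict)
        b = rng.uniform(-1.0, 1.0, n_dict)
        G = relu(np.outer(x, w) + b)               # shape (n, n_dict)
        norms = np.linalg.norm(G, axis=0) + 1e-12
        scores = np.abs(G.T @ r) / norms           # normalized correlations
        g = G[:, np.argmax(scores)]                # greedy selection
        basis = np.column_stack([basis, g])
        coef, *_ = np.linalg.lstsq(basis, f_vals, rcond=None)
        approx = basis @ coef                      # orthogonal projection
    return approx

# usage: approximate f(x) = sin(pi * x) on [-1, 1]
x = np.linspace(-1.0, 1.0, 400)
f_vals = np.sin(np.pi * x)
approx = randomized_oga(f_vals, x, rng=0)
err = np.linalg.norm(f_vals - approx) / np.sqrt(len(x))
```

The design point the paper analyzes is the trade-off visible here: resampling a small dictionary each iteration avoids storing and searching one large fixed dictionary, while with high probability still selecting a near-best element, so the scheme behaves as a weak greedy algorithm.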

Authors (2)
