A Stochastic Operator Framework for Optimization and Learning with Sub-Weibull Errors

Published 20 May 2021 in math.OC, cs.SY, and eess.SY (arXiv:2105.09884v4)

Abstract: This paper proposes a framework for studying the convergence of stochastic optimization and learning algorithms. The framework is built around the challenges that these algorithms pose, such as (i) the presence of random additive errors (e.g., due to stochastic gradients), and (ii) random coordinate updates (e.g., due to asynchrony in distributed setups). The paper covers both convex and strongly convex problems, and it also analyzes online scenarios involving changes in the data and costs. The paper relies on interpreting stochastic algorithms as the iterated application of stochastic operators, which allows the powerful tools of operator theory to be brought to bear. In particular, we consider operators characterized by additive errors with sub-Weibull distributions (which parameterize a broad class of errors via their tail probabilities) and by random coordinate updates. Within this framework we derive convergence results in mean and in high probability, by bounding the distance of the current iterate from a solution of the optimization or learning problem. The contributions are discussed in light of federated learning applications.
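To make the iteration model in the abstract concrete, here is a minimal sketch of an inexact stochastic operator iteration: a contractive gradient-step operator T applied with symmetric sub-Weibull additive noise and Bernoulli per-coordinate updates (a simple stand-in for asynchrony). The quadratic toy problem, the `sub_weibull` sampler, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strongly convex quadratic: f(x) = 0.5 x^T A x - b^T x
n = 10
A = np.diag(np.linspace(1.0, 5.0, n))   # eigenvalues in [1, 5]
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)          # unique minimizer

gamma = 2.0 / (1.0 + 5.0)               # step size making T a contraction

def T(x):
    """Gradient-descent operator; contractive since f is strongly convex and smooth."""
    return x - gamma * (A @ x - b)

def sub_weibull(theta, nu, size):
    """Sample symmetric noise with a sub-Weibull(theta) tail:
    |e| = nu * E**theta with E ~ Exp(1), so P(|e| > t) = exp(-(t/nu)**(1/theta))."""
    mag = nu * rng.exponential(size=size) ** theta
    return rng.choice([-1.0, 1.0], size=size) * mag

theta, nu = 1.5, 1e-2   # theta > 1 gives heavier-than-sub-exponential tails
p = 0.7                 # per-coordinate update probability (asynchrony model)

x = rng.standard_normal(n)
for k in range(200):
    update = T(x) + sub_weibull(theta, nu, n)   # inexact operator application
    mask = rng.random(n) < p                    # random coordinate updates
    x = np.where(mask, update, x)
    if k % 50 == 0:
        print(f"iter {k:3d}  dist to solution = {np.linalg.norm(x - x_star):.4f}")
```

The sampler relies on the fact that if E ~ Exp(1), then nu * E**theta has tail probability exp(-(t/nu)**(1/theta)), which is exactly a sub-Weibull(theta) tail; larger theta corresponds to heavier tails, matching the way the paper parameterizes error distributions by their tail decay.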
