
Trading-Off Static and Dynamic Regret in Online Least-Squares and Beyond

Published 6 Sep 2019 in cs.LG and stat.ML (arXiv:1909.03118v2)

Abstract: Recursive least-squares algorithms often use forgetting factors as a heuristic to adapt to non-stationary data streams. The first contribution of this paper rigorously characterizes the effect of forgetting factors for a class of online Newton algorithms. For exp-concave and strongly convex objectives, the algorithms achieve dynamic regret of $\max\{O(\log T), O(\sqrt{TV})\}$, where $V$ is a bound on the path length of the comparison sequence. In particular, we show how classic recursive least-squares with a forgetting factor achieves this dynamic regret bound. By varying $V$, we obtain a trade-off between static and dynamic regret. In order to obtain more computationally efficient algorithms, our second contribution is a novel gradient descent step size rule for strongly convex functions. Our gradient descent rule recovers the order-optimal dynamic regret bounds described above. For smooth problems, we can also obtain static regret of $O(T^{1-\beta})$ and dynamic regret of $O(T^{\beta} V^*)$, where $\beta \in (0,1)$ and $V^*$ is the path length of the sequence of minimizers. By varying $\beta$, we obtain a trade-off between static and dynamic regret.
