Online and Stochastic Universal Gradient Methods for Minimizing Regularized Hölder Continuous Finite Sums

Published 15 Nov 2013 in math.NA (arXiv:1311.3832v5)

Abstract: Online and stochastic gradient methods have emerged as potent tools in large-scale optimization for both smooth convex and nonsmooth convex problems, drawn from the classes $C^{1,1}(\mathbb{R}^p)$ and $C^{1,0}(\mathbb{R}^p)$ respectively. To the best of our knowledge, however, few papers have applied incremental gradient methods to the intermediate classes of convex problems with Hölder continuous gradients, $C^{1,v}(\mathbb{R}^p)$. To bridge the gap between methods for smooth and nonsmooth problems, in this work we propose several online and stochastic universal gradient methods that do not need to know the actual degree of smoothness of the objective function in advance. We expand the scope of problems considered in machine learning to Hölder continuous functions and propose a general family of first-order methods. Regret and convergence analysis shows that our methods enjoy strong theoretical guarantees. For the first time, we establish an algorithm that enjoys a linear convergence rate for convex functions that have Hölder continuous gradients.
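The abstract does not spell out the update rule, but universal gradient methods in the sense of Nesterov adapt to the unknown Hölder exponent $v$ through a backtracking search on an $\epsilon$-relaxed quadratic upper bound, so the same code handles smooth and nonsmooth objectives alike. The sketch below illustrates that mechanism for the deterministic full-gradient case only; it is a minimal illustration under these assumptions, not the paper's online/stochastic algorithm, and the names `universal_gradient_method`, `f`, and `grad` are hypothetical.

```python
import numpy as np

def universal_gradient_method(f, grad, x0, eps=1e-3, L0=1.0, max_iter=500):
    """Sketch of a Nesterov-style universal gradient method.

    Backtracking on an eps-relaxed quadratic model lets the effective
    step size 1/L adapt to the (unknown) Hoelder smoothness of f.
    """
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(max_iter):
        g = grad(x)
        L = max(L / 2.0, 1e-12)  # optimistic decrease at each iteration
        while True:
            x_new = x - g / L    # trial gradient step
            # Accept once the eps-relaxed quadratic model upper-bounds f;
            # the eps/2 slack is what removes the need to know v.
            model = f(x) + g @ (x_new - x) \
                    + 0.5 * L * np.sum((x_new - x) ** 2) + 0.5 * eps
            if f(x_new) <= model:
                break
            L *= 2.0             # otherwise double L and retry
        x = x_new
    return x

# Usage on a toy nonsmooth-like problem: a smoothed l2 norm, which the
# method minimizes without being told its smoothness class.
f = lambda x: np.sqrt(np.sum(x ** 2) + 1e-12)
grad = lambda x: x / np.sqrt(np.sum(x ** 2) + 1e-12)
x_star = universal_gradient_method(f, grad, x0=np.ones(5))
print(np.linalg.norm(x_star))  # close to 0
```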
