Statistical learning with Lipschitz and convex loss functions

Published 2 Oct 2018 in math.ST and stat.TH (arXiv:1810.01090v2)

Abstract: We obtain risk bounds for Empirical Risk Minimizers (ERM) and minmax Median-of-Means (MOM) estimators based on loss functions that are both Lipschitz and convex. Results for the ERM are derived without assumptions on the outputs, under subgaussian assumptions on the design and a new "local Bernstein assumption" on the class of predictors. Similar results are shown for minmax MOM estimators in a close setting where the design is only assumed to satisfy moment assumptions, relaxing the subgaussian hypothesis needed for the ERM. Unlike the first analysis of minmax MOM estimators, our analysis does not rely on the small ball assumption (SBA). In particular, the basic example in nonparametric statistics where the learning class is the linear span of localized bases, which does not satisfy the SBA, can now be handled. Finally, minmax MOM estimators are analysed in a setting where the local Bernstein condition is also dropped; they are shown to achieve an oracle inequality with exponentially large probability under minimal assumptions ensuring the existence of all objects.
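
To make the minmax MOM construction concrete, here is a minimal illustrative sketch (not the paper's algorithm or code) of the alternating-subgradient heuristic often used to compute estimators of the form f_hat = argmin_f max_g MOM_K(loss_f - loss_g), instantiated with the Lipschitz, convex absolute loss over linear predictors. All names, the block count, and the step size are illustrative assumptions.

```python
import numpy as np


def median_block(values, n_blocks, rng):
    """Randomly split the indices of `values` into `n_blocks` blocks and
    return the block whose within-block mean is the median of the block means."""
    idx = rng.permutation(len(values))
    blocks = np.array_split(idx, n_blocks)
    means = np.array([values[b].mean() for b in blocks])
    return blocks[np.argsort(means)[n_blocks // 2]]


def minmax_mom(X, y, n_blocks=11, lr=0.05, n_iter=2000, seed=0):
    """Sketch of an alternating (sub)gradient heuristic for a minmax MOM
    estimator, f_hat = argmin_f max_g MOM_K(loss_f - loss_g), with the
    Lipschitz, convex absolute loss |y - <x, w>| over linear predictors.

    Each round: reshuffle the blocks, find the block realising the median
    of the block means of (loss_f - loss_g), then take one subgradient
    step for each player on that block.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    w_f = np.zeros(d)  # the learner f
    w_g = np.zeros(d)  # the adversary g
    for _ in range(n_iter):
        diff = np.abs(y - X @ w_f) - np.abs(y - X @ w_g)
        b = median_block(diff, n_blocks, rng)
        # Subgradient of the mean absolute loss on block b:
        # -(1/|b|) * sum_i sign(residual_i) * x_i.
        g_f = -(X[b].T @ np.sign(y[b] - X[b] @ w_f)) / len(b)
        g_g = -(X[b].T @ np.sign(y[b] - X[b] @ w_g)) / len(b)
        w_f -= lr * g_f  # learner descends on its own loss
        w_g -= lr * g_g  # adversary maximises the gap by minimising its loss
    return w_f


if __name__ == "__main__":
    # Toy check: linear model with heavy-tailed (Student-t, df=2) noise,
    # a regime where squared-loss ERM is fragile.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    w_true = np.arange(5.0)
    y = X @ w_true + rng.standard_t(df=2, size=500)
    print(minmax_mom(X, y))  # should land near w_true
```

Because each update is driven only by the block realising the median of the block means, a small number of outlying or heavy-tailed observations can corrupt at most a minority of blocks, which is the informal reason MOM-type estimators tolerate the weak moment assumptions described in the abstract.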
