
Learning from Survey Training Samples: Rate Bounds for Horvitz-Thompson Risk Minimizers

Published 11 Oct 2016 in math.ST and stat.TH (arXiv:1610.03316v2)

Abstract: The generalization ability of minimizers of the empirical risk in the context of binary classification has been investigated under a wide variety of complexity assumptions for the collection of classifiers over which optimization is performed. In contrast, the vast majority of the works dedicated to this issue stipulate that the training dataset used to compute the empirical risk functional is composed of i.i.d. observations. Beyond the cases where training data are drawn uniformly without replacement from a large i.i.d. sample or modelled as a realization of a weakly dependent sequence of r.v.'s, statistical guarantees when the data used to train a classifier are drawn by means of a more general sampling/survey scheme and exhibit a complex dependence structure have not been documented yet. The main purpose of this paper is to show that the theory of empirical risk minimization can be extended to situations where statistical learning is based on survey samples and knowledge of the related inclusion probabilities. Precisely, we prove that minimizing a weighted version of the empirical risk, referred to as the Horvitz-Thompson risk (HT risk), over a class of controlled complexity leads to a rate for the excess risk of order $O_{\mathbb{P}}((\kappa_N (\log N)/n)^{1/2})$ with $\kappa_N=(n/N)/\min_{i\leq N}\pi_i$, when data are sampled by means of a rejective scheme of (deterministic) size $n$ within a statistical population of cardinality $N\geq n$, a generalization of basic {\it sampling without replacement} with unequal probability weights $\pi_i>0$. Extensions to other sampling schemes are then established by a coupling argument. Beyond these theoretical results, numerical experiments are presented to show the relevance of HT risk minimization and that ignoring the sampling scheme used to generate the training dataset may completely jeopardize the learning procedure.
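The HT risk described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the toy population, the threshold classifier family, and the use of Poisson sampling as a simpler stand-in for the rejective scheme are all assumptions made here for demonstration. The key idea it does reproduce is reweighting each sampled 0-1 loss by the inverse of its first-order inclusion probability $\pi_i$ before minimizing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy population of size N with noisy binary labels
# (illustrative only; not from the paper).
N = 1000
X = rng.normal(size=(N, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=N) > 0).astype(int)

# Unequal first-order inclusion probabilities pi_i > 0, chosen arbitrarily
# here so that some units are much less likely to be sampled than others.
pi = np.clip(0.05 + 0.2 * np.abs(X[:, 1]) / np.abs(X[:, 1]).max(), 0.05, 1.0)

# Poisson sampling with probabilities pi, used as a simple stand-in for
# the fixed-size rejective scheme analyzed in the paper.
sampled = rng.random(N) < pi

def ht_risk(predict, X_s, y_s, pi_s, N):
    """Horvitz-Thompson risk: inverse-inclusion-weighted 0-1 loss."""
    errors = (predict(X_s) != y_s).astype(float)
    return np.sum(errors / pi_s) / N

def make_classifier(t):
    # Simple classifier family of controlled complexity, indexed by a
    # threshold t: predict 1 when the first coordinate exceeds t.
    return lambda X_: (X_[:, 0] > t).astype(int)

# Minimize the HT risk over a grid of thresholds (the analogue of ERM,
# but with the HT-weighted empirical risk).
thresholds = np.linspace(-2.0, 2.0, 81)
risks = [ht_risk(make_classifier(t), X[sampled], y[sampled], pi[sampled], N)
         for t in thresholds]
t_hat = thresholds[int(np.argmin(risks))]
```

Dropping the `1 / pi_s` weights here would bias the empirical risk toward regions that are oversampled, which is exactly the failure mode the paper's experiments illustrate when the sampling scheme is ignored.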
