
Robustness of OLS to sample removals: Theoretical analysis and implications

Published 28 Dec 2025 in math.ST (arXiv:2512.23069v1)

Abstract: For learned models to be trustworthy, it is essential to verify their robustness to perturbations in the training data. Classical approaches involve uncertainty quantification via confidence intervals and bootstrap methods. In contrast, recent work proposes a more stringent form of robustness: stability to the removal of any subset of $k$ samples from the training set. In this paper, we present a theoretical study of this criterion for ordinary least squares (OLS). Our contributions are as follows: (1) Given $n$ i.i.d. training samples from a general misspecified model, we prove that with high probability, OLS is robust to the removal of any $k \ll n$ samples. (2) For data of dimension $p$, OLS can withstand up to $k \ll \sqrt{np}/\log n$ sample removals while remaining robust and achieving the same error rate as OLS applied to the full dataset. Conversely, if $k$ is proportional to $n$, OLS is provably non-robust. (3) We revisit prior analyses that found several econometric datasets to be highly non-robust to sample removals. While this appears to contradict our results in (1), we demonstrate that the sensitivity is due to either heavy-tailed responses or correlated samples. Empirically, this sensitivity is considerably attenuated by classical robust methods, such as linear regression with a Huber loss.
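The abstract's notion of removal robustness can be illustrated with a small numerical sketch: fit a regression, drop the $k$ samples with the largest residuals (a cheap heuristic proxy for the worst-case $k$-subset, not the paper's certification procedure), refit, and measure the coefficient shift. With heavy-tailed responses, an OLS fit typically moves more than a Huber-loss fit. All sizes and the IRLS Huber solver below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 2, 5  # illustrative sizes; the paper's robust regime is k << sqrt(np)/log n

X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0])
# Heavy-tailed noise (Student-t, 2 dof): one source of non-robustness the abstract names
y = X @ beta + rng.standard_t(df=2, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def huber(X, y, delta=1.35, iters=50):
    """Huber-loss regression via iteratively reweighted least squares (IRLS)."""
    b = ols(X, y)
    for _ in range(iters):
        r = y - X @ b
        # Huber weights: 1 inside the threshold, delta/|r| outside
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        b = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return b

def shift(fit):
    full = fit(X, y)
    # Drop the k largest-residual samples: a greedy stand-in for the worst-case subset
    keep = np.argsort(np.abs(y - X @ full))[:-k]
    return np.linalg.norm(full - fit(X[keep], y[keep]))

print(f"OLS coefficient shift after removing {k} samples:   {shift(ols):.4f}")
print(f"Huber coefficient shift after removing {k} samples: {shift(huber):.4f}")
```

Searching over all size-$k$ subsets is combinatorial; the greedy largest-residual heuristic here only lower-bounds the true worst-case shift that the paper analyzes.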
