
Univariate Mean Change Point Detection: Penalization, CUSUM and Optimality

Published 22 Oct 2018 in math.ST, stat.ME, and stat.TH | (1810.09498v4)

Abstract: The problem of univariate mean change point detection and localization based on a sequence of $n$ independent observations with piecewise constant means has been intensively studied for more than half a century, and serves as a blueprint for change point problems in more complex settings. We provide a complete characterization of this classical problem in a general framework in which the upper bound $\sigma^2$ on the noise variance, the minimal spacing $\Delta$ between two consecutive change points and the minimal magnitude $\kappa$ of the changes are allowed to vary with $n$. We first show that consistent localization of the change points is impossible when the signal-to-noise ratio satisfies $\frac{\kappa \sqrt{\Delta}}{\sigma} < \sqrt{\log(n)}$. In contrast, when $\frac{\kappa \sqrt{\Delta}}{\sigma}$ diverges with $n$ at a rate of at least $\sqrt{\log(n)}$, we demonstrate that two computationally efficient change point estimators, one based on the solution to an $\ell_0$-penalized least squares problem and the other on the popular wild binary segmentation algorithm, are both consistent and achieve a localization rate of order $\frac{\sigma^2}{\kappa^2} \log(n)$. We further show that this rate is minimax optimal, up to a $\log(n)$ factor.
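As a concrete illustration of the CUSUM statistic that underlies binary segmentation methods such as WBS, the following is a minimal sketch for a single change point in mean. The function name, simulation parameters, and random seed are illustrative choices, not taken from the paper; this is the standard CUSUM contrast $\sqrt{t(n-t)/n}\,|\bar{y}_{1:t} - \bar{y}_{t+1:n}|$, maximized over candidate split points.

```python
import numpy as np

def cusum_stats(y):
    """CUSUM statistics at every candidate split point t = 1, ..., n-1.

    Each entry is sqrt(t*(n-t)/n) * |mean(y[:t]) - mean(y[t:])|, i.e. a
    scaled contrast between the left and right sample means.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(1, n)                     # candidate split points
    prefix = np.cumsum(y)[:-1]              # sums of the first t observations
    left_mean = prefix / t
    right_mean = (y.sum() - prefix) / (n - t)
    return np.sqrt(t * (n - t) / n) * np.abs(left_mean - right_mean)

# Simulated example: one change point at t = 100, jump size kappa = 2, sigma = 1.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])

stats = cusum_stats(y)
est = int(np.argmax(stats)) + 1             # estimated change point location
```

With this signal-to-noise ratio ($\kappa\sqrt{\Delta}/\sigma = 20 \gg \sqrt{\log 200}$), the argmax of the CUSUM statistic lands within a few observations of the true change point, consistent with the $\frac{\sigma^2}{\kappa^2}\log(n)$ localization rate discussed in the abstract.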

