
Surprise probabilities in Markov chains

Published 4 Aug 2014 in math.PR (arXiv:1408.0822v1)

Abstract: In a Markov chain started at a state $x$, the hitting time $\tau(y)$ is the first time that the chain reaches another state $y$. We study the probability $\mathbf{P}_x(\tau(y) = t)$ that the first visit to $y$ occurs precisely at a given time $t$. Informally speaking, the event that a new state is visited at a large time $t$ may be considered a "surprise". We prove the following three bounds: 1) In any Markov chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{n}{t}$. 2) In a reversible chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{\sqrt{2n}}{t}$ for $t \ge 4n + 4$. 3) For random walk on a simple graph with $n \ge 2$ vertices, $\mathbf{P}_x(\tau(y) = t) \le \frac{4e \log n}{t}$. We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): for random walk on an $n$-vertex graph and every initial vertex $x$, \[ \sum_y \Big( \sup_{t \ge 0} p^t(x, y) \Big) = O(\log n). \]
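The general bound $\mathbf{P}_x(\tau(y) = t) \le n/t$ can be checked numerically on a concrete chain. The sketch below (an illustrative check, not the paper's proof; the path-graph example and all variable names are my own choices) computes the exact first-visit distribution via the standard taboo-probability recursion, restricting the transition matrix to the states other than $y$, and verifies the bound for a range of times $t$.

```python
import numpy as np

# Small reversible chain: simple random walk on a path graph with n vertices.
# We compute P_x(tau(y) = t) exactly and check the paper's general bound
#   P_x(tau(y) = t) <= n / t.
n = 6
P = np.zeros((n, n))
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    for j in nbrs:
        P[i, j] = 1.0 / len(nbrs)

x, y = 0, n - 1

# Taboo probabilities: with Q the restriction of P to states != y and
# r the one-step probabilities into y,
#   P_x(tau(y) = t) = (initial_dist @ Q^{t-1}) @ r.
mask = [i for i in range(n) if i != y]
Q = P[np.ix_(mask, mask)]          # transitions among non-y states
r = P[mask, y]                     # one-step probabilities into y
xi = mask.index(x)

dist = np.zeros(n - 1)             # distribution over non-y states
dist[xi] = 1.0
total = 0.0                        # accumulated first-visit probability
for t in range(1, 200):
    hit_prob = dist @ r            # exact value of P_x(tau(y) = t)
    assert hit_prob <= n / t + 1e-12, (t, hit_prob)
    total += hit_prob
    dist = dist @ Q                # advance one step, avoiding y
print("bound P_x(tau(y) = t) <= n/t holds for t = 1..199")
```

By time 199 the walk on this small path has hit $y$ with probability very close to 1, so the loop covers essentially the whole first-visit distribution.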

Citations (11)


Authors (3)
