Surprise probabilities in Markov chains
Abstract: In a Markov chain started at a state $x$, the hitting time $\tau(y)$ is the first time that the chain reaches another state $y$. We study the probability $\mathbf{P}_x(\tau(y) = t)$ that the first visit to $y$ occurs precisely at a given time $t$. Informally speaking, the event that a new state is visited at a large time $t$ may be considered a "surprise". We prove the following three bounds: 1) In any Markov chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{n}{t}$. 2) In a reversible chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{\sqrt{2n}}{t}$ for $t \ge 4n + 4$. 3) For random walk on a simple graph with $n \ge 2$ vertices, $\mathbf{P}_x(\tau(y) = t) \le \frac{4e \log n}{t}$. We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): For random walk on an $n$-vertex graph, for every initial vertex $x$,
\[ \sum_y \left( \sup_{t \ge 0} p_t(x, y) \right) = O(\log n). \]
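As an illustrative sketch (not part of the paper), the first bound $\mathbf{P}_x(\tau(y) = t) \le \frac{n}{t}$ can be checked empirically for a simple chain. The example below, which assumes nothing beyond the definitions in the abstract, estimates the distribution of the hitting time $\tau(y)$ for simple random walk on an $n$-cycle started at a vertex $x$, and compares the empirical probabilities with $n/t$.

```python
import random

random.seed(0)

def hitting_time(n, x, y, max_steps=10**4):
    """First time t >= 1 at which random walk on the n-cycle,
    started at x, visits y (capped at max_steps)."""
    state, t = x, 0
    while t < max_steps:
        state = (state + random.choice((-1, 1))) % n
        t += 1
        if state == y:
            break
    return t

# Empirically estimate P_x(tau(y) = t) for the 10-cycle,
# with x and y at antipodal vertices.
n, x, y = 10, 0, 5
trials = 20_000
counts = {}
for _ in range(trials):
    t = hitting_time(n, x, y)
    counts[t] = counts.get(t, 0) + 1

# Each empirical probability should sit well below the general bound n/t.
for t in sorted(counts)[:5]:
    p_hat = counts[t] / trials
    print(f"t = {t:3d}:  P_x(tau(y) = t) ~ {p_hat:.4f}   bound n/t = {n/t:.4f}")
```

For this chain the empirical probabilities are far below $n/t$; the abstract's examples show that for less symmetric chains the bound is close to optimal.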