
An Arcsine Law for Markov Random Walks

Published 1 Mar 2017 in math.PR (arXiv:1703.00316v3)

Abstract: The classic arcsine law for the number $N_{n}^{>}:=\sum_{k=1}^{n}\mathbf{1}_{\{S_{k}>0\}}$ of positive terms, as $n\to\infty$, in an ordinary random walk $(S_{n})_{n\ge 0}$ is extended to the case when this random walk is governed by a positive recurrent Markov chain $(M_{n})_{n\ge 0}$ on a countable state space $\mathcal{S}$, that is, to a Markov random walk $(M_{n},S_{n})_{n\ge 0}$ with positive recurrent discrete driving chain. More precisely, it is shown that $n^{-1}N_{n}^{>}$ converges in distribution to a generalized arcsine law with parameter $\rho\in[0,1]$ (the classic arcsine law if $\rho=1/2$) iff the Spitzer condition $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\mathbb{P}_{i}(S_{k}>0)\ =\ \rho$$ holds for some, and then all, $i\in\mathcal{S}$, where $\mathbb{P}_{i}:=\mathbb{P}(\,\cdot\,|M_{0}=i)$ for $i\in\mathcal{S}$. It is also proved, under an extra assumption on the driving chain if $0<\rho<1$, that this condition is equivalent to the stronger variant $$\lim_{n\to\infty}\mathbb{P}_{i}(S_{n}>0)\ =\ \rho.$$ For an ordinary random walk, this was shown by Doney for $0<\rho<1$ and by Bertoin and Doney for $\rho\in\{0,1\}$.
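
For reference (not spelled out in the abstract): the generalized arcsine law with parameter $\rho\in(0,1)$ is the standard Beta$(\rho,1-\rho)$ distribution on $(0,1)$, with density $$f_{\rho}(x)\ =\ \frac{\sin(\pi\rho)}{\pi}\,x^{\rho-1}(1-x)^{-\rho},\qquad 0<x<1.$$ For $\rho=1/2$ this reduces to the classic arcsine density $$f_{1/2}(x)\ =\ \frac{1}{\pi\sqrt{x(1-x)}},\qquad\text{with distribution function}\qquad F_{1/2}(x)\ =\ \frac{2}{\pi}\arcsin\sqrt{x},$$ while the boundary cases $\rho=0$ and $\rho=1$ are understood as the point masses at $0$ and $1$, respectively.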
