Linear Algebraic Truncation Algorithm with A Posteriori Error Bounds for Computing Markov Chain Equilibrium Gradients
Abstract: The numerical computation of equilibrium reward gradients for Markov chains arises in many applications, for example in the policy improvement step of average-reward stochastic dynamic programming. When the state space is large or infinite, one typically needs to truncate the state space to arrive at a numerically tractable formulation. In this paper, we derive the first computable a posteriori error bounds for equilibrium reward gradients that account for the error induced by the truncation. Our approach uses regeneration to express equilibrium quantities as expectations of cumulative rewards over regenerative cycles. Lyapunov functions are then used to bound the contributions to these cumulative rewards and their gradients from path excursions that take the chain outside the truncation set. Our numerical results indicate that the approach can provide highly accurate bounds with truncation sets of moderate size. We further extend the approach to Markov jump processes.
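To make the underlying computation concrete, here is a minimal sketch of computing an equilibrium reward and its gradient for a truncated Markov chain by plain linear algebra. This is not the paper's algorithm: the birth-death dynamics, the sigmoid parameterization, the reflecting truncation at level N, and the reward r(x) = x are all illustrative assumptions, and the sketch omits the paper's central contribution, namely the a posteriori bound on the error that the truncation introduces.

```python
import numpy as np

def transition_matrix(theta, N):
    # Birth-death chain on {0, ..., N}: birth probability p = sigmoid(theta),
    # death probability 1 - p, reflected at the boundaries 0 and N.
    p = 1.0 / (1.0 + np.exp(-theta))
    P = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        P[x, min(x + 1, N)] += p
        P[x, max(x - 1, 0)] += 1.0 - p
    return P

def dtransition_matrix(theta, N):
    # Elementwise derivative dP/dtheta; sigmoid'(theta) = p * (1 - p).
    p = 1.0 / (1.0 + np.exp(-theta))
    dp = p * (1.0 - p)
    dP = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        dP[x, min(x + 1, N)] += dp
        dP[x, max(x - 1, 0)] -= dp
    return dP

def equilibrium_reward_and_gradient(theta, N, r):
    P, dP = transition_matrix(theta, N), dtransition_matrix(theta, N)
    n = N + 1
    # Stationary distribution: solve pi (I - P) = 0 together with pi 1 = 1.
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    # Differentiating pi P = pi gives  dpi (I - P + 1 pi) = pi dP,  dpi 1 = 0,
    # where I - P + 1 pi is invertible for an ergodic chain.
    M = np.eye(n) - P + np.outer(np.ones(n), pi)
    dpi = np.linalg.solve(M.T, pi @ dP)
    return pi @ r, dpi @ r  # equilibrium reward alpha and d(alpha)/d(theta)

# Usage: negative theta gives downward drift, so the truncation is benign here.
theta, N = -1.0, 50
r = np.arange(N + 1, dtype=float)           # reward r(x) = x
alpha, dalpha = equilibrium_reward_and_gradient(theta, N, r)
eps = 1e-6                                  # central finite-difference check
fd = (equilibrium_reward_and_gradient(theta + eps, N, r)[0]
      - equilibrium_reward_and_gradient(theta - eps, N, r)[0]) / (2 * eps)
print(f"alpha = {alpha:.6f}, analytic grad = {dalpha:.6f}, FD grad = {fd:.6f}")
```

The finite-difference agreement only checks the linear-algebra identity on the truncated chain; what the paper supplies on top of such a truncated computation is a computable bound, obtained via regeneration and Lyapunov functions, on how far the truncated gradient can be from the true one.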