On Markov Decision Processes with Borel Spaces and an Average Cost Criterion

Published 10 Jan 2019 in math.OC (arXiv:1901.03374v1)

Abstract: We consider average-cost Markov decision processes (MDPs) with Borel state and action spaces and universally measurable policies. For the nonnegative cost model and an unbounded cost model, we introduce a set of conditions under which we prove the average cost optimality inequality (ACOI) via the vanishing discount factor approach. Unlike most existing results on the ACOI, which require compactness/continuity conditions on the MDP, ours does not, and it can be applied to problems with discontinuous dynamics and one-stage costs. The key idea is to replace the compactness/continuity conditions used in prior work with what we call majorization-type conditions. In particular, we require that for each state, on selected subsets of actions at that state, the state transition stochastic kernel is majorized by finite measures, and we use this majorization property together with Egoroff's theorem to prove the ACOI. We also consider the minimum pair approach for average-cost MDPs and apply the majorization idea there. For the case of a discrete action space and strictly unbounded costs, we prove the existence of a minimum pair consisting of a stationary policy and an invariant probability measure induced by that policy. This result is derived by combining Lusin's theorem with another majorization condition we introduce, and it applies to a class of countable-action-space MDPs in which the dynamics and one-stage costs are discontinuous in the state variable.
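To give a feel for the vanishing discount factor approach mentioned in the abstract, here is a minimal numerical sketch on a toy finite MDP (an illustrative example of my own, not from the paper, which treats general Borel spaces): as the discount factor alpha approaches 1, the scaled optimal discounted value (1 - alpha) * min_x V_alpha(x) approaches the optimal average cost rho*, the constant appearing in the ACOI. The states, actions, and costs below are made up for illustration.

```python
# Toy 2-state, 2-action MDP with deterministic transitions.
# Action 0 = "stay", action 1 = "switch state".
# Costs are chosen so the optimal average cost is rho* = 1
# (switch from state 1 to state 0 at cost 0, then stay at 0 paying 1 per stage).
COST = {  # COST[state][action]
    0: {0: 1.0, 1: 3.0},
    1: {0: 2.0, 1: 0.0},
}
NEXT = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}  # deterministic next state

def discounted_value(alpha, tol=1e-9, max_iter=200_000):
    """Optimal discounted value function V_alpha, computed by value iteration."""
    V = {0: 0.0, 1: 0.0}
    for _ in range(max_iter):
        newV = {
            x: min(COST[x][a] + alpha * V[NEXT[x][a]] for a in (0, 1))
            for x in (0, 1)
        }
        if max(abs(newV[x] - V[x]) for x in (0, 1)) < tol:
            return newV
        V = newV
    return V

# Vanishing discount: (1 - alpha) * min_x V_alpha(x) -> rho* = 1 as alpha -> 1.
for alpha in (0.9, 0.99, 0.999):
    V = discounted_value(alpha)
    rho = (1 - alpha) * min(V.values())
    print(f"alpha={alpha}: (1 - alpha) * min V = {rho:.4f}")
```

In this example one can solve the discounted problem by hand: V_alpha(0) = 1/(1 - alpha) and V_alpha(1) = alpha/(1 - alpha), so (1 - alpha) * min V = alpha, which tends to the optimal average cost 1. The paper's contribution lies in making this limiting argument work without compactness/continuity assumptions, via the majorization-type conditions.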

Authors (1)