Economic Model Predictive Control as a Solution to Markov Decision Processes
Abstract: Markov Decision Processes (MDPs) offer a fairly generic and powerful framework to discuss the notion of optimal policies for dynamic systems, in particular when the dynamics are stochastic. However, computing the optimal policy of an MDP can be very difficult due to the curse of dimensionality present in solving the underlying Bellman equations. Model Predictive Control (MPC) is a very popular technique for building control policies for complex dynamic systems. Historically, MPC has focused on constraint satisfaction and steering dynamic systems towards a user-defined reference. More recently, Economic MPC was proposed as a computationally tractable way of building optimal policies for dynamic systems. When stochasticity is present, Economic MPC is close to the MDP framework. In that context, Economic MPC can be construed as a tractable heuristic to provide approximate solutions to MDPs. However, there is arguably a knowledge gap in the literature regarding these approximate solutions and the conditions for an MPC scheme to achieve closed-loop optimality. This chapter aims to clarify this approximation pedagogically, to provide the conditions for MPC to deliver optimal policies, and to explore some of their consequences.
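The curse of dimensionality the abstract alludes to stems from the exhaustive sweep over states and actions that solving the Bellman equations requires. A minimal sketch of value iteration on a made-up two-state, two-action MDP illustrates this; all transition probabilities, rewards, and the discount factor below are illustrative placeholders, not taken from the chapter:

```python
import numpy as np

# Hypothetical tiny MDP: P[a, s, s'] is the probability of moving from
# state s to s' under action a; r[a, s] is the immediate reward.
# All numbers are invented for illustration.
P = np.array([
    [[0.9, 0.1],    # action 0
     [0.2, 0.8]],
    [[0.5, 0.5],    # action 1
     [0.6, 0.4]],
])
r = np.array([
    [1.0, 0.0],     # reward of action 0 in states 0 and 1
    [0.5, 2.0],     # reward of action 1 in states 0 and 1
])
gamma = 0.9         # discount factor (assumed)

# Value iteration: apply the Bellman optimality operator
#   V(s) <- max_a ( r[a, s] + gamma * sum_s' P[a, s, s'] V(s') )
# until convergence. The cost of each sweep grows with |S|^2 * |A|,
# which is the source of the curse of dimensionality.
V = np.zeros(2)
for _ in range(1000):
    Q = r + gamma * (P @ V)      # Q[a, s], stacked matrix-vector products
    V_new = Q.max(axis=0)        # greedy backup over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)        # greedy (optimal) action per state
```

Economic MPC, as discussed in the chapter, sidesteps this exhaustive sweep by solving an open-loop optimal control problem online instead of tabulating a value function.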