A model-free first-order method for linear quadratic regulator with $\tilde{O}(1/\varepsilon)$ sampling complexity

Published 30 Nov 2022 in math.OC (arXiv:2212.00084v5)

Abstract: We consider the classic stochastic linear quadratic regulator (LQR) problem under an infinite-horizon average stage cost. By leveraging recent policy gradient methods from reinforcement learning, we obtain a first-order method that finds a stable feedback law whose objective function gap to the optimum is at most $\varepsilon$ with high probability using $\tilde{O}(1/\varepsilon)$ samples, where $\tilde{O}$ hides polylogarithmic dependence on $\varepsilon$. Our proposed method seems to have the best dependence on $\varepsilon$ within the model-free literature without the assumption that all policies generated by the algorithm are stable almost surely, and it matches the best-known rate from the model-based literature, up to logarithmic factors. The improved dependence on $\varepsilon$ is achieved by showing that the accuracy scales with the variance, rather than the standard deviation, of the gradient estimation error. Our developments that yield this improved sampling complexity fall in the category of actor-critic algorithms. The actor part involves a variational inequality formulation of the stochastic LQR problem, while in the critic part we utilize a conditional stochastic primal-dual method and show that the algorithm has the optimal rate of convergence when paired with a shrinking multi-epoch scheme.
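To make the problem setting concrete, the sketch below estimates the infinite-horizon average stage cost $J(K) = \lim_{T\to\infty} \frac{1}{T}\,\mathbb{E}\big[\sum_{t<T} x_t^\top Q x_t + u_t^\top R u_t\big]$ of a linear feedback law $u_t = -Kx_t$ by simulating one long noisy trajectory. This is only the model-free cost-evaluation primitive that methods like the one in the paper build on, not the paper's actor-critic algorithm; the matrices $A, B, Q, R$ and the gain $K$ are illustrative placeholders chosen here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system (assumed, not from the paper):
# dynamics x_{t+1} = A x_t + B u_t + w_t with small i.i.d. Gaussian noise.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # marginally unstable open loop
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                   # state cost weight
R = np.eye(1)                   # control cost weight
K = np.array([[1.0, 2.0]])      # feedback gain; A - B K has spectral radius 0.9

def average_cost(K, T=20000):
    """Monte Carlo estimate of J(K): run one trajectory of length T under
    the feedback u = -K x and average the stage costs x'Qx + u'Ru."""
    x = np.zeros(2)
    total = 0.0
    for _ in range(T):
        u = -K @ x                          # linear feedback law
        total += x @ Q @ x + u @ R @ u      # stage cost
        x = A @ x + B @ u + 0.01 * rng.standard_normal(2)
    return total / T

J_hat = average_cost(K)
```

A model-free first-order method queries only such simulated rollouts (never $A$ or $B$ directly) to estimate gradients of $J$ with respect to $K$; the paper's contribution is doing this with $\tilde{O}(1/\varepsilon)$ total samples while keeping the iterates stabilizing with high probability.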

Citations (1)
