Revisiting Strong Duality, Hidden Convexity, and Gradient Dominance in the Linear Quadratic Regulator
Abstract: The Linear Quadratic Regulator (LQR) is a cornerstone of optimal control theory, widely studied in both model-based and model-free approaches. Despite its well-established nature, certain foundational aspects remain subtle. In this paper, we revisit three key properties of policy optimization in LQR: (i) strong duality in the nonconvex policy optimization formulation, (ii) the gradient dominance property, examining when it holds and when it fails, and (iii) the global optimality of linear static policies. Using primal-dual analysis and convex reformulation, we refine and clarify existing results by leveraging Riccati equations/inequalities, semidefinite programming (SDP) duality, and the recent framework of Extended Convex Lifting (\texttt{ECL}). Our analysis confirms that LQR (1) behaves almost like a convex problem (e.g., strong duality holds) under the standard assumptions of stabilizability and detectability, and (2) exhibits strong convexity-like properties (e.g., gradient dominance) under slightly stronger conditions. In particular, we establish a broader characterization of when gradient dominance holds, using \texttt{ECL} and the notion of Cauchy directions. By clarifying and refining these theoretical insights, we hope this work contributes to a deeper understanding of LQR and may inspire further developments beyond LQR.
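The global optimality of linear static policies mentioned in (iii) can be illustrated numerically. The sketch below (a minimal example with an assumed toy system, not taken from the paper) solves a discrete-time algebraic Riccati equation by value iteration, evaluates the LQR cost of a static gain via its policy Lyapunov equation, and checks that perturbing the Riccati gain only increases the cost:

```python
import numpy as np

# Toy discrete-time LQR instance (matrices are illustrative assumptions):
# dynamics x_{t+1} = A x_t + B u_t, stage cost x'Qx + u'Ru.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the discrete algebraic Riccati equation by value iteration.
P = np.eye(2)
for _ in range(5000):
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        R + B.T @ P @ B, B.T @ P @ A
    )

# Optimal static gain from the Riccati solution: u = -K_star x.
K_star = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def policy_cost(K, n_iter=5000):
    """LQR cost J(K) = tr(P_K) for x0 ~ N(0, I), where P_K solves the
    policy Lyapunov equation P_K = Q + K'RK + (A-BK)' P_K (A-BK)."""
    Acl = A - B @ K
    if max(abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf  # non-stabilizing gain: infinite cost
    PK = np.zeros_like(Q)
    for _ in range(n_iter):
        PK = Q + K.T @ R @ K + Acl.T @ PK @ Acl
    return np.trace(PK)

J_star = policy_cost(K_star)          # cost of the Riccati gain
J_pert = policy_cost(K_star + 0.1)    # cost of a perturbed gain
assert J_star < J_pert                # the Riccati gain is globally optimal
```

This system is stabilizable and detectable (the standard assumptions under which the paper confirms convex-like behavior), so the value iteration converges and the Riccati gain is the unique minimizer over static gains.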