
Robust Mean Field Social Control: A Unified Reinforcement Learning Framework

Published 27 Feb 2025 in eess.SY and cs.SY (arXiv:2502.20029v1)

Abstract: This paper studies linear quadratic Gaussian robust mean field social control problems in the presence of multiplicative noise. We aim to compute asymptotic decentralized strategies without requiring full prior knowledge of agents' dynamics. The primary challenges lie in solving an indefinite stochastic algebraic Riccati equation for the feedback gains and an indefinite algebraic Riccati equation for the feedforward gains. To overcome these challenges, we first propose a unified dual-loop iterative framework that handles both indefinite Riccati-type equations simultaneously, and we provide rigorous convergence proofs for both the outer-loop and inner-loop iterations. Second, recognizing that biases may arise in the iterations due to estimation and modeling errors, we analyze the robustness of the proposed algorithm using the small-disturbance input-to-state stability technique. This guarantees convergence to a neighborhood of the optimal solution even in the presence of disturbances. Finally, to remove the requirement of precise knowledge of agents' dynamics, we employ the integral reinforcement learning technique to develop a data-driven method within the dual-loop iterative framework. A numerical example demonstrates the effectiveness of the proposed algorithm.
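The paper's dual-loop iteration targets indefinite stochastic Riccati equations with multiplicative noise, which requires its own machinery. As a simplified illustration of the same outer/inner structure, the sketch below applies policy iteration to the standard (deterministic, definite) discrete-time LQR Riccati equation: the outer loop improves a feedback gain, and the inner loop evaluates that gain by a fixed-point iteration on a Lyapunov (Stein) equation. The function name `dual_loop_lqr` and the system matrices are made up for illustration and are not from the paper.

```python
import numpy as np

def dual_loop_lqr(A, B, Q, R, K0, outer_tol=1e-9, inner_tol=1e-12,
                  max_outer=50, max_inner=10_000):
    """Policy-iteration solver for the discrete-time algebraic Riccati equation.

    Outer loop: policy improvement on the feedback gain K.
    Inner loop: fixed-point iteration on the Lyapunov (Stein) equation
        P = (A - B K)^T P (A - B K) + Q + K^T R K,
    which contracts whenever A - B K is Schur stable.
    """
    K = K0
    P = np.zeros_like(Q)
    for _ in range(max_outer):
        Acl = A - B @ K
        Qk = Q + K.T @ R @ K
        P = np.zeros_like(Q)
        for _ in range(max_inner):  # inner loop: evaluate the current gain K
            P_next = Acl.T @ P @ Acl + Qk
            if np.max(np.abs(P_next - P)) < inner_tol:
                P = P_next
                break
            P = P_next
        # Outer loop: improve the gain from the evaluated cost matrix P.
        K_next = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        if np.max(np.abs(K_next - K)) < outer_tol:
            return P, K_next
        K = K_next
    return P, K

# Toy double-integrator-like system (matrices chosen only for illustration).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[10.0, 10.0]])  # any initial gain making A - B K0 Schur stable
P, K = dual_loop_lqr(A, B, Q, R, K0)
```

In the paper's data-driven setting, the inner-loop policy evaluation is replaced by integral reinforcement learning from trajectory data, so the model matrices above need not be known; this sketch only mirrors the two-level loop structure.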
