
Distributional Soft Actor-Critic with Three Refinements

Published 9 Oct 2023 in cs.LG, cs.SY, and eess.SY | arXiv:2310.05858v5

Abstract: Reinforcement learning (RL) has shown remarkable success in solving complex decision-making and control tasks. However, many model-free RL algorithms suffer performance degradation from inaccurate value estimation, particularly the overestimation of Q-values, which can drive policies toward suboptimal behavior. To address this issue, we previously proposed the Distributional Soft Actor-Critic (DSAC or DSACv1), an off-policy RL algorithm that improves value estimation accuracy by learning a continuous Gaussian value distribution. Despite its effectiveness, DSACv1 suffers from training instability and sensitivity to reward scaling, caused by the high variance that return randomness induces in the critic gradients. In this paper, we introduce three key refinements to DSACv1 that overcome these limitations and further improve Q-value estimation accuracy: expected value substitution, twin value distribution learning, and variance-based critic gradient adjustment. The enhanced algorithm, termed DSAC with Three Refinements (DSAC-T or DSACv2), is systematically evaluated across a diverse set of benchmark tasks. Without task-specific hyperparameter tuning, DSAC-T matches or outperforms leading model-free RL algorithms, including SAC, TD3, DDPG, TRPO, and PPO, in every tested environment. It also maintains a stable learning process and robust performance across varying reward scales. Its effectiveness is further demonstrated through a real-world application in controlling a wheeled robot, highlighting its potential for deployment in practical robotic tasks.
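
The three refinements lend themselves to a compact illustration. Below is a minimal PyTorch-style sketch, not the authors' implementation: the network shape, the names (GaussianCritic, critic_loss, clip_bound), the batch layout, and the exact loss weighting are all assumptions made for exposition, loosely following the abstract's description of expected value substitution, twin value distribution learning, and variance-based critic gradient adjustment.

# Minimal sketch of a DSAC-T-style critic update (illustrative, not the paper's code).
import torch
import torch.nn as nn

class GaussianCritic(nn.Module):
    """Maps (state, action) to the mean and std of a Gaussian return distribution."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # outputs: [mean, log_std]
        )

    def forward(self, obs, act):
        mean, log_std = self.net(torch.cat([obs, act], dim=-1)).chunk(2, dim=-1)
        return mean, log_std.clamp(-5.0, 2.0).exp()

def critic_loss(critics, target_critics, batch,
                gamma=0.99, alpha=0.2, clip_bound=3.0):
    """Loss for both critics; rew/done/next_logp are assumed to be (B, 1) floats."""
    obs, act, rew, next_obs, done, next_act, next_logp = batch
    with torch.no_grad():
        # Refinement 2 (twin value distribution learning): keep the target
        # distribution whose mean is smaller, in clipped double-Q style.
        m1, s1 = target_critics[0](next_obs, next_act)
        m2, s2 = target_critics[1](next_obs, next_act)
        pick = m1 <= m2
        mean_t = torch.where(pick, m1, m2)
        std_t = torch.where(pick, s1, s2)
        # Refinement 1 (expected value substitution): build the TD target for
        # the mean from the expected next return (the target mean) instead of
        # a sampled return, removing one source of critic-gradient variance.
        q_target = rew + gamma * (1.0 - done) * (mean_t - alpha * next_logp)
        # A sampled return target is kept for fitting the std, since the
        # Gaussian sample carries the return randomness the std must capture.
        z_sample = q_target + gamma * (1.0 - done) * std_t * torch.randn_like(std_t)
    loss = torch.zeros(())
    for critic in critics:
        mean, std = critic(obs, act)
        # Clip the sampled target to within clip_bound predicted std devs of
        # the current mean so that outlier samples cannot dominate the update.
        z_clip = torch.clamp(z_sample,
                             (mean - clip_bound * std).detach(),
                             (mean + clip_bound * std).detach())
        # Refinement 3 (variance-based gradient adjustment): dividing both
        # error terms by a detached variance keeps the effective step size
        # roughly invariant to reward scaling. The std term regresses the
        # predicted std toward the clipped residual magnitude, a deliberate
        # simplification of the paper's likelihood-based update.
        var = std.pow(2).detach() + 1e-6
        loss = loss + ((mean - q_target).pow(2) / var
                       + (std - (z_clip - mean.detach()).abs()).pow(2) / var).mean()
    return loss

In a complete agent, a loss of this shape would sit alongside a SAC-style stochastic actor trained against the expected value (the mean) of the learned return distribution, with Polyak-averaged target critics; those surrounding pieces are omitted here for brevity.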

