MAexp: A Generic Platform for RL-based Multi-Agent Exploration
Abstract: The sim-to-real gap poses a significant challenge to RL-based multi-agent exploration because of scene quantization and action discretization. Existing platforms suffer from inefficient sampling and offer little diversity in Multi-Agent Reinforcement Learning (MARL) algorithms and scenarios, which restricts their widespread application. To fill these gaps, we propose MAexp, a generic platform for multi-agent exploration that integrates a broad range of state-of-the-art MARL algorithms and representative scenarios. Moreover, we represent exploration scenarios as point clouds, yielding high-fidelity environment mapping and a sampling speed approximately 40 times faster than that of existing platforms. Furthermore, equipped with an attention-based Multi-Agent Target Generator and a Single-Agent Motion Planner, MAexp supports arbitrary numbers of agents and accommodates various types of robots. Extensive experiments establish the first benchmark featuring several high-performance MARL algorithms across typical scenarios for robots with continuous actions, highlighting the distinct strengths of each algorithm in different scenarios.
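The abstract describes representing exploration scenarios as point clouds and tracking how much of the scene the agents have covered. As a loose illustration only (not MAexp's actual code — all function and parameter names here, such as `sensor_radius`, are assumptions), a point-cloud coverage update might be sketched as:

```python
import math

def make_grid_cloud(n=20, spacing=1.0):
    """Generate a flat n x n point cloud standing in for a scanned scene."""
    return [(i * spacing, j * spacing) for i in range(n) for j in range(n)]

def update_explored(cloud, explored, agent_pos, sensor_radius):
    """Mark every point within sensor_radius of the agent as explored."""
    ax, ay = agent_pos
    for idx, (px, py) in enumerate(cloud):
        if math.hypot(px - ax, py - ay) <= sensor_radius:
            explored[idx] = True
    return explored

def coverage(explored):
    """Fraction of scene points the team has observed so far."""
    return sum(explored) / len(explored)

if __name__ == "__main__":
    cloud = make_grid_cloud()
    explored = [False] * len(cloud)
    # Two agents scan the scene from opposite corners.
    for pos in [(0.0, 0.0), (19.0, 19.0)]:
        update_explored(cloud, explored, pos, sensor_radius=5.0)
    print(f"coverage: {coverage(explored):.2f}")
```

Operating directly on point samples like this, rather than rasterizing the scene into a quantized grid, is one plausible reason a point-cloud representation can both preserve fidelity and speed up sampling.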