Task-agnostic Decision Transformer for Multi-type Agent Control with Federated Split Training
Abstract: With the rapid advancement of artificial intelligence, knowledgeable and personalized agents have become increasingly prevalent. However, the variability in state variables and action spaces across personalized agents poses significant aggregation challenges for traditional federated learning algorithms. To address these challenges, we introduce the Federated Split Decision Transformer (FSDT), a framework explicitly designed for AI agent decision tasks. FSDT navigates the heterogeneity of personalized agents by harnessing distributed data for training while preserving data privacy. It employs a two-stage training process: local embedding and prediction models reside on client agents, while a global transformer decoder model resides on the server. A comprehensive evaluation on the D4RL benchmark shows that our algorithm performs strongly in federated split learning for personalized agents while substantially reducing communication and computational overhead relative to traditional centralized training. FSDT thus holds strong potential for efficient, privacy-preserving collaborative learning in applications such as autonomous driving decision systems. Our findings underscore the efficacy of FSDT in leveraging distributed offline reinforcement learning data to enable powerful multi-type agent decision systems.
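The split described above can be sketched in miniature: each client keeps an agent-specific embedding head and prediction head sized to its own state and action spaces, and only fixed-size hidden tokens cross the network to a shared server-side decoder. The sketch below is a hypothetical, stdlib-only illustration of that data flow under stated assumptions; the class and function names (`ClientHead`, `server_decoder`) and the toy averaging "decoder" are our own stand-ins, not the paper's implementation.

```python
import random

HIDDEN = 4  # shared hidden size that every agent type projects into

class ClientHead:
    """Client-side model: embeds local states into the shared hidden space
    and decodes server outputs back into agent-specific actions."""
    def __init__(self, state_dim, act_dim, seed=0):
        rng = random.Random(seed)
        # random linear maps standing in for learned embedding/prediction weights
        self.embed = [[rng.uniform(-1, 1) for _ in range(HIDDEN)]
                      for _ in range(state_dim)]
        self.predict = [[rng.uniform(-1, 1) for _ in range(act_dim)]
                        for _ in range(HIDDEN)]

    def embed_state(self, state):
        # state (len state_dim) -> hidden token (len HIDDEN);
        # only this token, never the raw state, is sent to the server
        return [sum(s * w for s, w in zip(state, col))
                for col in zip(*self.embed)]

    def predict_action(self, hidden):
        # hidden token from the server -> action (len act_dim)
        return [sum(h * w for h, w in zip(hidden, col))
                for col in zip(*self.predict)]

def server_decoder(tokens):
    # stand-in for the shared transformer decoder: a causal mixing step
    # that averages each token with all of its predecessors
    out = []
    for t in range(len(tokens)):
        prefix = tokens[:t + 1]
        out.append([sum(tok[i] for tok in prefix) / len(prefix)
                    for i in range(HIDDEN)])
    return out

# two agent types with different state/action dimensions share one server model
walker = ClientHead(state_dim=6, act_dim=3, seed=1)
cheetah = ClientHead(state_dim=17, act_dim=6, seed=2)

traj = [[0.1] * 6, [0.2] * 6]                  # walker's local trajectory
tokens = [walker.embed_state(s) for s in traj]  # client-side stage 1
hidden = server_decoder(tokens)                 # server-side shared stage
action = walker.predict_action(hidden[-1])      # client-side stage 2
print(len(action))  # -> 3: the walker's own action dimension
```

Because both clients emit tokens of the same `HIDDEN` size, the server never needs to know each agent's state or action dimensions, which is what lets heterogeneous agents share one decoder during federated aggregation.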