Fusion of Mixture of Experts and Generative Artificial Intelligence in Mobile Edge Metaverse

Published 4 Apr 2024 in cs.NI (arXiv:2404.03321v1)

Abstract: In the digital transformation era, the Metaverse offers a fusion of virtual reality (VR), augmented reality (AR), and web technologies to create immersive digital experiences. However, the evolution of the Metaverse is slowed by the challenges of content creation, scalability, and dynamic user interaction. Our study investigates an integration of Mixture of Experts (MoE) models with Generative Artificial Intelligence (GAI) for mobile edge computing to revolutionize content creation and interaction in the Metaverse. Specifically, we harness an MoE model's ability to efficiently manage complex data and tasks by dynamically selecting the most relevant experts, each running a different sub-model, to enhance the capabilities of GAI. We then present a novel framework that improves the quality and consistency of generated video content, and demonstrate its application through case studies. Our findings underscore the efficacy of MoE and GAI integration in redefining virtual experiences by offering a scalable, efficient pathway to realize the Metaverse's full potential.
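The expert-selection mechanism the abstract refers to can be illustrated with a minimal sparse-gating sketch. This is not the paper's implementation; the gating matrix, expert count, and top-k routing below are illustrative assumptions in the style of standard sparsely-gated MoE layers, where a learned gate scores every expert and only the k highest-scoring experts process the input.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, experts, gate_w, k=2):
    """Route input x to the top-k experts by gate score and combine
    their outputs, weighted by the renormalized gate probabilities."""
    scores = softmax(gate_w @ x)           # one probability per expert
    top = np.argsort(scores)[-k:]          # indices of the k best experts
    weights = scores[top] / scores[top].sum()
    # Only the selected sub-models run, which is what makes MoE scalable.
    return sum(w * experts[i](x) for i, w in zip(top, weights))

# Toy setup: 4 "experts", each a random linear map on an 8-dim input.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda x, W=rng.normal(size=(d, d)): W @ x for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, d))
x = rng.normal(size=d)
y = moe_forward(x, experts, gate_w, k=2)
print(y.shape)  # (8,)
```

With k=2, only two of the four sub-models execute per input; the gate's weights decide which, which is the dynamic selection the abstract describes.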
