Mobile Edge Generation: A New Era to 6G
Abstract: The concept of mobile edge generation (MEG) is proposed, in which generative artificial intelligence (GAI) models are distributed across edge servers (ESs) and user equipment (UE), enabling the joint execution of generation tasks. Several distributed deployment schemes for the GAI model are proposed to alleviate the heavy network load and long user queuing times incurred when accessing GAI models. Two MEG frameworks are proposed, namely the single-ES framework and the multi-ES framework. 1) A one-to-one joint generation framework between an ES and a UE is proposed, comprising four specific single-ES MEG protocols. These protocols allow the distributed GAI models to deliver information efficiently by transmitting seeds or sketches instead of finished content. 2) Several protocols are proposed for multi-ES MEG, enabling multiple ESs to perform a generation task cooperatively or in parallel. Finally, a case study of text-guided image-to-image generation is provided, in which a latent diffusion model is split between an ES and a UE. Simulation results demonstrate that the proposed protocols can generate high-quality images at extremely low signal-to-noise ratios, while significantly reducing the communication overhead compared to a centralized model.
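As a rough, self-contained illustration of the overhead argument in the abstract (the ES transmits a compact latent "sketch" over the wireless channel, and the UE's local decoder completes the generation), the Python sketch below simulates an AWGN channel and compares channel uses. The 4×64×64 latent and 512×512 RGB output sizes are assumptions borrowed from Stable-Diffusion-style latent diffusion models, not figures taken from the paper, and the random latent stands in for the diffusion model's actual output.

```python
import numpy as np

# Assumed sizes (illustrative): a 512x512 RGB image vs. the 4x64x64 latent
# used by Stable-Diffusion-style latent diffusion models.
IMAGE_SYMBOLS = 512 * 512 * 3    # channel uses if the ES sent raw pixels
LATENT_SYMBOLS = 4 * 64 * 64     # channel uses if the ES sends the latent "sketch"

def awgn(x: np.ndarray, snr_db: float) -> np.ndarray:
    """Pass real-valued symbols through an AWGN channel at the given SNR."""
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return x + np.random.normal(0.0, np.sqrt(noise_power), size=x.shape)

# ES side: produce a latent (random stand-in for the diffusion model's output).
rng = np.random.default_rng(0)
latent = rng.standard_normal(LATENT_SYMBOLS)

# Channel: transmit the latent at a low SNR, e.g. 0 dB.
received = awgn(latent, snr_db=0.0)

# UE side: in MEG, the UE's decoder (e.g. the LDM's VAE decoder) would map the
# noisy latent back to an image; here we only report channel-level quantities.
mse = np.mean((received - latent) ** 2)
print(f"per-symbol channel MSE at 0 dB: {mse:.3f}")
print(f"overhead ratio (latent vs. raw image): {LATENT_SYMBOLS / IMAGE_SYMBOLS:.1%}")
```

Under these assumed sizes the latent needs roughly 2% of the channel uses a raw image would, which is the kind of overhead reduction the proposed protocols target; the paper's protocols additionally exploit the fact that a learned decoder can tolerate substantial latent-domain noise, which is why generation remains viable at extremely low SNRs.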