
Green Deep Reinforcement Learning for Radio Resource Management: Architecture, Algorithm Compression and Challenge

Published 11 Oct 2019 in cs.LG, cs.AI, cs.NI, and eess.SP | (1910.05054v1)

Abstract: AI heralds a step-change in the performance and capability of wireless networks and other critical infrastructures. However, it may also cause irreversible environmental damage due to its high energy consumption. Here, we address this challenge in the context of 5G and beyond, where there is a complexity explosion in radio resource management (RRM). On the one hand, deep reinforcement learning (DRL) provides a powerful tool for scalable optimization of high-dimensional RRM problems in a dynamic environment. On the other hand, DRL algorithms consume a large amount of energy over time and risk compromising progress made in green radio research. This paper reviews and analyzes how to achieve green DRL for RRM via both architecture and algorithm innovations. Architecturally, a cloud-based training and distributed decision-making DRL scheme is proposed, in which RRM entities make lightweight local decisions while assisted by on-cloud training and updating. On the algorithm level, compression approaches are introduced for both deep neural networks and the underlying Markov Decision Processes, enabling accurate low-dimensional representations of challenges. To scale learning across geographic areas, a spatial transfer learning scheme is proposed to further promote the learning efficiency of distributed DRL entities by exploiting correlations in traffic demand. Together, our proposed architecture and algorithms provide a vision for green and on-demand DRL capability.
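The abstract's "algorithm compression" idea (shrinking the deep neural network so that distributed RRM entities can make lightweight local decisions) can be illustrated with magnitude-based weight pruning, one common DNN compression technique. This is a minimal sketch of that general technique, not the paper's specific method; the weight matrix and function names are hypothetical.

```python
# Illustrative sketch (not the paper's exact algorithm): magnitude-based
# weight pruning, a standard DNN compression technique. A cloud trainer
# could apply such pruning before shipping a lightweight policy network
# to a distributed RRM decision-making entity.

def prune_weights(weights, sparsity=0.5):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights.

    `weights` is a list of rows (a dense layer's weight matrix).
    Ties at the threshold may prune slightly more than the target fraction.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else 0.0
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

# Hypothetical tiny policy-layer weight matrix for demonstration.
W = [[0.9, -0.05, 0.3],
     [0.02, -0.7, 0.1]]
pruned = prune_weights(W, sparsity=0.5)
zeros = sum(w == 0.0 for row in pruned for w in row)
print(zeros)  # → 3 (half of the 6 weights removed)
```

Pruned weights can then be stored in a sparse format, reducing the memory footprint and per-inference energy cost at the edge, which is the motivation the abstract gives for compression.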

Citations (27)
