
Reinforcement Learning for Protocol Synthesis in Resource-Constrained Wireless Sensor and IoT Networks

Published 14 Jan 2023 in cs.NI, cs.AI, and cs.LG | (2302.05300v1)

Abstract: This article explores online protocol synthesis using Reinforcement Learning (RL). The study is performed in the context of sensor and IoT networks with ultra-low-complexity wireless transceivers. The paper introduces the use of RL, and of the Multi-Armed Bandit (MAB) in particular, for Medium Access Control (MAC) under different network and traffic conditions. It then introduces a novel learning-based protocol synthesis framework that addresses specific difficulties and limitations of medium access in both random-access and time-slotted networks. The mechanism does not rely on carrier sensing, network time synchronization, collision detection, or other complex low-level operations, making it well suited to the ultra-simple transceiver hardware used in resource-constrained sensor and IoT networks. In addition, because the nodes learn their protocols independently, the system is robust and adaptive to changes in network and traffic conditions. It is shown that the nodes can be trained to avoid collisions and to achieve network throughputs comparable to those of ALOHA-based access protocols in sensor and IoT networks with the simplest transceiver hardware. It is also shown that, using RL, it is feasible to synthesize access protocols that sustain network throughput at high traffic loads, which is not feasible in ALOHA-based systems. The system's ability to provide throughput fairness under network and traffic heterogeneity is also experimentally demonstrated.
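The abstract's core idea, independent nodes using a Multi-Armed Bandit to learn collision-free transmission slots without carrier sensing or time synchronization, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the class and function names, epsilon-greedy policy, and simulation parameters are all illustrative assumptions.

```python
import random

class SlotBandit:
    """Epsilon-greedy multi-armed bandit in which each arm is a candidate
    transmission slot in a frame. A simplified stand-in for the paper's
    learning-based MAC; names and parameters are illustrative, not
    taken from the paper."""

    def __init__(self, num_slots, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * num_slots    # times each slot was tried
        self.values = [0.0] * num_slots  # running mean reward per slot

    def choose_slot(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        # exploit: pick the slot with the highest estimated reward
        return max(range(len(self.counts)), key=lambda s: self.values[s])

    def update(self, slot, reward):
        # incremental running-mean update of the chosen arm's value
        self.counts[slot] += 1
        self.values[slot] += (reward - self.values[slot]) / self.counts[slot]


def simulate(num_nodes=4, num_slots=4, rounds=3000, seed=0):
    """Each node independently learns a slot. The only feedback is a
    binary success signal (reward 1 iff no other node picked the same
    slot), mirroring the no-carrier-sensing, no-sync setting."""
    random.seed(seed)
    nodes = [SlotBandit(num_slots) for _ in range(num_nodes)]
    successes = 0
    for _ in range(rounds):
        picks = [n.choose_slot() for n in nodes]
        for node, slot in zip(nodes, picks):
            ok = picks.count(slot) == 1  # success only if slot unshared
            node.update(slot, 1.0 if ok else 0.0)
            successes += ok
    return successes / (rounds * num_nodes)
```

With enough rounds, nodes tend to settle on distinct slots, because a shared slot yields zero reward for everyone who picked it, which loosely echoes the paper's observation that nodes can be trained to avoid collisions using only simple feedback.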
