Multi-Agent Reinforcement Learning Aided Computation Offloading in Aerial Computing for the Internet of Things

Abstract:

Low Earth orbit (LEO) satellite networks have become a necessary complement to terrestrial networks, aiming to provide worldwide, ubiquitous connectivity, especially in hard-to-reach areas (e.g., mountains, oceans, and disaster zones) where terrestrial network infrastructure is typically sparse or unavailable. However, increasingly computation-intensive Internet-of-Things (IoT) applications (e.g., real-time remote monitoring and intelligent transportation) require not only efficient and reliable communication but also substantial computing capability. Constrained by battery and on-board computing resources, IoT devices must transmit their computing tasks and application data to remote cloud servers; the limited bandwidth and high transmission delay of LEO networks then degrade the quality of service (QoS) of these applications. Recently, the combination of LEO networks and edge computing (i.e., satellite mobile edge computing, SMEC) has offered significant opportunities to address these problems: IoT devices can obtain computing resources directly from satellites rather than from remote servers, thus avoiding long-distance transmission. Given the resource constraints on satellites, the offloading policy plays a crucial role in overall system performance. In this paper, we design a hybrid offloading architecture that applies a centralized-training, distributed-execution framework. We also propose a multi-agent actor-critic reinforcement learning algorithm in which a centralized "critic", augmented with the global network state, eases the training of the distributed user equipments (UEs) by evaluating the benefit of their decisions; each UE adjusts its policy according to the critic's evaluation and makes its own decisions based on its local observations.
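
The paper's algorithm itself is not reproduced here; the following PyTorch code is only a minimal sketch of the centralized-training, distributed-execution pattern the abstract describes. Each UE owns a small actor that acts solely on its local observation, while a single centralized critic sees the global state (all observations plus all chosen actions) during training. All class names, network sizes, and the stand-in observations and reward are hypothetical illustrations, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    # Per-UE policy: maps a local observation to offloading-action logits.
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))
    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    # Centralized critic: scores the joint decision from the global state,
    # i.e., all UEs' observations plus their one-hot actions, concatenated.
    def __init__(self, n_agents, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + n_actions), hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, all_obs, all_act_onehot):
        x = torch.cat([all_obs.flatten(1), all_act_onehot.flatten(1)], dim=1)
        return self.net(x).squeeze(-1)

# One simplified update step; shapes and the reward signal are stand-ins.
n_agents, obs_dim, n_actions, batch = 3, 8, 4, 32
actors = [Actor(obs_dim, n_actions) for _ in range(n_agents)]
critic = CentralCritic(n_agents, obs_dim, n_actions)
opt_a = torch.optim.Adam([p for a in actors for p in a.parameters()], lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

obs = torch.randn(batch, n_agents, obs_dim)   # dummy local observations
reward = torch.randn(batch)                   # dummy global reward

# Distributed execution: each UE acts only on its own observation.
dists = [actors[i](obs[:, i]) for i in range(n_agents)]
acts = [d.sample() for d in dists]
onehot = torch.stack([F.one_hot(a, n_actions).float() for a in acts], dim=1)

# Centralized training: the critic evaluates the joint state-action pair.
value = critic(obs, onehot)
critic_loss = F.mse_loss(value, reward)
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

# Each actor ascends a policy gradient weighted by the critic's advantage,
# i.e., it adjusts its policy according to the critic's evaluation.
advantage = (reward - value).detach()
logp = torch.stack([d.log_prob(a) for d, a in zip(dists, acts)], dim=1)
actor_loss = -(logp.sum(dim=1) * advantage).mean()
opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

Note that in this pattern the critic is needed only during training: at execution time each UE carries nothing beyond its own actor and local observation, which is what makes distributed deployment on resource-constrained devices practical.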