Deep Reinforcement Learning for Shared Offloading Strategy in Vehicle Edge Computing
Abstract:

Vehicular edge computing (VEC) effectively reduces the computing load of vehicles by offloading computing tasks from vehicle terminals to edge servers. However, as the number of offloaded tasks grows, so do the transmission time and energy consumption of the network. To reduce the computing load of edge servers and improve system responsiveness, a shared offloading strategy based on deep reinforcement learning (DRL) is proposed for the complex computing environment of the Internet of Vehicles (IoV). The shared offloading strategy exploits the commonality among vehicle task requests: similar computing tasks from different vehicles can reuse the computing results of previously submitted tasks. The strategy adapts to complex IoV scenarios; each vehicle can share the offloading state of the VEC servers and then adaptively select among three computing modes: local execution, task offloading, and shared offloading. In this article, the network state and the offloading strategy space are the inputs to the DRL model. Through DRL, each task unit selects the offloading strategy with the lowest energy consumption in each time period under the dynamic IoV transmission and computing environment. Compared with existing proposals and DRL-based algorithms, the proposed strategy effectively reduces the delay and energy consumption required for task offloading.
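To make the decision loop concrete, the following is a minimal, illustrative sketch of how a DRL agent could map an observed network state to one of the three computing modes via an epsilon-greedy Q-network. The network architecture, the state features, and all identifiers here are assumptions for illustration only, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical action set: the three computing modes named in the abstract.
ACTIONS = ["local_execution", "task_offloading", "shared_offloading"]

class OffloadingQNet(nn.Module):
    """Q-network mapping an observed network/task state to one Q-value per mode."""
    def __init__(self, state_dim: int, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_mode(qnet: OffloadingQNet, state: torch.Tensor,
                epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice among the three computing modes."""
    if torch.rand(1).item() < epsilon:
        # Explore: pick a random mode.
        return ACTIONS[int(torch.randint(len(ACTIONS), (1,)).item())]
    with torch.no_grad():
        q_values = qnet(state)
    # Exploit: pick the mode with the highest estimated value
    # (e.g., lowest expected delay/energy cost, encoded as reward).
    return ACTIONS[int(q_values.argmax().item())]

# Example with an assumed 6-dimensional state (e.g., task size, vehicle CPU
# load, channel gain, server queue length, shared-result cache hit, deadline).
qnet = OffloadingQNet(state_dim=6)
state = torch.rand(6)
print(select_mode(qnet, state))
```

In a full system, the reward would combine the delay and energy terms the abstract optimizes, and a shared-offloading action would return a cached result when an equivalent task has already been computed; those details are beyond the scope of this sketch.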