Edge-Enabled Two-Stage Scheduling Based on Deep Reinforcement Learning for Internet of Everything

Abstract:

The Internet of Everything (IoE) plays an increasingly indispensable role in modern intelligent applications. These applications have strict real-time requirements under limited network and computing resources, which makes it impractical to transfer and process the tremendous amount of raw data in a central cloud. An edge–cloud computing infrastructure allows most of the data to be processed on nearby edge nodes, so that only the extracted and encrypted key features are transmitted to the data center. This enables end–edge–cloud big data intelligence for IoE through a typical two-stage data processing scheme while satisfying data security constraints. In this study, a deep-reinforcement-learning-enhanced two-stage scheduling (DRL-TSS) model is proposed to address this NP-hard scheduling problem in end–edge–cloud Internet of Things systems, allocating computing resources within an edge-enabled infrastructure so that computing tasks are completed at minimum cost. A presorting scheme based on Johnson's rule is developed to preprocess the two-stage tasks on multiple executors, and a DRL mechanism minimizes the overall makespan using a newly designed instant reward that accounts for the maximal utilization of each executor in edge-enabled two-stage scheduling. The performance of the method is evaluated against three existing scheduling techniques, and experimental results show that the proposed algorithm achieves better learning efficiency and scheduling performance, with a 1.1-approximation to the optimum for the targeted IoE applications.
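
To make the presorting step concrete, the sketch below illustrates Johnson's rule for a classical two-stage (two-machine) flow shop: jobs whose first-stage time does not exceed their second-stage time are scheduled first, in increasing order of first-stage time, and the remaining jobs follow in decreasing order of second-stage time. This is a minimal illustration only; the job data and the makespan helper are hypothetical, and the paper's actual presorting over multiple parallel executors is more involved.

    def johnson_order(jobs):
        """jobs: list of (job_id, stage1_time, stage2_time).
        Returns a sequence minimizing makespan on a two-machine flow shop."""
        # Jobs with stage-1 time <= stage-2 time go first,
        # sorted by increasing stage-1 time ...
        front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
        # ... the remaining jobs go last, sorted by decreasing stage-2 time.
        back = sorted((j for j in jobs if j[1] > j[2]),
                      key=lambda j: j[2], reverse=True)
        return front + back

    def makespan(sequence):
        """Completion time of the last job on stage 2 for a given sequence."""
        t1 = t2 = 0
        for _, a, b in sequence:
            t1 += a                # stage 1 finishes this job at time t1
            t2 = max(t2, t1) + b   # stage 2 starts once both are ready
        return t2

    # Hypothetical example data: (job_id, stage1_time, stage2_time).
    jobs = [("j1", 3, 6), ("j2", 5, 2), ("j3", 1, 2), ("j4", 7, 5)]
    order = johnson_order(jobs)
    print([j[0] for j in order], "makespan:", makespan(order))
    # -> ['j3', 'j1', 'j4', 'j2'] makespan: 18

In the DRL-TSS setting, such an ordering serves as a preprocessing pass over the two-stage tasks before the learned scheduler assigns them to executors.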