An Efficient Distributed Machine Learning Framework in Wireless D2D

Abstract:

Facing heavy traffic burdens and data-privacy concerns, distributed machine learning (DML) has been envisioned as a promising computing paradigm for enabling edge intelligence by extracting and establishing models collaboratively in wireless networks. Unlike centralized methods, DML involves frequent exchanges of local models among distributed devices, which suffers from low efficiency in terms of convergence rate and delay; how to train DML efficiently in wireless networks has therefore attracted extensive interest. Rather than assuming a fixed or centralized topology, in this paper we study the convergence and system implementation of DML over a wireless device-to-device (D2D) network. First, we introduce the DML training process and the system model of this network, in which the total delay to reach convergence is minimized. Second, we analyze the convergence rate to characterize the effects of the synchronization frequency and the network topology. To improve the efficiency of DML training, we propose a system implementation approach that improves the convergence rate and reduces the per-iteration delay by setting the network topology and synchronization frequency, followed by an optimal resource allocation policy. Finally, we conduct experiments on an image classification task. Simulation results indicate that the proposed D2D framework effectively reduces training delay and improves computation efficiency by 39% in a heterogeneous environment, consistent with the theoretical analysis.
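The training pattern the abstract describes — devices running local updates and periodically synchronizing models with D2D neighbors instead of a central server — can be sketched in a few lines. The following is a minimal illustrative example, not the paper's exact algorithm: it assumes a ring topology, a doubly stochastic mixing matrix `W`, a synchronization period `tau`, and a simple quadratic objective standing in for real local loss functions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_devices, dim, tau, steps, lr = 5, 3, 4, 200, 0.1

targets = rng.normal(size=(n_devices, dim))  # per-device local optima (toy data)
models = rng.normal(size=(n_devices, dim))   # one local model copy per device

# Doubly stochastic mixing matrix for a ring D2D topology:
# each device averages its model with its two neighbors.
W = np.zeros((n_devices, n_devices))
for i in range(n_devices):
    W[i, i] = W[i, (i - 1) % n_devices] = W[i, (i + 1) % n_devices] = 1 / 3

for t in range(steps):
    grads = models - targets        # gradient of 0.5 * ||x - target||^2
    models -= lr * grads            # local update on every device
    if (t + 1) % tau == 0:          # synchronize every tau local steps
        models = W @ models         # one D2D gossip-averaging round

# Because W is doubly stochastic, the average model converges to the
# minimizer of the global (average) objective: the mean of the local optima.
print(np.linalg.norm(models.mean(axis=0) - targets.mean(axis=0)))
```

Tuning `tau` trades communication delay against consensus quality, and the spectral gap of `W` (set by the D2D topology) governs how fast disagreement between devices shrinks — the two knobs the abstract's convergence analysis is concerned with.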