Latency-Oriented Secure Wireless Federated Learning: A Channel-Sharing Approach With Artificial Jamming

Abstract:

As a promising framework for distributed machine learning (ML), wireless federated learning (FL) faces the threat of eavesdropping attacks when a trained ML model is sent over a radio channel. To address this threat, we propose channel-sharing-based artificial jamming to increase the secrecy throughput of FL clients (FCs). Specifically, while an FC performs local model training, a selected device not involved in the FL, such as a sensor node (SN), opportunistically accesses the FC’s channel to transmit its sensing data. In return, when the FC uploads its locally trained model to the FL server (FLS), the selected SN provides artificial jamming to increase the FC’s secrecy throughput. Considering multiple FCs and SNs, we first take a given pairing of FCs and SNs and optimize the local training time, the model uploading time, and the transmit power of the FCs to minimize the total latency of FL training. After proving the convexity of this optimization problem, we propose an efficient algorithm to derive the semi-analytical solution. We then investigate the pairing of the FCs and the SNs to minimize a system-wide cost reflecting both energy consumption and latency. The resulting problem is a bicriteria pairing problem, for which we propose an efficient algorithm to compute the optimal pairing. Numerical results demonstrate the efficiency and performance advantage of our proposed channel-sharing-based approach with artificial jamming in comparison with different benchmark schemes.
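To illustrate why artificial jamming raises secrecy throughput, the sketch below evaluates the standard (Wyner-type) secrecy-rate expression, in which the SN's jamming power degrades only the eavesdropper's SINR while the legitimate link (FC to FLS) is assumed unaffected, e.g. because the known jamming waveform can be cancelled. All channel gains, powers, and the noise level are illustrative placeholders, not the paper's system model.

```python
import math

def secrecy_rate(p_fc, g_main, g_eve, p_jam, g_jam_eve, noise):
    """Secrecy rate [bit/s/Hz]: [C_main - C_eve]^+.

    The jamming power p_jam raises only the eavesdropper's interference
    floor; the legitimate FC->FLS link sees noise alone (an assumption
    for illustration, e.g. known-jamming cancellation at the FLS).
    """
    c_main = math.log2(1 + p_fc * g_main / noise)
    c_eve = math.log2(1 + p_fc * g_eve / (noise + p_jam * g_jam_eve))
    return max(0.0, c_main - c_eve)

# Same FC transmission, without vs. with SN jamming (arbitrary numbers).
r_no_jam = secrecy_rate(1.0, 0.8, 0.5, 0.0, 0.3, 0.1)
r_jam = secrecy_rate(1.0, 0.8, 0.5, 1.0, 0.3, 0.1)
```

With these placeholder values the jamming raises the secrecy rate from roughly 0.58 to 2.0 bit/s/Hz, since the eavesdropper's capacity term shrinks while the legitimate term is unchanged.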
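The bicriteria pairing step can be pictured as a one-to-one assignment of FCs to SNs that minimizes a weighted sum of energy and latency. The brute-force search below is only a conceptual sketch: the per-pair cost matrices and the weight `alpha` are invented for illustration (in the paper, each pair's cost comes from the latency-minimization subproblem, and the pairing is computed by an efficient algorithm rather than exhaustive search).

```python
from itertools import permutations

def optimal_pairing(energy, latency, alpha):
    """Exhaustively search all one-to-one FC->SN assignments for the
    minimum weighted system cost alpha*energy + (1-alpha)*latency.

    energy[i][j], latency[i][j]: hypothetical costs when FC i pairs
    with SN j. Returns (assignment, cost), where assignment[i] is the
    SN index paired with FC i.
    """
    n = len(energy)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(
            alpha * energy[i][perm[i]] + (1 - alpha) * latency[i][perm[i]]
            for i in range(n)
        )
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# Toy instance with 3 FCs and 3 SNs (arbitrary numbers).
energy = [[2.0, 4.0, 3.0], [3.0, 1.0, 5.0], [4.0, 2.0, 1.0]]
latency = [[1.0, 3.0, 2.0], [2.0, 2.0, 4.0], [3.0, 1.0, 2.0]]
pairing, cost = optimal_pairing(energy, latency, alpha=0.5)
```

Brute force costs O(n!) and only serves to make the objective concrete; a practical solver would treat this as a min-cost bipartite matching (assignment) problem, solvable in polynomial time.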