Feedback Delay Tolerant Proactive Caching Scheme Based on Federated Learning at the Wireless Edge
Abstract:

Edge caching has emerged as a promising approach to accommodate the explosive growth of mobile data traffic in 6G networks. One critical issue in edge caching is file popularity prediction. Federated learning (FL) based distributed algorithms can predict file popularity while avoiding the user privacy issues of centralized algorithms. However, due to the heterogeneity of user devices, the feedback time (i.e., the time to update and upload model parameters) varies across users, which causes feedback delay in the FL-based edge caching framework. In this letter, we devise a feedback delay-tolerant proactive caching scheme (FLASH) based on FL, in which each user applies hybrid filtering on stacked autoencoders to train the prediction model locally. Experimental results indicate that FLASH can effectively handle feedback delay and outperform counterpart algorithms in terms of cache efficiency.
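To illustrate the feedback-delay problem described above, the following is a minimal sketch of one way a server could aggregate user updates that arrive with device-dependent delays, using a staleness-weighted FedAvg-style rule. This is not the paper's actual FLASH algorithm; the function name `aggregate_updates`, the exponential staleness weighting, and the mixing coefficient are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: staleness-aware aggregation of delayed user updates.
# Each user trains a local popularity-prediction model (e.g., a stacked
# autoencoder) and uploads its parameters after a device-dependent delay.

def aggregate_updates(global_params, updates, current_round, decay=0.5):
    """Blend delayed user updates into the global model.

    updates: list of (params, round_sent) tuples; round_sent records when
    the user computed the update, so (current_round - round_sent) is its
    staleness. Staler updates receive exponentially smaller weights.
    """
    weights = np.array([decay ** (current_round - r) for _, r in updates])
    weights /= weights.sum()
    blended = sum(w * p for w, (p, _) in zip(weights, updates))
    # Mix the blended update with the current global parameters so a burst
    # of stale feedback cannot overwrite recent global progress.
    return 0.5 * global_params + 0.5 * blended

# Example: three users report with different delays relative to round 10.
global_params = np.zeros(4)
updates = [(np.ones(4), 10), (2 * np.ones(4), 9), (3 * np.ones(4), 7)]
print(aggregate_updates(global_params, updates, current_round=10))
```

Under this assumed rule, a user whose feedback is several rounds old still contributes to the global model, but with a weight that decays with its staleness, which is one common way delay-tolerant FL schemes avoid discarding slow devices entirely.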