Abstract:
Unmanned Aerial Vehicles (UAVs) can serve as aerial base stations for data collection in next-generation wireless networks owing to their high adaptability and maneuverability. This paper investigates a scenario in which multiple UAVs cooperatively fly over heterogeneous ground users (GUs) and collect data without a central controller. Taking into account the signal-to-interference-plus-noise ratio (SINR) and fairness among users, we jointly optimize the UAV trajectories and the UAV-GU associations to maximize the total throughput and energy efficiency. We formulate the long-term optimization problem as a decentralized partially observable Markov decision process (DEC-POMDP) and derive an approach that combines a coalition formation game (CFG) with multi-agent deep reinforcement learning (MADRL). We first cast the discrete association scheduling problem as a non-cooperative game and use the CFG algorithm to obtain a decentralized scheme that converges to a Nash equilibrium (NE). Then, a MADRL-based technique is developed to optimize the continuous trajectories and energy consumption in a centralized-training, decentralized-execution manner. Simulation results demonstrate that the proposed algorithm outperforms commonly used schemes from the literature in terms of fair throughput and energy consumption while operating in a distributed manner.
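(For illustration only, a minimal sketch of the switch-based coalition formation idea described above, not the paper's algorithm. The utility function, the rate matrix, and the update order below are hypothetical placeholders; the paper's SINR model, fairness metric, and convergence proof are given in the full text.)

```python
# Illustrative sketch: each GU repeatedly performs a unilateral "switch" to the
# UAV coalition that improves its own utility; the loop stops when no GU can
# improve (a Nash-stable partition) or a round cap is reached. Whether the
# process provably converges to an NE depends on the utility structure.
import random


def utility(gu, uav, assignment, rate):
    """Hypothetical utility: the GU's achievable rate to a UAV, discounted by
    how many other GUs already share that UAV (a crude interference/fairness proxy)."""
    members = sum(1 for g, u in assignment.items() if u == uav and g != gu)
    return rate[gu][uav] / (1 + members)


def coalition_formation(num_gus, num_uavs, rate, max_rounds=1000, seed=0):
    """Decentralized association via repeated switch operations."""
    rng = random.Random(seed)
    assignment = {g: rng.randrange(num_uavs) for g in range(num_gus)}
    for _ in range(max_rounds):
        improved = False
        for g in range(num_gus):
            best_uav = assignment[g]
            best_val = utility(g, best_uav, assignment, rate)
            for u in range(num_uavs):
                if u == assignment[g]:
                    continue
                val = utility(g, u, assignment, rate)
                if val > best_val + 1e-9:
                    best_uav, best_val = u, val
            if best_uav != assignment[g]:
                assignment[g] = best_uav  # unilateral switch to a better coalition
                improved = True
        if not improved:
            break  # no GU benefits from switching: Nash-stable partition
    return assignment


if __name__ == "__main__":
    rng = random.Random(1)
    num_gus, num_uavs = 8, 3
    # Hypothetical achievable rates (e.g., derived from SINR) per GU-UAV pair.
    rate = [[rng.uniform(0.5, 2.0) for _ in range(num_uavs)] for _ in range(num_gus)]
    print(coalition_formation(num_gus, num_uavs, rate))
```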