Abstract:
Motion planning is critical to realizing the autonomous operation of mobile robots. As the complexity and randomness of robot application scenarios increase, the planning capability of classical hierarchical motion planners is challenged. With the development of machine learning, the deep reinforcement learning (DRL)-based motion planner has gradually become a research hotspot due to its several advantageous features. The DRL-based motion planner is model-free and does not rely on a prior structured map. Most importantly, the DRL-based motion planner unifies the global planner and the local planner. In this paper, we provide a systematic review of various motion planning methods. First, we summarize representative and state-of-the-art works for each submodule of the classical motion planning architecture and analyze their performance features. Then, we concentrate on summarizing reinforcement learning (RL)-based motion planning approaches, including motion planners combined with RL improvements, map-free RL-based motion planners, and multi-robot cooperative planning methods. Finally, we analyze in detail the pressing challenges faced by these mainstream RL-based motion planners, review some state-of-the-art works addressing these issues, and propose suggestions for future research.