Abstract:
Cable-driven parallel robots (CDPRs) exhibit complex cable dynamics and operate under environmental uncertainties, which pose challenges to their precise control. This article introduces reinforcement learning to offset the negative effects of these uncertainties on the control performance of CDPRs. The problem of controller design for CDPRs is investigated within the framework of deep reinforcement learning. A learning-based control algorithm is proposed to compensate for uncertainties arising from cable elasticity, mechanical friction, etc. A basic control law is given for the nominal model, and a Lyapunov-based deep reinforcement learning control law is designed on top of it. Moreover, the stability of the closed-loop tracking system under the reinforcement learning algorithm is proved. Both simulations and experiments validate the effectiveness and advantages of the proposed control algorithm.