Real-Time Robotic Mirrored Behavior of Facial Expressions and Head Motions Based on Lightweight Networks

Abstract:

The ability of a humanoid robot to imitate facial expressions with simultaneous head motions is crucial to natural human–robot interaction. Such mirrored behavior, transferred from human beings to humanoid robots, requires both high similarity and real-time performance. To fulfill these needs, this article proposes a method for real-time robotic mirroring of facial expressions and head motions based on lightweight networks. First, a humanoid robot that can change the state of its facial organs and neck through servo displacements is developed to achieve the mirrored behavior of facial expressions and head motions. Second, to overcome the high latency of deep learning models running on embedded devices, a lightweight deep learning network is constructed for detecting facial feature points, reducing model size and improving inference speed without degrading model performance. Finally, a mapping from the 68 facial feature points to optimal servo displacements is established to realize the mirrored behavior from humans to the humanoid robot. The experimental results show that the facial feature point recognition method based on the lightweight model outperforms other state-of-the-art methods, and that our head motion tracking method maintains high accuracy relative to the gold-standard NOKOV optical motion capture system. Overall, our method ensures the accurate and real-time generation of robot mirrored behavior and provides a useful reference for efficient and natural interaction between humans and robots.
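The abstract does not specify the form of the landmark-to-servo mapping. As a minimal illustrative sketch, assuming the common iBUG 68-point landmark convention and a simple linear mapping (both assumptions, not taken from the paper), the mouth-opening component of such a mapping might be computed as follows:

```python
import numpy as np

# Hypothetical landmark indices under the iBUG 68-point annotation scheme:
# 62/66 are the inner upper/lower lip centers, 36/45 the outer eye corners.
UPPER_LIP, LOWER_LIP = 62, 66
LEFT_EYE_CORNER, RIGHT_EYE_CORNER = 36, 45

def mouth_servo_angle(landmarks: np.ndarray,
                      angle_min: float = 0.0,
                      angle_max: float = 45.0) -> float:
    """Map mouth opening to a jaw-servo angle in degrees (illustrative only).

    landmarks: (68, 2) array of (x, y) facial feature points.
    The opening is normalized by the inter-ocular distance so the
    mapping is invariant to the subject's distance from the camera.
    """
    opening = np.linalg.norm(landmarks[UPPER_LIP] - landmarks[LOWER_LIP])
    scale = np.linalg.norm(landmarks[LEFT_EYE_CORNER] - landmarks[RIGHT_EYE_CORNER])
    ratio = np.clip(opening / scale, 0.0, 0.5) / 0.5  # 0 = closed, 1 = wide open
    return angle_min + ratio * (angle_max - angle_min)
```

Normalizing by the inter-ocular distance is a standard way to make such a mapping scale-invariant; the paper's actual mapping and servo calibration may differ.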