Motion Projection Consistency Based 3-D Human Pose Estimation With Virtual Bones From Monocular Video
Abstract:

Real-time 3-D human pose estimation is crucial for human–computer interaction. Estimating 3-D human pose from monocular video alone is cheap and practical. However, recent bone-splicing-based 3-D human pose estimation methods suffer from cumulative error. In this article, the concept of virtual bones is proposed to address this challenge. Virtual bones are imaginary bones between nonadjacent joints. They do not exist in reality, but they introduce new loop constraints for the estimation of 3-D human joints. The proposed network predicts real bones and virtual bones simultaneously. The final lengths of the real bones are constrained and learned through the loops formed by the predicted real and virtual bones. In addition, the motion constraints of joints across consecutive frames are considered. The consistency between the 2-D projected position displacement predicted by the network and the real 2-D displacement captured by the camera is proposed as a new projection consistency loss for learning the 3-D human pose. Experiments on the Human3.6M data set demonstrate the good performance of the proposed method. Ablation studies confirm the effectiveness of the proposed interframe projection consistency constraints and intraframe loop constraints.
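The two constraints summarized above can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's implementation: `loop_closure_loss` assumes the network predicts each bone vector (real and virtual) independently, so the vectors traversed head-to-tail around a loop need not sum to zero; `projection_consistency_loss` assumes a simple pinhole camera with hypothetical intrinsics (`f`, `cx`, `cy`) and compares predicted against observed 2-D joint displacements between consecutive frames.

```python
import numpy as np

def project(joints_3d, f=1000.0, cx=500.0, cy=500.0):
    """Pinhole projection of (N, 3) camera-space joints to (N, 2) pixels.
    Intrinsics are hypothetical placeholder values."""
    x, y, z = joints_3d[:, 0], joints_3d[:, 1], joints_3d[:, 2]
    return np.stack([f * x / z + cx, f * y / z + cy], axis=1)

def loop_closure_loss(bone_vectors):
    """Intraframe loop constraint (sketch): bone_vectors is an (M, 3) array
    of independently predicted 3-D bone vectors (real and virtual) that
    traverse a closed loop head-to-tail. Consistent predictions sum to the
    zero vector; the loss penalizes the squared residual."""
    residual = np.sum(bone_vectors, axis=0)
    return float(np.sum(residual ** 2))

def projection_consistency_loss(pred_3d_t, pred_3d_t1, obs_2d_t, obs_2d_t1):
    """Interframe projection consistency (sketch): the 2-D displacement of
    the projected predicted 3-D joints between frames t and t+1 should match
    the observed 2-D displacement captured by the camera."""
    pred_disp = project(pred_3d_t1) - project(pred_3d_t)
    obs_disp = obs_2d_t1 - obs_2d_t
    return float(np.mean(np.sum((pred_disp - obs_disp) ** 2, axis=1)))
```

For example, two real bones (hip→knee, knee→ankle) plus one virtual bone (ankle→hip) form a loop whose predicted vectors should cancel; any nonzero residual feeds back into the learned bone lengths.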