Abstract:
Human mesh recovery from a single image has achieved rapid progress recently, but many methods suffer from overfitting to image appearance, since training data with accurate 3D annotations are collected in controlled settings with monotonous backgrounds or simple clothing. To tackle this problem, some methods regress human mesh vertices from poses. However, the mesh topology has not been well exploited, and artifacts are often generated. In this paper, we aim to find an efficient, low-cost solution to human mesh reconstruction. To this end, we propose a Progressive Quadric Graph Convolutional Network (PQ-GCN) and design a simple and fast method for 3D human mesh recovery from a single in-the-wild image. Specifically, we apply quadric-based surface simplification to human meshes and design a progressive graph convolutional network, accompanied by mesh feature up-sampling, to exploit the mesh topology. We carry out a series of studies to validate our method. The results show that our method achieves superior performance on a challenging in-the-wild dataset while using 66% fewer parameters than the existing method Pose2Mesh. Artifacts are also eliminated and better visual quality is obtained without any further post-processing or model fitting. Moreover, by adding a decoder head, the recovery can be stopped at an earlier stage, greatly reducing the computational complexity.
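As a rough illustration of the progressive graph-convolution-with-upsampling idea summarized above (a minimal sketch, not the paper's actual architecture), the following PyTorch snippet refines per-vertex features on a simplified mesh and then up-samples them to a denser mesh level via a fixed coarse-to-fine interpolation matrix. The vertex counts, edge list, feature dimension, and up-sampling weights are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: X' = act(A_hat @ X @ W), where A_hat is a
    symmetrically normalized mesh adjacency matrix with self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()

    def forward(self, x, a_hat):
        # x: (V, in_dim) vertex features, a_hat: (V, V) normalized adjacency
        return self.act(self.linear(a_hat @ x))

class ProgressiveBlock(nn.Module):
    """GCN refinement on a coarse (simplified) mesh followed by feature
    up-sampling to the next, denser mesh level via a fixed matrix `up`."""
    def __init__(self, dim):
        super().__init__()
        self.gcn = GCNLayer(dim, dim)

    def forward(self, x, a_hat, up):
        x = self.gcn(x, a_hat)   # refine features on the current mesh level
        return up @ x            # (V_fine, V_coarse) @ (V_coarse, dim)

def normalized_adjacency(edges, num_verts):
    """D^{-1/2} (A + I) D^{-1/2} for an undirected edge list (toy, dense)."""
    a = torch.eye(num_verts)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

# Toy example: a 4-vertex coarse mesh up-sampled to 6 vertices.
coarse_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # placeholder topology
a_hat = normalized_adjacency(coarse_edges, 4)
up = torch.rand(6, 4)                                     # placeholder coarse-to-fine weights
up = up / up.sum(dim=1, keepdim=True)                     # rows sum to 1 (interpolation)

block = ProgressiveBlock(dim=64)
x_coarse = torch.randn(4, 64)                             # per-vertex features, coarse level
x_fine = block(x_coarse, a_hat, up)
print(x_fine.shape)                                       # torch.Size([6, 64])
```

In the actual method, the coarse-to-fine mapping would be derived from the quadric-based simplification hierarchy rather than random weights, and several such blocks would be stacked to progressively recover the full-resolution mesh.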