Learning to Estimate the Body Shape Under Clothing from a Single 3D Scan

This publication appears in: IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
Authors: P. Hu, N. Nourbakhsh, V. Dadarlat and A. Munteanu
Publication Date: Aug. 2020
Abstract: Estimating the 3D human body shape and pose under clothing is important for many applications, including virtual try-on, non-contact body measurement, and avatar creation for virtual reality. Existing body shape estimation methods formulate this task as an optimization problem, fitting a parametric body model to a single dressed-human scan or, for better accuracy, to a sequence of dressed-human meshes. Due to the expensive computation involved, this is impractical for many applications that require fast acquisition, such as gaming and virtual try-on. In this paper, we propose the first learning-based approach to estimate the human body shape under clothing from a single dressed-human scan, dubbed Body PointNet. The proposed Body PointNet operates directly on raw point clouds and predicts the undressed body in a coarse-to-fine manner. Due to the nature of the required data (aligned pairs of dressed scans and undressed bodies, represented as genus-0 manifold meshes, i.e., single-layer surfaces), we face a major challenge: the lack of training data. To address this challenge, we propose a novel method to synthesize dressed-human pseudo-scans and the corresponding ground-truth bodies. A new large-scale dataset, dubbed BUG (Body Under virtual Garments), is presented and employed for the learning task of body shape estimation from 3D dressed-human scans. Comprehensive evaluations show that the proposed Body PointNet outperforms the state-of-the-art methods in terms of both accuracy and running time.
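To make the described setup concrete, below is a minimal sketch, not the authors' released code, of a PointNet-style regressor that maps a raw dressed-human point cloud to parametric body-shape coefficients, in the spirit of the abstract's statement that Body PointNet operates directly on raw point clouds. The class name BodyShapeNet, the 10-coefficient output (SMPL-like shape parameters), and the layer sizes are illustrative assumptions, and the paper's coarse-to-fine refinement stage is omitted.

```python
# Hedged sketch of a PointNet-style body-shape regressor (illustrative only;
# not the Body PointNet architecture from the paper).
import torch
import torch.nn as nn

class BodyShapeNet(nn.Module):
    def __init__(self, n_shape_params: int = 10):  # 10 is an SMPL-like assumption
        super().__init__()
        # Shared per-point MLP (1x1 convolutions), as in the original PointNet.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Regression head mapping the global scan feature to shape coefficients.
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, n_shape_params),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, n_points, 3) raw scan coordinates.
        feat = self.point_mlp(points.transpose(1, 2))  # (batch, 1024, n_points)
        global_feat = feat.max(dim=2).values           # permutation-invariant pooling
        return self.head(global_feat)                  # (batch, n_shape_params)

# Usage: regress shape coefficients for a batch of 2048-point pseudo-scans.
scan = torch.randn(2, 2048, 3)
betas = BodyShapeNet()(scan)
print(betas.shape)  # torch.Size([2, 10])
```

The max-pooling step is what makes such a network order-invariant over the input points, which is why this family of models can consume unstructured scans without meshing or voxelization.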