Abstract

Computer Graphics 2016: Scanned human body model realistic pose deformation - Shuaiyin Zhu - Hong Kong

Realistic human body modeling is a crucial process in many research applications, including computer animation, computer vision, ergonomic applications, and biometrics. In recent decades, the modelling of dynamic body poses has seen many developments [1-3]. Among them, most design-focused methods [4-5] generate a realistic appearance by deforming 3D characters into different poses. Since design-focused methods are mainly concerned with deformation speed and focus more on global shape than on local details, deformation errors or distortions are expected at the joint areas when the range of movement is large. In light of this, example-based methods [6-7] have been proposed to learn a deformable template model from a large set of scans of one subject in several poses. The pose deformation of example-based methods combines rigid deformation and non-rigid deformation, and is thus able to achieve a natural skin appearance. However, these methods can only deform a parametric template model into different poses. For arbitrary models, such as real human scanned models, it is very challenging to deform the models into different poses rapidly and realistically. In this paper, we propose a rapid skeleton embedding and deformation method for scanned human models. We first develop algorithms to automatically recognize important body features from a scanned model (i.e., a scan of a real human subject in the standard pose), from which we construct a detailed framework for the scanned model. The detailed framework enables easy and accurate skin segmentation and skeleton embedding, and is then used to drive the rigid deformation. Next, we train a non-rigid deformation model from a dataset of registered scans. We apply the non-rigid deformation to correct the rigid deformation of the first step, so as to simulate the natural skin appearance of the scanned real subjects in different poses. Experimental work shows that the proposed method can generate realistic pose deformations for real subject scans. The method can be employed by the fashion industry, where accurate size measurements are mandatory, for various applications including fit design analysis.
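The two-stage deformation described above, skeleton-driven rigid deformation followed by a learned non-rigid correction, can be illustrated in outline. The abstract does not specify the implementation; the sketch below assumes linear blend skinning for the rigid stage and a generic linear pose-to-offset map standing in for the trained non-rigid model, with all function and variable names hypothetical.

    import numpy as np

    def rigid_deform(vertices, weights, bone_transforms):
        """Skeleton-driven rigid deformation via linear blend skinning.

        vertices:        (V, 3) rest-pose vertex positions
        weights:         (V, B) skinning weights, rows summing to 1
        bone_transforms: (B, 4, 4) homogeneous bone transforms for the pose
        """
        V = vertices.shape[0]
        homo = np.hstack([vertices, np.ones((V, 1))])        # (V, 4)
        # Position of every vertex under every bone transform: (B, V, 4)
        per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
        # Blend the per-bone positions by the skinning weights: (V, 4)
        blended = np.einsum('vb,bvi->vi', weights, per_bone)
        return blended[:, :3]

    def nonrigid_correct(posed_vertices, pose_params, W, b):
        """Learned non-rigid correction: a linear map from pose parameters
        to per-vertex offsets (a stand-in for the trained model)."""
        offsets = (W @ pose_params + b).reshape(-1, 3)       # (V, 3)
        return posed_vertices + offsets

    # Toy usage: 2 vertices, 2 bones, identity pose transforms.
    verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    wts = np.array([[1.0, 0.0], [0.5, 0.5]])
    xforms = np.stack([np.eye(4), np.eye(4)])
    posed = rigid_deform(verts, wts, xforms)

In this framing, the paper's contribution is automating what the sketch takes as given: recovering the skeleton and skinning weights from body features detected on the scan, and learning the correction from registered scans.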

Human parsing, namely the decomposition of an image of a human subject into semantic body/clothing regions, is vital for general human-centric analysis, and is also an essential process enabling high-level applications, including fashion style recognition and retrieval, human identification, and human behavior analysis [1-3]. The prevailing methods for human parsing using deep neural networks have a number of known drawbacks, e.g., not taking into account the limited capacity of deep learning techniques to delineate visual objects, label confusion, very coarse output boundaries, and so forth. In this paper, we propose a part-detection-based, conditional random fields (CRFs)-embedded deep neural network to address the problem. Firstly, a rough semantic segmentation is conducted using a deep neural network. Secondly, a part detector is trained to produce class-specific scores for human parts and/or clothing item regions. Then, the outputs of the part detector are integrated into the deep neural network in order to optimize the feature learning within the network (one possible fusion is sketched at the end of this abstract). Finally, to sharpen the boundaries and refine the segmentation results, CRFs-based probabilistic graphical modelling is incorporated into the deep neural network. Meanwhile, the outputs of the part detector define class-specific higher-order potentials, which in turn improve the CRFs. We comprehensively evaluate our method on two public datasets. The results demonstrate the effectiveness of our proposed framework compared to state-of-the-art methods.

The pose deformation component of our model is acquired from a set of dense 3D scans of one person in multiple poses. A key aspect of our pose model is that it decouples deformation into a rigid and a non-rigid component. The rigid component of deformation is described in terms of a low degree-of-freedom rigid body skeleton. The non-rigid component captures the remaining deformation, such as the flexing of muscles. In our model, the deformation for a part depends only on the adjacent joints. It is therefore relatively low dimensional, allowing the shape deformation to be learned automatically from limited training data. Our representation also models shape variation across different individuals. This model component can be acquired from a set of 3D scans of different people in different poses. The shape variation is represented using principal component analysis (PCA), which induces a low-dimensional subspace of body shape deformations. Importantly, the model of shape variation is not confounded by deformations due to pose, as those are accounted for separately. The two parts of the model form a single unified framework for the shape variability of individuals. The framework can be used to generate a complete surface mesh given only a succinct specification of the desired shape: the angles of the human skeleton and the eigen-coefficients describing the body shape. The model has many advantages over previous deformable body models used in computer vision. In particular, since it is learned from a database of human shapes, it captures the correlations between the sizes and shapes of different body parts. It also captures a wide range of human forms and shape deformations due to pose. Modeling how the shape varies with pose reduces the problems of other approaches related to modeling the body shape at the joints between parts.
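The PCA shape space described in the last paragraph admits a compact sketch. Assuming the scans have already been brought into full vertex correspondence (the registration step itself is outside the scope of this abstract), learning the subspace and generating a body from eigen-coefficients look roughly as follows; the names and shapes are illustrative, not taken from the paper.

    import numpy as np

    def learn_shape_space(meshes, k):
        """Learn a k-dimensional PCA shape space from registered meshes.

        meshes: (N, V, 3) registered scans with vertex correspondence
        Returns the mean shape and the top-k principal directions.
        """
        N = meshes.shape[0]
        X = meshes.reshape(N, -1)                  # flatten to (N, 3V)
        mean = X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:k]                        # (3V,), (k, 3V)

    def reconstruct(mean, basis, coeffs):
        """Generate a body shape from eigen-coefficients."""
        flat = mean + coeffs @ basis
        return flat.reshape(-1, 3)

    # Toy usage: 5 random "scans" of a 4-vertex mesh, 2 components.
    rng = np.random.default_rng(0)
    scans = rng.normal(size=(5, 4, 3))
    mean, basis = learn_shape_space(scans, k=2)
    body = reconstruct(mean, basis, np.array([0.5, -0.2]))

Because pose-induced deformation is handled by the separate pose component, the coefficients in this subspace vary only with identity, which is what keeps the two parts of the model decoupled.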
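Returning to the parsing framework in the first paragraph: the abstract states that the part detector's class-specific scores are integrated into the network before the CRF refinement, but does not specify the operator. One common way to realize such a fusion, shown here purely as an assumed sketch, is to add detector score maps to the segmentation logits before the per-pixel softmax.

    import numpy as np

    def fuse_detector_scores(seg_logits, det_scores, alpha=1.0):
        """Fuse class-specific part-detector scores into parsing logits.

        seg_logits: (C, H, W) per-class logits from the parsing network
        det_scores: (C, H, W) detector score maps on the image grid
        alpha:      fusion weight (an assumed hyperparameter)
        """
        fused = seg_logits + alpha * det_scores
        # Per-pixel softmax over classes gives the refined label
        # distribution, which the CRF stage would then sharpen.
        e = np.exp(fused - fused.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)

    # Toy usage: 3 classes on a 2x2 grid.
    probs = fuse_detector_scores(np.zeros((3, 2, 2)), np.ones((3, 2, 2)))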


Author(s): Shuaiyin Zhu
