BCNet: Learning Body and Cloth Shape from A Single Image

Published 1 Apr 2020 in cs.CV and cs.GR | arXiv:2004.00214v2

Abstract: In this paper, we consider the problem of automatically reconstructing garment and body shapes from a single near-front-view RGB image. To this end, we propose a layered garment representation on top of SMPL and, as a novel step, make the garment's skinning weights independent of the body mesh, which significantly improves the expressive power of our garment model. Compared with existing methods, ours supports more garment categories and recovers more accurate geometry. To train our model, we construct two large-scale datasets with ground-truth body and garment geometries as well as paired color images. Compared with a single mesh or a non-parametric representation, our method achieves more flexible control with separate meshes, making applications such as re-posing, garment transfer, and garment texture mapping possible. Code and some data are available at https://github.com/jby1993/BCNet.

Citations (155)

Summary

  • The paper introduces BCNet, which accurately reconstructs detailed body and garment shapes from a single image using a novel layered garment representation on the SMPL model.
  • The paper leverages an innovative network architecture with a shared skinning weight network and a displacement network that uses graph convolutions to capture fine garment details.
  • The experimental results on two large-scale datasets demonstrate that BCNet outperforms state-of-the-art methods in accuracy and flexibility, enabling advanced virtual try-on and digital garment design.


This paper presents BCNet, a novel approach to the challenging problem of reconstructing both body and garment shapes from a single near-front-view RGB image. This task is particularly relevant for virtual try-on, VR/AR environments, and the entertainment industry, where detailed and accurate digital representations of humans and their clothing are required. Existing methods either fail to accurately recover complex garment geometries or are not flexible enough to handle a variety of garment types. BCNet proposes a layered garment representation on top of the SMPL (Skinned Multi-Person Linear) model and introduces a method for computing garment skinning weights independently of the body mesh, enhancing the model's flexibility and expressive power.
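
To make the layered representation concrete, below is a minimal sketch of linear blend skinning with body and garment as separate meshes driven by the same SMPL joint transforms, where the garment carries its own per-vertex weights rather than weights copied from the body. All names and array shapes here are illustrative assumptions, not taken from the BCNet code base.

    import numpy as np

    NUM_JOINTS = 24  # SMPL uses 24 joints

    def skin_vertices(verts, weights, joint_transforms):
        """Linear blend skinning.

        verts:            (V, 3) rest-pose vertex positions
        weights:          (V, J) per-vertex skinning weights, rows sum to 1
        joint_transforms: (J, 4, 4) transforms of the posed joints
                          relative to the rest pose
        returns:          (V, 3) posed vertex positions
        """
        # Homogeneous coordinates: (V, 4)
        verts_h = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)
        # Blend the 4x4 joint transforms per vertex: (V, 4, 4)
        blended = np.einsum('vj,jab->vab', weights, joint_transforms)
        posed = np.einsum('vab,vb->va', blended, verts_h)
        return posed[:, :3]

    # Body and garment are posed by the SAME joint transforms, but the
    # garment carries independent weights (e.g. regressed by a network),
    # which is what lets a skirt deviate from the underlying leg weights:
    # posed_body    = skin_vertices(body_verts,    body_weights,    transforms)
    # posed_garment = skin_vertices(garment_verts, garment_weights, transforms)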

The paper introduces two new large-scale datasets containing paired RGB images with ground-truth body and garment geometries for model training. Moreover, it proposes a network architecture that includes a shared skinning weight network applicable to diverse garment topologies and a displacement network to capture garment details. The architecture leverages graph convolution networks, conditioned on body shape and pose parameters, to facilitate feature learning across the garment mesh.
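
The following is a minimal, hypothetical sketch of what a graph-convolutional displacement regressor over the garment mesh could look like; it is an illustrative stand-in, not the BCNet architecture, and the layer sizes, adjacency-based convolution, and conditioning scheme are all assumptions.

    import torch
    import torch.nn as nn

    class GraphConv(nn.Module):
        """One x' = relu(A x W) style graph convolution over mesh vertices."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            # x: (V, in_dim) vertex features; adj: (V, V) normalized adjacency
            return torch.relu(self.linear(adj @ x))

    class DisplacementNet(nn.Module):
        """Regresses per-vertex 3D displacements from vertex positions plus
        a global conditioning code (e.g. concatenated shape/pose params)."""
        def __init__(self, cond_dim, hidden=64):
            super().__init__()
            self.gc1 = GraphConv(3 + cond_dim, hidden)
            self.gc2 = GraphConv(hidden, hidden)
            self.out = nn.Linear(hidden, 3)  # per-vertex displacement

        def forward(self, verts, adj, cond):
            # verts: (V, 3); cond: (cond_dim,), broadcast to every vertex
            cond = cond.unsqueeze(0).expand(verts.shape[0], -1)
            x = torch.cat([verts, cond], dim=1)
            x = self.gc1(x, adj)
            x = self.gc2(x, adj)
            return self.out(x)  # added to template vertices before skinning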

Key findings highlight that BCNet supports six garment categories and can reconstruct both tight-fitting and loose garments, such as skirts. This is enabled by a carefully designed pipeline that constructs dressed SMPL body data spanning different garment types and poses. The results demonstrate superior performance in reconstructing garment details compared with existing methods, achieving notable flexibility and accuracy without requiring multi-view input or semantic information. Quantitative evaluation on public datasets shows that BCNet outperforms state-of-the-art methods in terms of mean Euclidean distance between predicted and ground-truth shapes.
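
The reported metric, as described, is the mean Euclidean distance between corresponding predicted and ground-truth vertices. A short sketch follows, under the assumption that the two meshes share vertex order; the paper's exact evaluation protocol (e.g. any alignment step before measuring) may differ.

    import numpy as np

    def mean_vertex_error(pred_verts, gt_verts):
        """pred_verts, gt_verts: (V, 3) arrays with corresponding vertices."""
        return np.linalg.norm(pred_verts - gt_verts, axis=1).mean()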

Practically, BCNet enables advanced operations such as garment transfer between images and re-posing, opening up applications in digital garment design and virtual fitting. Theoretically, the method provides a flexible framework for future research in 3D human digitization, especially for handling complex garment geometries and multi-layered clothing efficiently and accurately.
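
Because the garment is a separate mesh with its own skinning weights, re-posing and garment transfer both reduce to re-running skinning with new joint transforms, while the garment's texture coordinates are untouched. The snippet below repeats the blend-skinning step from the earlier sketch so it stands alone; it remains an illustrative sketch, and the target joint transforms would come from an SMPL implementation, which is not shown here.

    import numpy as np

    def repose_garment(garment_verts, garment_weights, target_joint_transforms):
        """Drape a reconstructed garment under new joint transforms.

        garment_verts:           (V, 3) rest-pose garment vertices
        garment_weights:         (V, J) the garment's own skinning weights
        target_joint_transforms: (J, 4, 4) transforms for the target pose
                                 (or for a different body, for transfer)
        """
        verts_h = np.concatenate(
            [garment_verts, np.ones((len(garment_verts), 1))], axis=1)
        blended = np.einsum('vj,jab->vab', garment_weights,
                            target_joint_transforms)
        return np.einsum('vab,vb->va', blended, verts_h)[:, :3]

    # Garment transfer: use joint transforms derived from a different body's
    # shape and pose. Re-posing: same body, new pose. UV coordinates are
    # per-vertex attributes of the garment mesh and are unaffected by skinning.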

This work contributes significantly to the domain by making the datasets and code publicly available, driving further research in digital human modeling and shape reconstruction. Future developments could expand the supported garment categories, improve the handling of complexities such as multi-layered clothing, or add dynamics to capture garment deformation during motion. The paper's findings lay a solid foundation for more comprehensive models capable of accurately and efficiently capturing the intricate details of clothed humans from a single visual input.
