Collaborative Regression of Expressive Bodies using Moderation
Abstract:
Recovering expressive humans from images is essential for understanding human behavior. Methods that estimate 3D bodies, faces, or hands have progressed significantly, yet separately. Face methods recover accurate 3D shape and geometric details, but need a tight crop and struggle with extreme views and low resolution. Whole-body methods are robust to a wide range of poses and resolutions, but provide only a rough 3D face shape without details like wrinkles. To get the best of both worlds, we introduce PIXIE, which produces animatable, whole-body 3D avatars with realistic facial detail, from a single image. For this, PIXIE uses two key observations. First, existing work combines independent estimates from body, face, and hand experts, by trusting them equally. PIXIE introduces a novel moderator that merges the features of the experts, weighted by their confidence. All part experts can contribute to the whole, using SMPL-X’s shared shape space across all body parts. Second, human shape is highly correlated with gender, but existing work ignores this. We label training images as male, female, or non-binary, and train PIXIE to infer “gendered” 3D body shapes with a novel shape loss. In addition to 3D body pose and shape parameters, PIXIE estimates expression, illumination, albedo and 3D facial surface displacements. Quantitative and qualitative evaluation shows that PIXIE estimates more accurate whole-body shape and detailed face shape than the state of the art.
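The moderator described above merges the body branch's features with each part expert's features, weighted by a learned confidence. The snippet below is a minimal sketch of such confidence-weighted fusion, not PIXIE's actual implementation: the function name `moderate`, the use of a single scalar gate logit, and the feature shapes are all illustrative assumptions (in the paper, the gate is produced by a small network over both feature sets).

```python
import numpy as np

def moderate(body_feat, expert_feat, gate_logit):
    """Fuse a body-branch feature with a part-expert feature.

    A hypothetical sketch of confidence-weighted fusion: a learned gate
    (here a raw logit, normally the output of a small network that sees
    both features) decides how much to trust the expert relative to the
    body branch.
    """
    w = 1.0 / (1.0 + np.exp(-gate_logit))       # sigmoid confidence in [0, 1]
    return w * expert_feat + (1.0 - w) * body_feat

# Equal trust: gate logit of 0 gives w = 0.5, i.e. the average of both.
fused = moderate(np.zeros(4), np.ones(4), gate_logit=0.0)
```

With a large positive gate logit the fused feature approaches the expert's estimate; with a large negative one it falls back to the body branch, which is the desired behavior when, e.g., the face crop is too small for the face expert to be reliable.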
Referencing PIXIE
@conference{PIXIE:3DV:2021,
  title = {Collaborative Regression of Expressive Bodies using Moderation},
  author = {Feng, Yao and Choutas, Vasileios and Bolkart, Timo and Tzionas, Dimitrios and Black, Michael},
  booktitle = {International Conference on 3D Vision (3DV)},
  pages = {792--804},
  month = dec,
  year = {2021},
  doi = {10.1109/3DV53792.2021.00088},
  month_numeric = {12}
}
Acknowledgments
We thank Victoria Fernandez Abrevaya, Yinghao Huang, Yuliang Xiu, and Radek Danecek for discussions, and Priyanka Patel for the AGORA experiments.
This work was partially supported by the Max Planck ETH Center for Learning Systems.
Contact
For questions, please contact pixie@tue.mpg.de.
For commercial licensing, please contact sales@meshcapade.com.