1Tsinghua Shenzhen International Graduate School, Tsinghua University
2Tencent AI Lab
3The Hong Kong University of Science and Technology
*Corresponding author
We propose UV Gaussians, which model the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures. Rather than optimizing the properties of Gaussian points in 3D space, we utilize the embedding of the UV map to learn Gaussian textures in 2D space, leveraging the capabilities of powerful 2D networks for feature extraction. Additionally, through an independent Mesh network, we optimize pose-dependent geometric deformations, which guide Gaussian rendering and significantly enhance rendering quality. We collect and process a new human motion dataset that includes multi-view images, scanned models, parametric model registrations, and corresponding texture maps. Experimental results demonstrate that our method achieves state-of-the-art novel view and novel pose synthesis.
Overview of our method, which comprises three primary modules: a Mesh U-Net for learning pose-dependent mesh deformations, a Gaussian U-Net for learning pose-dependent Gaussian textures, and a Mesh-Guided 3D Gaussian Animation module that animates the Gaussians guided by the mesh.
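To make the data flow between these modules concrete, below is a minimal PyTorch sketch of the three-part layout described above. The module names (TinyUNet, UVGaussianAvatar), channel counts, the per-texel Gaussian parameterization, and the UV-sampling step are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of the UV-space pipeline: two small U-Nets over pose-conditioned
# UV maps (mesh deformation and Gaussian textures), plus UV sampling that anchors
# the Gaussians on the deformed surface. All names and channel sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(cin, cout):
    # Two 3x3 convolutions with ReLU, the basic unit of a small U-Net.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    # One-level encoder/decoder U-Net operating on UV-space feature maps.
    def __init__(self, cin, cout, width=32):
        super().__init__()
        self.enc = conv_block(cin, width)
        self.down = conv_block(width, width * 2)
        self.up = conv_block(width * 2 + width, width)
        self.head = nn.Conv2d(width, cout, 1)

    def forward(self, x):
        e = self.enc(x)
        d = self.down(F.max_pool2d(e, 2))
        d = F.interpolate(d, scale_factor=2, mode="bilinear", align_corners=False)
        return self.head(self.up(torch.cat([d, e], dim=1)))


class UVGaussianAvatar(nn.Module):
    # Mesh U-Net: pose-conditioned UV map -> per-texel vertex offsets (3 channels).
    # Gaussian U-Net: pose-conditioned UV map -> per-texel Gaussian textures,
    #   here 3 offset + 4 rotation (quaternion) + 3 scale + 1 opacity + 3 color = 14.
    def __init__(self, pose_channels=6):
        super().__init__()
        self.mesh_unet = TinyUNet(pose_channels, 3)
        self.gaussian_unet = TinyUNet(pose_channels, 14)

    def forward(self, pose_uv_map, base_position_uv, vertex_uv):
        # pose_uv_map:      (B, C, H, W) pose features rasterized into UV space
        # base_position_uv: (B, 3, H, W) canonical surface positions in UV space
        # vertex_uv:        (B, V, 2) vertex UV coordinates in [-1, 1]
        offsets_uv = self.mesh_unet(pose_uv_map)        # pose-dependent deformation
        gaussians_uv = self.gaussian_unet(pose_uv_map)  # Gaussian textures

        # Deformed surface positions serve as anchors that guide the Gaussians.
        guided_position_uv = base_position_uv + offsets_uv

        # Sample per-vertex deformation at the vertices' UV coordinates.
        grid = vertex_uv.unsqueeze(2)                   # (B, V, 1, 2)
        vertex_offsets = F.grid_sample(
            offsets_uv, grid, align_corners=False).squeeze(-1).permute(0, 2, 1)

        return guided_position_uv, gaussians_uv, vertex_offsets


if __name__ == "__main__":
    model = UVGaussianAvatar()
    pose_uv = torch.randn(1, 6, 128, 128)
    base_pos = torch.randn(1, 3, 128, 128)
    vert_uv = torch.rand(1, 6890, 2) * 2 - 1            # e.g. an SMPL-like vertex count
    positions, gaussians, offsets = model(pose_uv, base_pos, vert_uv)
    print(positions.shape, gaussians.shape, offsets.shape)
```

The key design point this sketch illustrates is that all learnable Gaussian and deformation quantities live as 2D images in UV space, so standard 2D convolutional networks can be applied before the results are mapped back onto the mesh surface for animation and splatting.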
@article{jiang2024uv,
title={UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling},
author={Jiang, Yujiao and Liao, Qingmin and Li, Xiaoyu and Ma, Li and Zhang, Qi and Zhang, Chaopeng and Lu, Zongqing and Shan, Ying},
journal={arXiv preprint arXiv:2403.11589},
year={2024}
}