AvatarOne: Monocular 3D Human Animation

Abstract

Reconstructing realistic human avatars from monocular videos is a challenging task that demands intricate modeling of 3D surface geometry and articulation. In this paper, we introduce a comprehensive approach that synergizes three pivotal components: (1) a Signed Distance Field (SDF) representation with volume rendering and grid-based ray sampling that prunes rays through empty space, enabling efficient 3D reconstruction; (2) faster 3D surface reconstruction through a warmup stage for human surfaces, which ensures detailed modeling of body limbs; and (3) temporally consistent, subject-specific forward canonical skinning, which helps retain correspondences across frames. All components can be trained end-to-end in under 30 minutes. Leveraging the warmup stage and grid-based ray marching, along with a faster voxel-based correspondence search, our model streamlines the computational demands of the problem. We further experiment with different sampling representations to improve ray radiance approximations and obtain a floater-free surface. Through rigorous evaluation, we demonstrate that our method is on par with current techniques while offering novel insights and avenues for future research in 3D avatar modeling. This work presents a fast and robust solution for both surface modeling and novel-view animation.
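The grid-based ray sampling mentioned above can be illustrated with a minimal sketch: a coarse boolean occupancy grid marks which cells may contain the surface, and samples along a ray falling in empty cells are discarded before any expensive SDF or radiance evaluation. This is a hypothetical toy implementation for intuition only, not the paper's actual code; the grid resolution, threshold policy, and function names are all assumptions.

```python
import numpy as np

# Hypothetical coarse occupancy grid over the unit cube: True = cell may
# contain surface (e.g. |SDF| below a threshold), False = known empty space.
GRID_RES = 32
occupancy = np.zeros((GRID_RES,) * 3, dtype=bool)
occupancy[12:20, 12:20, 12:20] = True  # toy "body" region in the center

def prune_ray_samples(origin, direction, near=0.0, far=1.0, n_samples=64):
    """Sample points along a ray, keeping only those in occupied cells."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction            # (n_samples, 3)
    idx = np.clip((points * GRID_RES).astype(int), 0, GRID_RES - 1)
    keep = occupancy[idx[:, 0], idx[:, 1], idx[:, 2]]   # boolean mask
    return points[keep], t[keep]

# A ray through the occupied center keeps only the samples near the surface;
# a ray that stays in empty space keeps none and can be skipped entirely.
pts, ts = prune_ray_samples(np.array([0.0, 0.5, 0.5]),
                            np.array([1.0, 0.0, 0.0]))
```

In practice such a grid is updated periodically from the current SDF during training, so the fraction of evaluated samples shrinks as the reconstruction converges.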

Publication
Winter Conference on Applications of Computer Vision (WACV)

Toronto Intelligent Systems Lab Co-authors

Yash Kant
PhD Student

I enjoy talking to people and building (hopefully useful) things together. :)

Igor Gilitschenski
Assistant Professor