AnimPortrait3D

Text-based Animatable 3D Avatars
with Morphable Model Alignment

ACM SIGGRAPH 2025 (Conference Track)
ETH Zürich · State Key Lab of CAD&CG, Zhejiang University

AnimPortrait3D creates animatable 3D avatars from text descriptions, achieving realistic appearance and geometry while maintaining accurate alignment with the underlying parametric mesh. Prompt: "A 40-year-old gentleman with a chiseled jawline, slightly graying hair slicked back into a neat undercut, dressed in a navy pinstriped suit, a red tie complementing his fair complexion - confidently walking through a bustling city street".

Abstract

The generation of high-quality, animatable 3D head avatars from text has enormous potential in content creation applications such as games, movies, and embodied virtual assistants. Current text-to-3D generation methods typically combine parametric head models with 2D diffusion models using score distillation sampling to produce 3D-consistent results. However, they struggle to synthesize realistic details and suffer from misalignments between the appearance and the driving parametric model, resulting in unnatural animation results. We discovered that these limitations stem from ambiguities in the 2D diffusion predictions during 3D avatar distillation, specifically: (i) the avatar's appearance and geometry are underconstrained by the text input, and (ii) the semantic alignment between the predictions and the parametric head model is insufficient because the diffusion model is unaware of the parametric model.

In this work, we propose a novel framework, AnimPortrait3D, for text-based generation of realistic, animatable 3D Gaussian Splatting (3DGS) avatars with morphable model alignment, and introduce two key strategies to address these challenges. First, we tackle appearance and geometry ambiguities by utilizing prior information from a pretrained text-to-3D model to initialize a 3D avatar with robust appearance, geometry, and rigging relationships to the morphable model. Second, we refine the initial 3D avatar for dynamic expressions using a ControlNet that is conditioned on semantic and normal maps of the morphable model to ensure accurate alignment. As a result, our method outperforms existing approaches in terms of synthesis quality, alignment, and animation fidelity. Our experiments show that the proposed method advances the state of the art in text-based, animatable 3D head avatar generation.
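As background, score distillation sampling (SDS), referenced above, optimizes a 3D representation by back-propagating the 2D diffusion model's denoising residual through the renderer. In its standard form (given here as a general reference, not this paper's exact objective), the gradient is

    \nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\, \frac{\partial x}{\partial \theta} \,\right],

where x is a rendering of the avatar with parameters θ, x_t is its noised version at diffusion timestep t, y is the conditioning signal (the text prompt, and during our refinement stage also the morphable model's normal and semantic maps), ε̂_φ is the diffusion model's noise prediction, and w(t) is a timestep-dependent weighting.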

Video

Pipeline

Overview of AnimPortrait3D. Given an input text, the 3D Avatar Initialization stage generates a well-defined initial avatar that provides appearance and geometry prior information, and is rigged to SMPL-X for animation. During the Dynamic Optimization stage, we optimize the avatar for dynamic poses and expressions using a 2D diffusion model: we first pre-train the eye and mouth regions, then optimize the full avatar, and finally apply a refinement strategy to produce the final result. AnimPortrait3D is able to generate avatars with diverse appearances, ethnicities, and ages.
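The following is a minimal, self-contained sketch of what a dynamic-optimization loop of this kind can look like. All helper names, tensor shapes, and hyperparameters below are illustrative placeholders, not the released implementation; in the real pipeline the placeholders correspond to the rigged 3DGS avatar, the SMPL-X morphable model, and the ControlNet-guided diffusion model described above.

    import torch

    def sample_expression_and_pose():
        # Placeholder: random SMPL-X expression coefficients and jaw pose.
        return torch.randn(100), torch.randn(3)

    def render_avatar(gaussian_attrs, expression, jaw_pose):
        # Placeholder for the differentiable Gaussian-splatting renderer that
        # poses the Gaussians via their rigging to the SMPL-X mesh.
        return torch.sigmoid(gaussian_attrs.mean()) * torch.ones(1, 3, 512, 512)

    def guidance_residual(image, prompt, normal_map, seg_map):
        # Placeholder for the diffusion model's noise residual, conditioned on
        # the morphable model's normal and semantic maps for alignment.
        return torch.randn_like(image)

    gaussian_attrs = torch.randn(10000, 14, requires_grad=True)  # toy Gaussian attributes
    optimizer = torch.optim.Adam([gaussian_attrs], lr=1e-2)

    for step in range(1000):
        expression, jaw_pose = sample_expression_and_pose()
        image = render_avatar(gaussian_attrs, expression, jaw_pose)
        normal_map = torch.zeros(1, 3, 512, 512)   # rendered from the posed SMPL-X mesh
        seg_map = torch.zeros(1, 3, 512, 512)      # semantic map of the posed SMPL-X mesh
        residual = guidance_residual(image, "input text prompt", normal_map, seg_map)
        loss = (residual.detach() * image).sum()   # SDS-style surrogate loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()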


Comparison

Qualitative comparison with state-of-the-art approaches: HeadStudio, TADA, HumanGaussian, PortraitGen, GPAvatar, GAGAvatar, and our method. Most methods take a text prompt as input (shown at the top), while GPAvatar and GAGAvatar take a reference image as input. The reference images are sourced from video data in the VFHQ dataset.

Resources

Gallery of Results

We provide a gallery of results generated by AnimPortrait3D on a diverse set of text prompts. Each result is accompanied by the input text prompt and the generated animatable 3D avatar. The preview images and the full results can be found on Hugging Face.


Pre-trained ControlNet

This ControlNet can generate high-quality RGB images for the facial, mouth, and eye regions, leveraging their respective conditional inputs (text, normal map, and segmentation map). The pre-trained model can be found on Hugging Face.
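Below is a minimal sketch of loading such a checkpoint with the Hugging Face diffusers library. The repository id, the base diffusion model, and the way the normal and segmentation maps are packed into a single conditioning image are assumptions for illustration only; please refer to the released code for the exact inference procedure.

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from PIL import Image

    controlnet = ControlNetModel.from_pretrained(
        "path/to/AnimPortrait3D-controlnet",      # placeholder repository id
        torch_dtype=torch.float16,
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",         # assumed base model
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Normal and segmentation maps rendered from the posed SMPL-X head mesh;
    # how they are combined into the conditioning image is an assumption here.
    condition = Image.open("normal_and_segmentation.png").convert("RGB")

    image = pipe(
        prompt="a close-up of the mouth region of a 40-year-old gentleman",
        image=condition,
        num_inference_steps=30,
    ).images[0]
    image.save("controlnet_sample.png")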


Reconstructed Face Motions for Animation

We provide motion sequences reconstructed from the VFHQ dataset using VHAP; please refer to Google Drive for download.
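A minimal sketch of inspecting and iterating over such a sequence is given below. The file name and parameter keys are hypothetical; check the files on Google Drive and the GitHub repository for the actual format.

    import numpy as np

    motion = np.load("motion_sequence.npz")   # placeholder file name
    print(motion.files)                       # inspect which parameters are stored

    num_frames = len(motion[motion.files[0]])
    for frame_idx in range(num_frames):
        # Typical per-frame contents are expression coefficients plus jaw/neck/eye
        # poses that drive the avatar's underlying morphable model.
        frame_params = {key: motion[key][frame_idx] for key in motion.files}
        # avatar.animate(**frame_params)      # hypothetical animation call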

Interactive Rendering

We provide an interactive user interface for animating our generated avatars using motion sequences or custom parameters, with real-time visualization of the underlying mesh. For more details, please visit our GitHub repository.

Related Links

Portrait3D is a neural rendering-based text-to-3D framework that uses a joint geometry-appearance prior to achieve high-quality static 3D portrait generation from text input. We use Portrait3D to provide the initial 3D avatar for AnimPortrait3D.

What is new compared to Portrait3D?

  1. Higher quality appearance.
  2. More realistic geometry.
  3. Capabilities for dynamic expressions.

BibTeX

If you find this project helpful to your research, please consider citing:
@article{AnimPortrait3D_sig25,
      author = {Wu, Yiqian and Prinzler, Malte and Jin, Xiaogang and Tang, Siyu},
      title = {Text-based Animatable 3D Avatars with Morphable Model Alignment},
      year = {2025}, 
      isbn = {9798400715402}, 
      publisher = {Association for Computing Machinery},
      address = {New York, NY, USA},
      url = {https://doi.org/10.1145/3721238.3730680},
      doi = {10.1145/3721238.3730680},
      articleno = {},
      numpages = {11},
      location = {Vancouver, BC, Canada},
      series = {SIGGRAPH '25}
}