DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance
Abstract
While recent image-based human animation methods achieve realistic body and facial motion synthesis, critical gaps remain in fine-grained holistic controllability, multi-scale adaptability, and long-term temporal coherence, which limits their expressiveness and robustness. We propose a diffusion transformer (DiT) based framework, DreamActor-M1, with hybrid guidance to overcome these limitations. For motion guidance, our hybrid control signals, which integrate implicit facial representations, 3D head spheres, and 3D body skeletons, achieve robust control of facial expressions and body movements while producing expressive and identity-preserving animations. For scale adaptation, to handle various body poses and image scales ranging from portraits to full-body views, we employ a progressive training strategy using data with varying resolutions and scales. For appearance guidance, we integrate motion patterns from sequential frames with complementary visual references, ensuring long-term temporal coherence for unseen regions during complex movements. Experiments demonstrate that our method outperforms state-of-the-art approaches, delivering expressive results for portrait, upper-body, and full-body generation with robust long-term consistency. Project Page: https://grisoon.github.io/DreamActor-M1/.
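The abstract describes hybrid motion guidance as three control streams (an implicit facial representation, a 3D head sphere, and a 3D body skeleton) fed jointly to a DiT. As a rough illustration of that idea, here is a minimal sketch that packs the three streams into a single sequence of fixed-size conditioning tokens such as a transformer might cross-attend to. All class names, dimensions, and the token-packing scheme are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of hybrid motion guidance in the style of
# DreamActor-M1: three control streams are packed into one token
# sequence. Dimensions and layout are assumptions for illustration.
from dataclasses import dataclass
from typing import List


@dataclass
class HybridControl:
    face_latent: List[float]        # implicit facial representation
    head_sphere: List[float]        # 3D head center + radius: (x, y, z, r)
    body_joints: List[List[float]]  # 3D skeleton joints, each (x, y, z)


def to_condition_tokens(ctrl: HybridControl, dim: int = 4) -> List[List[float]]:
    """Flatten all control signals into tokens of width `dim`,
    zero-padding shorter signals (e.g. 3D joints) to that width."""
    def pad(v: List[float]) -> List[float]:
        return list(v) + [0.0] * (dim - len(v))

    tokens = []
    # chunk the implicit face code into dim-sized tokens
    for i in range(0, len(ctrl.face_latent), dim):
        tokens.append(pad(ctrl.face_latent[i:i + dim]))
    tokens.append(pad(ctrl.head_sphere))        # one token for the head sphere
    for joint in ctrl.body_joints:              # one token per skeleton joint
        tokens.append(pad(joint))
    return tokens


ctrl = HybridControl(
    face_latent=[0.1] * 8,
    head_sphere=[0.0, 0.1, 0.5, 0.2],
    body_joints=[[0.0, 0.0, 0.0], [0.1, 0.2, 0.3]],
)
tokens = to_condition_tokens(ctrl)
print(len(tokens))  # 2 face tokens + 1 head token + 2 joint tokens = 5
```

In a real system each stream would first pass through its own encoder before concatenation; this sketch only shows why unifying the streams into one token sequence gives the denoiser a single conditioning interface.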
Community
I immediately assume no code and no models and I was right
Will those work with M2 chips?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ChatAnyone: Stylized Real-time Portrait Video Generation with Hierarchical Motion Diffusion Model (2025)
- HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation (2025)
- X-Dancer: Expressive Music to Human Dance Video Generation (2025)
- AudCast: Audio-Driven Human Video Generation by Cascaded Diffusion Transformers (2025)
- HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation (2025)
- MVPortrait: Text-Guided Motion and Emotion Control for Multi-view Vivid Portrait Animation (2025)
- Versatile Multimodal Controls for Whole-Body Talking Human Animation (2025)