InterActHuman: Multi-Concept Human Animation with Layout-Aligned Audio Conditions
Abstract
A new framework enables precise, per-identity control of multiple concepts in end-to-end human animation by enforcing region-specific binding of multi-modal conditions.
End-to-end human animation with rich multi-modal conditions, e.g., text, image, and audio, has achieved remarkable advancements in recent years. However, most existing methods can only animate a single subject and inject conditions in a global manner, ignoring scenarios in which multiple concepts appear in the same video with rich human-human and human-object interactions. This global assumption prevents precise, per-identity control of multiple concepts, including humans and objects, and therefore hinders applications. In this work, we discard the single-entity assumption and introduce a novel framework that enforces strong, region-specific binding of conditions from each modality to each identity's spatiotemporal footprint. Given reference images of multiple concepts, our method automatically infers layout information by leveraging a mask predictor to match appearance cues between the denoised video and each reference appearance. Furthermore, we inject each local audio condition into its corresponding region to ensure layout-aligned modality matching in an iterative manner. This design enables high-quality generation of controllable multi-concept human-centric videos. Empirical results and ablation studies validate the effectiveness of our explicit layout control for multi-modal conditions compared to implicit counterparts and other existing methods.
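To make the core idea of layout-aligned audio injection concrete, below is a minimal sketch (not the authors' released code) of how a per-identity audio condition could be injected via cross-attention and then gated by that identity's predicted spatiotemporal mask, so each audio stream only affects its own region. All module names, tensor shapes, and the mask-predictor interface here are assumptions for illustration.

```python
import torch
import torch.nn as nn


class MaskGatedAudioInjection(nn.Module):
    """Illustrative layout-aligned audio injection: one cross-attention pass
    per identity, with the update gated by that identity's layout mask."""

    def __init__(self, video_dim: int, audio_dim: int, num_heads: int = 8):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, video_dim)
        self.cross_attn = nn.MultiheadAttention(video_dim, num_heads, batch_first=True)

    def forward(self, video_tokens, audio_feats, masks):
        # video_tokens: (B, N, D)    flattened spatiotemporal video latents
        # audio_feats:  (B, K, T, Da) per-identity audio features (K identities)
        # masks:        (B, K, N)    soft layout masks in [0, 1], one per identity,
        #                            e.g. from a mask predictor that matches the
        #                            denoised video against reference appearances
        out = video_tokens
        num_ids = audio_feats.shape[1]
        for k in range(num_ids):
            aud_k = self.audio_proj(audio_feats[:, k])       # (B, T, D)
            attn_k, _ = self.cross_attn(out, aud_k, aud_k)   # (B, N, D)
            # Gate the audio-driven update by identity k's region only, so each
            # audio condition binds to a single spatiotemporal footprint.
            out = out + masks[:, k].unsqueeze(-1) * attn_k
        return out


# Hypothetical usage inside a denoising step:
# video_tokens = injector(video_tokens, per_identity_audio, predicted_masks)
```

In an iterative scheme such as the one described above, the masks would be re-predicted from the progressively denoised video at each step, keeping the audio-to-region binding aligned with the evolving layout.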
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- HunyuanVideo-HOMA: Generic Human-Object Interaction in Multimodal Driven Human Animation (2025)
- Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation (2025)
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation (2025)
- DyST-XL: Dynamic Layout Planning and Content Control for Compositional Text-to-Video Generation (2025)
- AnimeShooter: A Multi-Shot Animation Dataset for Reference-Guided Video Generation (2025)
- HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters (2025)
- AnimateAnywhere: Rouse the Background in Human Image Animation (2025)