Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence
Abstract
Motion2Motion is a training-free framework that efficiently transfers animations between characters with different skeletal topologies using sparse bone correspondences.
This work studies the challenge of transferring animations between characters whose skeletal topologies differ substantially. While decades of research have advanced motion retargeting techniques, transferring motions across diverse topologies remains underexplored. The primary obstacle lies in the inherent topological inconsistency between source and target skeletons, which prevents establishing straightforward one-to-one bone correspondences. Moreover, the current lack of large-scale paired motion datasets spanning different topological structures severely constrains the development of data-driven approaches. To address these limitations, we introduce Motion2Motion, a novel, training-free framework. Simple yet effective, Motion2Motion works with only one or a few example motions on the target skeleton, by accessing a sparse set of bone correspondences between the source and target skeletons. Through comprehensive qualitative and quantitative evaluations, we demonstrate that Motion2Motion achieves efficient and reliable performance in both similar-skeleton and cross-species skeleton transfer scenarios. The practical utility of our approach is further evidenced by its successful integration into downstream applications and user interfaces, highlighting its potential for industrial use. Code and data are available at https://lhchen.top/Motion2Motion.
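To make the core idea concrete, here is a minimal sketch of how the sparse bone correspondence described in the abstract might be expressed: a handful of semantically matching bones are paired between a humanoid source and a quadruped target, while topology-specific bones (tail, wings) are left unmapped. All bone names and the `check_correspondence` helper are illustrative assumptions, not the paper's actual data format or API.

```python
# Illustrative sketch only (not the authors' code): a sparse correspondence
# between two skeletons with different topologies. Bone names are hypothetical.

human_bones = ["pelvis", "spine", "chest", "head",
               "left_hip", "left_knee", "left_foot",
               "right_hip", "right_knee", "right_foot"]
dragon_bones = ["hips", "chest", "neck", "head", "tail_1", "tail_2",
                "wing_l", "wing_r", "front_left_paw", "front_right_paw"]

# Only a sparse subset of bones is paired; unmatched bones on either side
# (tail, wings, spine segments, ...) have no direct counterpart and would be
# driven by the example motions available on the target skeleton.
correspondence = {
    "pelvis": "hips",
    "chest": "chest",
    "head": "head",
    "left_foot": "front_left_paw",
    "right_foot": "front_right_paw",
}

def check_correspondence(mapping, src_bones, tgt_bones):
    """Sanity-check that every pair references real bones on both skeletons."""
    for src, tgt in mapping.items():
        assert src in src_bones, f"unknown source bone: {src}"
        assert tgt in tgt_bones, f"unknown target bone: {tgt}"

check_correspondence(correspondence, human_bones, dragon_bones)
print(f"{len(correspondence)}/{len(human_bones)} source bones mapped (sparse)")
```

Note that the mapping deliberately covers only a fraction of either skeleton; this is what distinguishes the sparse-correspondence setting from classical retargeting, which typically assumes a dense one-to-one bone alignment.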
Community
Librarian Bot (automated): the following similar papers were recommended by the Semantic Scholar API.
- X-MoGen: Unified Motion Generation across Humans and Animals (2025)
- AnimaX: Animating the Inanimate in 3D with Joint Video-Pose Diffusion Models (2025)
- Behave Your Motion: Habit-preserved Cross-category Animal Motion Transfer (2025)
- Spatial-Temporal Multi-Scale Quantization for Flexible Motion Generation (2025)
- Puppeteer: Rig and Animate Your 3D Models (2025)
- Geometry Forcing: Marrying Video Diffusion and 3D Representation for Consistent World Modeling (2025)
- X-UniMotion: Animating Human Images with Expressive, Unified and Identity-Agnostic Motion Latents (2025)